Vulnerabilities related to NetApp - ONTAP Select Deploy administration utility
var-201901-0011
Vulnerability from variot
In OpenSSH 7.9, due to accepting and displaying arbitrary stderr output from the server, a malicious server (or Man-in-The-Middle attacker) can manipulate the client output, for example using ANSI control codes to hide additional files being transferred. OpenSSH contains an access control vulnerability; information may be obtained and information may be altered. OpenSSH is prone to a security-bypass vulnerability. Successfully exploiting this issue may allow attackers to bypass certain security restrictions and perform unauthorized actions by conducting a man-in-the-middle attack. This may lead to other attacks. OpenSSH 7.9 is vulnerable; other versions may also be affected.

Gentoo Linux Security Advisory GLSA 201903-16
https://security.gentoo.org/
Severity: Normal
Title: OpenSSH: Multiple vulnerabilities
Date: March 20, 2019
Bugs: #675520, #675522
ID: 201903-16
Synopsis
Multiple vulnerabilities have been found in OpenSSH, the worst of which could allow a remote attacker to gain unauthorized access. Please review the CVE identifiers referenced below for details.
Workaround
There is no known workaround at this time.
Resolution
All OpenSSH users should upgrade to the latest version:
# emerge --sync
# emerge --ask --oneshot --verbose ">=net-misc/openssh-7.9_p1-r4"
References
[ 1 ] CVE-2018-20685
      https://nvd.nist.gov/vuln/detail/CVE-2018-20685
[ 2 ] CVE-2019-6109
      https://nvd.nist.gov/vuln/detail/CVE-2019-6109
[ 3 ] CVE-2019-6110
      https://nvd.nist.gov/vuln/detail/CVE-2019-6110
[ 4 ] CVE-2019-6111
      https://nvd.nist.gov/vuln/detail/CVE-2019-6111
Availability
This GLSA and any updates to it are available for viewing at the Gentoo Security Website:
https://security.gentoo.org/glsa/201903-16
Concerns?
Security is a primary focus of Gentoo Linux and ensuring the confidentiality and security of our users' machines is of utmost importance to us. Any security concerns should be addressed to security@gentoo.org or alternatively, you may file a bug at https://bugs.gentoo.org.
License
Copyright 2019 Gentoo Foundation, Inc; referenced text belongs to its owner(s).
The contents of this document are licensed under the Creative Commons - Attribution / Share Alike license.
https://creativecommons.org/licenses/by-sa/2.5

scp client multiple vulnerabilities
===================================
The latest version of this advisory is available at:
https://sintonen.fi/advisories/scp-client-multiple-vulnerabilities.txt
Overview
SCP clients from multiple vendors are susceptible to a malicious scp server performing unauthorized changes to target directory and/or client output manipulation.
Description
Many scp clients fail to verify whether the objects returned by the scp server match those they asked for. This issue dates back to 1983 and rcp, on which scp is based. A separate flaw in the client allows the target directory attributes to be changed arbitrarily. Finally, two vulnerabilities in clients may allow the server to spoof the client output.
Impact
A malicious scp server can write arbitrary files to the scp target directory, change the target directory permissions, and spoof the client output.
Details
The discovered vulnerabilities, described in more detail below, enable the attack described here in brief.
1. The attacker-controlled server or Man-in-the-Middle(*) attack drops a .bash_aliases file into the victim's home directory when the victim performs an scp operation from the server. The transfer of the extra files is hidden by sending ANSI control sequences via stderr. For example:
    user@local:~$ scp user@remote:readme.txt .
    readme.txt                100%  494     1.6KB/s   00:00
    user@local:~$
2. Once the victim launches a new shell, the malicious commands in .bash_aliases get executed.
*) Man-in-the-Middle attack does require the victim to accept the wrong host fingerprint.
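The stderr-hiding trick described above can be illustrated with a short, hypothetical sketch (not the actual exploit code): a terminal interprets CSI sequences such as cursor-up and erase-line, so a progress line already printed for an unexpected extra file can be wiped before the victim reads the output.

```python
import sys

# Hypothetical illustration of the hiding technique (not exploit code):
# ANSI CSI sequences sent over stderr can erase a line the terminal has
# already displayed, e.g. the progress line for an unexpected extra file.
CUU = "\x1b[1A"   # CSI: cursor up one line
EL2 = "\x1b[2K"   # CSI: erase the entire current line

def hide_previous_line() -> str:
    """Control sequence that moves the cursor up, wipes that line, and
    returns the cursor to column 1 -- enough to make a just-printed
    progress line disappear from the victim's terminal."""
    return CUU + EL2 + "\r"

if __name__ == "__main__":
    # Simulate what a victim's terminal would receive on stderr:
    sys.stderr.write(".bash_aliases          100%   52     0.1KB/s   00:00\n")
    sys.stderr.write(hide_previous_line())   # the evidence is wiped
    sys.stderr.write("readme.txt             100%  494     1.6KB/s   00:00\n")
```

On a real terminal only the readme.txt line remains visible, matching the example transcript above.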
Vulnerabilities
1. CWE-20: scp client improper directory name validation [CVE-2018-20685]
The scp client allows the server to modify permissions of the target directory by using an empty ("D0777 0 \n") or dot ("D0777 0 .\n") directory name.
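To make the empty/dot trick concrete: in the scp wire protocol the server announces a directory with a line such as "D0777 0 name\n" (octal mode, a size field, then the name). Below is a minimal sketch of parsing that directive together with the name check vulnerable clients lacked; the function and regex are illustrative assumptions, not OpenSSH's actual code.

```python
import re

# Sketch of the scp protocol "D" (directory) directive, plus the check that
# CVE-2018-20685 showed was missing: an empty or "." name makes a vulnerable
# client apply the mode to the current target directory instead of a new one.
D_RE = re.compile(r"^D([0-7]{4}) (\d+) (.*)\n$")

def parse_directory_directive(line: str) -> tuple[int, str]:
    m = D_RE.match(line)
    if not m:
        raise ValueError("malformed D directive")
    mode, _size, name = int(m.group(1), 8), m.group(2), m.group(3)
    # Hardened check (absent in vulnerable clients):
    if name in ("", ".", "..") or "/" in name:
        raise ValueError(f"refusing unsafe directory name: {name!r}")
    return mode, name

if __name__ == "__main__":
    print(parse_directory_directive("D0755 0 docs\n"))
    for bad in ("D0777 0 \n", "D0777 0 .\n"):
        try:
            parse_directory_directive(bad)
        except ValueError as e:
            print("rejected:", e)
```

With an empty or "." name accepted, the announced mode is applied to the directory the client is already writing into, which is how the server resets the target directory's permissions.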
2. CWE-20: scp client missing received object name validation [CVE-2019-6111]
Due to the scp implementation being derived from 1983 rcp [1], the server chooses which files/directories are sent to the client. However, the scp client performs only cursory validation of the returned object name (only directory traversal attacks are prevented). A malicious scp server can overwrite arbitrary files in the scp client target directory. If a recursive operation (-r) is performed, the server can manipulate subdirectories as well (for example, overwrite .ssh/authorized_keys).
The same vulnerability in WinSCP is known as CVE-2018-20684.

3. CWE-451: scp client spoofing via object name [CVE-2019-6109]

Due to missing character encoding in the progress display, the object name can be used to manipulate the client output, for example to employ ANSI codes to hide additional files being transferred.

4. CWE-451: scp client spoofing via stderr [CVE-2019-6110]

Due to accepting and displaying arbitrary stderr output from the scp server, a malicious server can manipulate the client output, for example to employ ANSI codes to hide additional files being transferred.
Proof-of-Concept
Proof of concept malicious scp server will be released at a later date.
Vulnerable versions
The following software packages have some or all vulnerabilities:
                     ver     #1  #2  #3  #4
OpenSSH scp          <=7.9   x   x   x   x
PuTTY PSCP           ?       -   -   x   x
WinSCP scp mode      <=5.13  -   x   -   -
Tectia SSH scpg3 is not affected since it exclusively uses sftp protocol.
Mitigation
1. OpenSSH
1.1 Switch to sftp if possible
1.2 Alternatively apply the following patch to harden scp against most server-side manipulation attempts: https://sintonen.fi/advisories/scp-name-validator.patch
NOTE: This patch may cause problems if the remote and local shells don't agree on the way glob() pattern matching works. YMMV.
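The patch's approach can be sketched as follows: before accepting an object the server announces, compare its name against what the client actually requested. This is an illustrative Python sketch under simplified assumptions (the real patch is C code inside OpenSSH's scp client); it also shows one source of the glob() disagreement warned about in the note: Python's fnmatch matches dotfiles against "*", while most shells do not.

```python
import fnmatch
import posixpath

def name_is_expected(received: str, requested: str) -> bool:
    """Return True if a server-announced file name matches what we asked for.

    Illustrative sketch in the spirit of the scp name-validator hardening;
    not the actual OpenSSH patch.
    """
    # Reject empty names and path tricks outright; scp should send bare names.
    if received in ("", ".", "..") or "/" in received:
        return False
    # The request may be a glob such as "*.txt"; match against its basename.
    pattern = posixpath.basename(requested)
    # Caveat: fnmatch matches dotfiles against "*", unlike most shells --
    # one reason client- and server-side glob semantics can disagree.
    return fnmatch.fnmatch(received, pattern)

if __name__ == "__main__":
    print(name_is_expected("readme.txt", "readme.txt"))        # True
    print(name_is_expected(".bash_aliases", "readme.txt"))     # False
```

A client applying such a check would refuse the unsolicited .bash_aliases from the attack scenario above, at the cost of false rejections whenever the two sides expand globs differently.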
2. PuTTY
2.1 No fix is available yet
3. WinSCP
3.1. Upgrade to WinSCP 5.14 or later
Similar or prior work
- CVE-2000-0992 - scp overwrites arbitrary files
References
- https://www.jeffgeerling.com/blog/brief-history-ssh-and-remote-access
Credits
The vulnerability was discovered by Harry Sintonen / F-Secure Corporation.
Timeline
2018.08.08 initial discovery of vulnerabilities #1 and #2
2018.08.09 reported vulnerabilities #1 and #2 to OpenSSH
2018.08.10 OpenSSH acknowledged the vulnerabilities
2018.08.14 discovered & reported vulnerability #3 to OpenSSH
2018.08.15 discovered & reported vulnerability #4 to OpenSSH
2018.08.30 reported PSCP vulnerabilities (#3 and #4) to PuTTY developers
2018.08.31 reported WinSCP vulnerability (#2) to WinSCP developers
2018.09.04 WinSCP developers reported the vulnerability #2 fixed
2018.11.12 requested a status update from OpenSSH
2018.11.16 OpenSSH fixed vulnerability #1
2019.01.07 requested a status update from OpenSSH
2019.01.08 requested CVE assignments from MITRE
2019.01.10 received CVE assignments from MITRE
2019.01.11 public disclosure of the advisory
2019.01.14 added a warning about the potential issues caused by the patch
Key data from the VARIoT database record VAR-201901-0011 (CVE-2019-6110), last updated 2023-12-18:

Affected products
- OpenSSH <= 7.9 (OpenBSD)
- WinSCP scp mode <= 5.13
- Siemens SCALANCE X204RNA and X204RNA EEC firmware < 3.2.7
- NetApp ONTAP Select Deploy administration utility, Storage Automation Store, Element Software
- SUSE Linux Enterprise Server 11 and 12 (various service packs)
- Red Hat Enterprise Linux 5, 6 and 7
- F5 Traffix SDC 4.4, 5.0 and 5.1

Severity (NVD)
- CVSS v3.1: 6.8 MEDIUM (CVSS:3.1/AV:N/AC:H/PR:N/UI:R/S:U/C:H/I:H/A:N)
- CVSS v2.0: 4.0 MEDIUM (AV:N/AC:H/Au:N/C:P/I:P/A:N)

Problem type
- CWE-838: Inappropriate Encoding for Output Context (NVD)
- CWE-284: Improper Access Control (JVNDB)

Credits: Harry Sintonen, Gentoo

External identifiers
- NVD: CVE-2019-6110
- EXPLOIT-DB: 46193
- Siemens: SSA-412672
- JVNDB: JVNDB-2019-001595
- BID: 106836
- CNNVD: CNNVD-201901-468
- ICS CERT: ICSA-22-349-21

Patches and vendor advisories
- NetApp NTAP-20190213-0001: https://security.netapp.com/advisory/ntap-20190213-0001/
- OpenBSD CVS: src/usr.bin/ssh/scp.c and src/usr.bin/ssh/progressmeter.c
- WinSCP: https://winscp.net/eng/index.php

Key references
- https://sintonen.fi/advisories/scp-client-multiple-vulnerabilities.txt
- https://security.gentoo.org/glsa/201903-16
- https://cert-portal.siemens.com/productcert/pdf/ssa-412672.pdf
- https://www.exploit-db.com/exploits/46193/
- https://nvd.nist.gov/vuln/detail/CVE-2019-6110

Release dates: NVD entry published 2019-01-31; original advisory disclosed 2019-01-11.
"@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2023-02-23T00:00:00", "db": "VULMON", "id": "CVE-2019-6110" }, { "date": "2018-11-16T00:00:00", "db": "BID", "id": "106836" }, { "date": "2019-03-15T00:00:00", "db": "JVNDB", "id": "JVNDB-2019-001595" }, { "date": "2023-02-23T23:29:26.993000", "db": "NVD", "id": "CVE-2019-6110" }, { "date": "2022-12-14T00:00:00", "db": "CNNVD", "id": "CNNVD-201901-468" } ] }, "threat_type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/threat_type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "remote", "sources": [ { "db": "PACKETSTORM", "id": "152154" }, { "db": "CNNVD", "id": "CNNVD-201901-468" } ], "trust": 0.7 }, "title": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/title#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "OpenSSH Access control vulnerability", "sources": [ { "db": "JVNDB", "id": "JVNDB-2019-001595" } ], "trust": 0.8 }, "type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "access control error", "sources": [ { "db": "CNNVD", "id": "CNNVD-201901-468" } ], "trust": 0.6 } }
var-201912-1378
Vulnerability from variot
SQLite 3.30.1 mishandles certain SELECT statements with a nonexistent VIEW, leading to an application crash. It was discovered that SQLite incorrectly handled certain corrupted schemas. An attacker could possibly use this issue to cause a denial of service. This issue only affected Ubuntu 18.04 LTS. (CVE-2018-8740). Bugs fixed (https://bugzilla.redhat.com/):
1944888 - CVE-2021-21409 netty: Request smuggling via content-length header 2004133 - CVE-2021-37136 netty-codec: Bzip2Decoder doesn't allow setting size restrictions for decompressed data 2004135 - CVE-2021-37137 netty-codec: SnappyFrameDecoder doesn't restrict chunk length and may buffer skippable chunks in an unnecessary way 2030932 - CVE-2021-44228 log4j-core: Remote code execution in Log4j 2.x when logs contain an attacker-controlled string value
- JIRA issues fixed (https://issues.jboss.org/):
LOG-1971 - Applying cluster state is causing elasticsearch to hit an issue and become unusable
- -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
====================================================================
Red Hat Security Advisory
Synopsis: Moderate: sqlite security update Advisory ID: RHSA-2021:4396-01 Product: Red Hat Enterprise Linux Advisory URL: https://access.redhat.com/errata/RHSA-2021:4396 Issue date: 2021-11-09 CVE Names: CVE-2019-5827 CVE-2019-13750 CVE-2019-13751 CVE-2019-19603 CVE-2020-13435 ==================================================================== 1. Summary:
An update for sqlite is now available for Red Hat Enterprise Linux 8.
Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Relevant releases/architectures:
Red Hat Enterprise Linux AppStream (v. 8) - aarch64, ppc64le, s390x, x86_64 Red Hat Enterprise Linux BaseOS (v. 8) - aarch64, noarch, ppc64le, s390x, x86_64
- Description:
SQLite is a C library that implements an SQL database engine. A large subset of SQL92 is supported. A complete database is stored in a single disk file. The API is designed for convenience and ease of use. Applications that link against SQLite can enjoy the power and flexibility of an SQL database without the administrative hassles of supporting a separate database server.
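The embedded, single-file model described above can be seen in a few lines of code. This is a minimal sketch using Python's standard-library sqlite3 bindings (the filename is illustrative, not part of the advisory):

```python
import sqlite3

# No server process: the whole database is one ordinary file on disk.
conn = sqlite3.connect("example.db")  # illustrative filename
conn.execute("CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v TEXT)")
conn.execute("INSERT OR REPLACE INTO kv VALUES (?, ?)", ("greeting", "hello"))
conn.commit()

# The library call interface replaces a client/server round trip.
row = conn.execute("SELECT v FROM kv WHERE k = ?", ("greeting",)).fetchone()
print(row[0])  # prints "hello"
conn.close()
```

Applications that link against the library in this way are exactly the ones that pick up fixes such as this update simply by upgrading the shared sqlite-libs package.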
Security Fix(es):
* sqlite: out-of-bounds access due to the use of 32-bit memory allocator interfaces (CVE-2019-5827)

* sqlite: dropping of shadow tables not restricted in defensive mode (CVE-2019-13750)

* sqlite: fts3: improve detection of corrupted records (CVE-2019-13751)

* sqlite: mishandling of certain SELECT statements with non-existent VIEW can lead to DoS (CVE-2019-19603)

* sqlite: NULL pointer dereference in sqlite3ExprCodeTarget() (CVE-2020-13435)
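Of the issues above, the nonexistent-VIEW case (CVE-2019-19603) is easy to picture: a view can outlive the table it selects from, because SQLite does not enforce view dependencies. On a patched library the query below fails with a clean "no such table" error rather than crashing the application; a minimal sketch using Python's standard sqlite3 bindings:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# The underlying table of a view can simply be dropped out from under it,
# leaving a view whose target no longer exists.
conn.executescript("""
    CREATE TABLE t (a INTEGER);
    CREATE VIEW v AS SELECT a FROM t;
    DROP TABLE t;
""")

# A fixed library reports a clean error instead of the application
# crash described for SQLite 3.30.1.
try:
    conn.execute("SELECT * FROM v")
except sqlite3.OperationalError as exc:
    print("query rejected:", exc)
```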
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
Additional Changes:
For detailed information on changes in this release, see the Red Hat Enterprise Linux 8.5 Release Notes linked from the References section.
- Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/11258
- Bugs fixed (https://bugzilla.redhat.com/):
1706805 - CVE-2019-5827 sqlite: out-of-bounds access due to the use of 32-bit memory allocator interfaces 1781997 - CVE-2019-13750 sqlite: dropping of shadow tables not restricted in defensive mode 1781998 - CVE-2019-13751 sqlite: fts3: improve detection of corrupted records 1785318 - CVE-2019-19603 sqlite: mishandling of certain SELECT statements with non-existent VIEW can lead to DoS 1841231 - CVE-2020-13435 sqlite: NULL pointer dereference in sqlite3ExprCodeTarget()
- Package List:
Red Hat Enterprise Linux AppStream (v. 8):
aarch64: lemon-3.26.0-15.el8.aarch64.rpm lemon-debuginfo-3.26.0-15.el8.aarch64.rpm sqlite-analyzer-debuginfo-3.26.0-15.el8.aarch64.rpm sqlite-debuginfo-3.26.0-15.el8.aarch64.rpm sqlite-debugsource-3.26.0-15.el8.aarch64.rpm sqlite-libs-debuginfo-3.26.0-15.el8.aarch64.rpm sqlite-tcl-debuginfo-3.26.0-15.el8.aarch64.rpm
ppc64le: lemon-3.26.0-15.el8.ppc64le.rpm lemon-debuginfo-3.26.0-15.el8.ppc64le.rpm sqlite-analyzer-debuginfo-3.26.0-15.el8.ppc64le.rpm sqlite-debuginfo-3.26.0-15.el8.ppc64le.rpm sqlite-debugsource-3.26.0-15.el8.ppc64le.rpm sqlite-libs-debuginfo-3.26.0-15.el8.ppc64le.rpm sqlite-tcl-debuginfo-3.26.0-15.el8.ppc64le.rpm
s390x: lemon-3.26.0-15.el8.s390x.rpm lemon-debuginfo-3.26.0-15.el8.s390x.rpm sqlite-analyzer-debuginfo-3.26.0-15.el8.s390x.rpm sqlite-debuginfo-3.26.0-15.el8.s390x.rpm sqlite-debugsource-3.26.0-15.el8.s390x.rpm sqlite-libs-debuginfo-3.26.0-15.el8.s390x.rpm sqlite-tcl-debuginfo-3.26.0-15.el8.s390x.rpm
x86_64: lemon-3.26.0-15.el8.x86_64.rpm lemon-debuginfo-3.26.0-15.el8.x86_64.rpm sqlite-analyzer-debuginfo-3.26.0-15.el8.x86_64.rpm sqlite-debuginfo-3.26.0-15.el8.x86_64.rpm sqlite-debugsource-3.26.0-15.el8.x86_64.rpm sqlite-libs-debuginfo-3.26.0-15.el8.x86_64.rpm sqlite-tcl-debuginfo-3.26.0-15.el8.x86_64.rpm
Red Hat Enterprise Linux BaseOS (v. 8):
Source: sqlite-3.26.0-15.el8.src.rpm
aarch64: lemon-debuginfo-3.26.0-15.el8.aarch64.rpm sqlite-3.26.0-15.el8.aarch64.rpm sqlite-analyzer-debuginfo-3.26.0-15.el8.aarch64.rpm sqlite-debuginfo-3.26.0-15.el8.aarch64.rpm sqlite-debugsource-3.26.0-15.el8.aarch64.rpm sqlite-devel-3.26.0-15.el8.aarch64.rpm sqlite-libs-3.26.0-15.el8.aarch64.rpm sqlite-libs-debuginfo-3.26.0-15.el8.aarch64.rpm sqlite-tcl-debuginfo-3.26.0-15.el8.aarch64.rpm
noarch: sqlite-doc-3.26.0-15.el8.noarch.rpm
ppc64le: lemon-debuginfo-3.26.0-15.el8.ppc64le.rpm sqlite-3.26.0-15.el8.ppc64le.rpm sqlite-analyzer-debuginfo-3.26.0-15.el8.ppc64le.rpm sqlite-debuginfo-3.26.0-15.el8.ppc64le.rpm sqlite-debugsource-3.26.0-15.el8.ppc64le.rpm sqlite-devel-3.26.0-15.el8.ppc64le.rpm sqlite-libs-3.26.0-15.el8.ppc64le.rpm sqlite-libs-debuginfo-3.26.0-15.el8.ppc64le.rpm sqlite-tcl-debuginfo-3.26.0-15.el8.ppc64le.rpm
s390x: lemon-debuginfo-3.26.0-15.el8.s390x.rpm sqlite-3.26.0-15.el8.s390x.rpm sqlite-analyzer-debuginfo-3.26.0-15.el8.s390x.rpm sqlite-debuginfo-3.26.0-15.el8.s390x.rpm sqlite-debugsource-3.26.0-15.el8.s390x.rpm sqlite-devel-3.26.0-15.el8.s390x.rpm sqlite-libs-3.26.0-15.el8.s390x.rpm sqlite-libs-debuginfo-3.26.0-15.el8.s390x.rpm sqlite-tcl-debuginfo-3.26.0-15.el8.s390x.rpm
x86_64: lemon-debuginfo-3.26.0-15.el8.i686.rpm lemon-debuginfo-3.26.0-15.el8.x86_64.rpm sqlite-3.26.0-15.el8.i686.rpm sqlite-3.26.0-15.el8.x86_64.rpm sqlite-analyzer-debuginfo-3.26.0-15.el8.i686.rpm sqlite-analyzer-debuginfo-3.26.0-15.el8.x86_64.rpm sqlite-debuginfo-3.26.0-15.el8.i686.rpm sqlite-debuginfo-3.26.0-15.el8.x86_64.rpm sqlite-debugsource-3.26.0-15.el8.i686.rpm sqlite-debugsource-3.26.0-15.el8.x86_64.rpm sqlite-devel-3.26.0-15.el8.i686.rpm sqlite-devel-3.26.0-15.el8.x86_64.rpm sqlite-libs-3.26.0-15.el8.i686.rpm sqlite-libs-3.26.0-15.el8.x86_64.rpm sqlite-libs-debuginfo-3.26.0-15.el8.i686.rpm sqlite-libs-debuginfo-3.26.0-15.el8.x86_64.rpm sqlite-tcl-debuginfo-3.26.0-15.el8.i686.rpm sqlite-tcl-debuginfo-3.26.0-15.el8.x86_64.rpm
These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
- References:
https://access.redhat.com/security/cve/CVE-2019-5827 https://access.redhat.com/security/cve/CVE-2019-13750 https://access.redhat.com/security/cve/CVE-2019-13751 https://access.redhat.com/security/cve/CVE-2019-19603 https://access.redhat.com/security/cve/CVE-2020-13435 https://access.redhat.com/security/updates/classification/#moderate https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.5_release_notes/
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2021 Red Hat, Inc. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1
iQIVAwUBYYrcp9zjgjWX9erEAQh4VRAAjQa5rkkS0W4z5i8wkU7fmG5l2rfSAOzu ZuhbW2qZ0rGM60jVIkbin6Mw2corOw7FUIWUFxbqv0uD68HFnD9nS+D6DH9nDlJw WsPw6cZnNYhIl4HotGR+34w0mf+5Ld3yJMbAujT7avKV5RMb/qcsr8B42EF1ZX5F tcyriGtur+rKfDOPdeOtZZxTXFAmrlJftwiMViTskZPINmfoT4nutMv4WHCevEu7 cEDJih1x+UsS4cOPfeqBNFYxIFIZun0f6W9VWGZSOz/s06FDbuNY60/tLulU9jDx JzAwKKl1P/nK1u8fKD0prFmsQluqR7fbrpLEbxz3jdK+nRTaxNrni99PYbJhVG9o krCC7AwmSLFH2nGTyOU+/U81yrba5BYXEsb576CM4n0wtumtDJ6n9EITAt7JB90D iS53SxBkZH0YXhAe3vrzu7m8Snz/5wX2eeN1kSfZDMg57xil0tmvLdCtBaVw6sGs ehv5N9tGT+tvCz9BhXdhsbCJWyuFKaQ0XbZmRSrgHrkTZoOdgtTsmJ8tZ1xeFBeS YmS0qXEfAAChNzU4YKhe/JYIdEr6D2mILe1Ojcj6b6m4ja7xJmvPmnv4j2Qt1A21 R+TOyTEHp12WxFo8QlX0o/F1wMrluR4Nss5YXPCmpkpntlaXBg8n5tcPmq5Vb9kg u4IzYbfFiTQ=6X4Z -----END PGP SIGNATURE-----
-- RHSA-announce mailing list RHSA-announce@redhat.com https://listman.redhat.com/mailman/listinfo/rhsa-announce . Summary:
The Migration Toolkit for Containers (MTC) 1.6.3 is now available. Description:
The Migration Toolkit for Containers (MTC) enables you to migrate Kubernetes resources, persistent volume data, and internal container images between OpenShift Container Platform clusters, using the MTC web console or the Kubernetes API. Bugs fixed (https://bugzilla.redhat.com/):
2019088 - "MigrationController" CR displays syntax error when unquiescing applications 2021666 - Route name longer than 63 characters causes direct volume migration to fail 2021668 - "MigrationController" CR ignores the "cluster_subdomain" value for direct volume migration routes 2022017 - CVE-2021-3948 mig-controller: incorrect namespaces handling may lead to not authorized usage of Migration Toolkit for Containers (MTC) 2024966 - Manifests not used by Operator Lifecycle Manager must be removed from the MTC 1.6 Operator image 2027196 - "migration-controller" pod goes into "CrashLoopBackoff" state if an invalid registry route is entered on the "Clusters" page of the web console 2027382 - "Copy oc describe/oc logs" window does not close automatically after timeout 2028841 - "rsync-client" container fails during direct volume migration with "Address family not supported by protocol" error 2031793 - "migration-controller" pod goes into "CrashLoopBackOff" state if "MigPlan" CR contains an invalid "includedResources" resource 2039852 - "migration-controller" pod goes into "CrashLoopBackOff" state if "MigPlan" CR contains an invalid "destMigClusterRef" or "srcMigClusterRef"
- Description:
Red Hat Advanced Cluster Management for Kubernetes 2.2.10 images
Red Hat Advanced Cluster Management for Kubernetes provides the capabilities to address common challenges that administrators and site reliability engineers face as they work across a range of public and private cloud environments.
Clusters and applications are all visible and managed from a single console — with security policy built in. See the following Release Notes documentation, which will be updated shortly for this release, for additional details about this release:
https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.2/html/release_notes/
Security fixes:
* CVE-2021-3795 semver-regex: inefficient regular expression complexity

* CVE-2021-23440 nodejs-set-value: type confusion allows bypass of CVE-2019-10747

Related bugs:

* RHACM 2.2.10 images (Bugzilla #2013652)

Bugs fixed (https://bugzilla.redhat.com/):
2004944 - CVE-2021-23440 nodejs-set-value: type confusion allows bypass of CVE-2019-10747 2006009 - CVE-2021-3795 semver-regex: inefficient regular expression complexity 2013652 - RHACM 2.2.10 images
- Description:
Red Hat OpenShift Container Storage is software-defined storage integrated with and optimized for the Red Hat OpenShift Container Platform. Red Hat OpenShift Container Storage is highly scalable, production-grade persistent storage for stateful applications running in the Red Hat OpenShift Container Platform. In addition to persistent storage, Red Hat OpenShift Container Storage provides a multicloud data management service with an S3 compatible API.
Bug Fix(es):
* Previously, when the namespace store target was deleted, no alert was sent to the namespace bucket because of an issue in calculating the namespace bucket health. With this update, the issue in calculating the namespace bucket health is fixed and alerts are triggered as expected. (BZ#1993873)

* Previously, the Multicloud Object Gateway (MCG) components performed slowly and there was a lot of pressure on the MCG components due to non-optimized database queries. With this update the non-optimized database queries are fixed, which reduces the compute resources and time taken for queries. Bugs fixed (https://bugzilla.redhat.com/):
1993873 - [4.8.z clone] Alert NooBaaNamespaceBucketErrorState is not triggered when namespacestore's target bucket is deleted 2006958 - CVE-2020-26301 nodejs-ssh2: Command injection by calling vulnerable method with untrusted input
- Bugs fixed (https://bugzilla.redhat.com/):
1948761 - CVE-2021-23369 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with strict:true option 1956688 - CVE-2021-23383 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with compat:true option
- JIRA issues fixed (https://issues.jboss.org/):
LOG-1857 - OpenShift Alerting Rules Style-Guide Compliance LOG-1904 - [release-5.2] Fix the Display of ClusterLogging type in OLM LOG-1916 - [release-5.2] Fluentd logs emit transaction failed: error_class=NoMethodError while forwarding to external syslog server
- Bugs fixed (https://bugzilla.redhat.com/):
1992006 - CVE-2021-29923 golang: net: incorrect parsing of extraneous zero characters at the beginning of an IP address octet 1995656 - CVE-2021-36221 golang: net/http/httputil: panic due to racy read of persistConn after handler panic
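The first issue above (CVE-2021-29923) stems from IPv4 octets with extraneous leading zeros: some parsers historically read such octets as octal while others read them as decimal, so a validator and a resolver can disagree about which host an address names. A tiny illustration of the ambiguity (this is not the Go fix itself, just the underlying arithmetic):

```python
# The same octet text yields different numbers under the two conventions:
octet = "010"
decimal = int(octet, 10)  # 10, as most validators assume
octal = int(octet, 8)     # 8, as BSD-style inet_aton would read it
print(decimal, octal)     # prints "10 8"
```

A filter that accepts "010.x.x.x" using one convention while the OS resolves it with the other can therefore be bypassed, which is why the fixed Go parser rejects leading zeros outright.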
All OpenShift Container Platform 4.11 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift Console or the CLI oc command. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.11/updating/updating-cluster-cli.html
- Solution:
For OpenShift Container Platform 4.11 see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this asynchronous errata update:
https://docs.openshift.com/container-platform/4.11/release_notes/ocp-4-11-release-notes.html
Details on how to access this content are available at https://docs.openshift.com/container-platform/4.11/updating/updating-cluster-cli.html
- Bugs fixed (https://bugzilla.redhat.com/):
2042536 - OCP 4.10: nfd-topology-updater daemonset fails to get created on worker nodes - forbidden: unable to validate against any security context constraint
2042652 - Unable to deploy hw-event-proxy operator
2045880 - CVE-2022-21698 prometheus/client_golang: Denial of service using InstrumentHandlerCounter
2047308 - Remove metrics and events for master port offsets
2055049 - No pre-caching for NFD images
2055436 - nfd-master tracking the wrong api group
2055439 - nfd-master tracking the wrong api group (operand)
2057569 - nfd-worker: drop 'custom-' prefix from matchFeatures custom rules
2058256 - LeaseDuration for NFD Operator seems to be rather small, causing Operator restarts when running etcd defrag
2062849 - hw event proxy is not binding on ipv6 local address
2066860 - Wrong spec in NFD documentation under operand
2066887 - Dependabot alert: Path traversal in github.com/valyala/fasthttp
2066889 - Dependabot alert: Path traversal in github.com/valyala/fasthttp
2067312 - PPT event source is lost when received by the consumer
2077243 - NFD os release label lost after upgrade to ocp 4.10.6
2087511 - NFD SkipRange is wrong causing OLM install problems
2089962 - Node feature Discovery operator installation failed.
2090774 - Add Readme to plugin directory
2091106 - Dependabot alert: Unhandled exception in gopkg.in/yaml.v3
2091142 - Dependabot alert: Unhandled exception in gopkg.in/yaml.v3
2100495 - CVE-2021-38561 golang: out-of-bounds read in golang.org/x/text/language leads to DoS
Show details on source website{ "@context": { "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#", "affected_products": { "@id": "https://www.variotdbs.pl/ref/affected_products" }, "configurations": { "@id": "https://www.variotdbs.pl/ref/configurations" }, "credits": { "@id": "https://www.variotdbs.pl/ref/credits" }, "cvss": { "@id": "https://www.variotdbs.pl/ref/cvss/" }, "description": { "@id": "https://www.variotdbs.pl/ref/description/" }, "exploit_availability": { "@id": "https://www.variotdbs.pl/ref/exploit_availability/" }, "external_ids": { "@id": "https://www.variotdbs.pl/ref/external_ids/" }, "iot": { "@id": "https://www.variotdbs.pl/ref/iot/" }, "iot_taxonomy": { "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/" }, "patch": { "@id": "https://www.variotdbs.pl/ref/patch/" }, "problemtype_data": { "@id": "https://www.variotdbs.pl/ref/problemtype_data/" }, "references": { "@id": "https://www.variotdbs.pl/ref/references/" }, "sources": { "@id": "https://www.variotdbs.pl/ref/sources/" }, "sources_release_date": { "@id": "https://www.variotdbs.pl/ref/sources_release_date/" }, "sources_update_date": { "@id": "https://www.variotdbs.pl/ref/sources_update_date/" }, "threat_type": { "@id": "https://www.variotdbs.pl/ref/threat_type/" }, "title": { "@id": "https://www.variotdbs.pl/ref/title/" }, "type": { "@id": "https://www.variotdbs.pl/ref/type/" } }, "@id": "https://www.variotdbs.pl/vuln/VAR-201912-1378", "affected_products": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/affected_products#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "model": "sqlite", "scope": "eq", "trust": 1.0, "vendor": "sqlite", "version": "3.30.1" }, { "model": "guacamole", "scope": "eq", "trust": 1.0, "vendor": "apache", "version": "1.3.0" }, { "model": "mysql workbench", "scope": "lte", "trust": 1.0, "vendor": "oracle", 
"version": "8.0.19" }, { "model": "sinec infrastructure network services", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "1.0.1.1" }, { "model": "ontap select deploy administration utility", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "sinec infrastructure network services", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "1.0.1.1" }, { "model": "cloud backup", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null } ], "sources": [ { "db": "NVD", "id": "CVE-2019-19603" } ] }, "configurations": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/configurations#", "children": { "@container": "@list" }, "cpe_match": { "@container": "@list" }, "data": { "@container": "@list" }, "nodes": { "@container": "@list" } }, "data": [ { "CVE_data_version": "4.0", "nodes": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:sqlite:sqlite:3.30.1:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:oracle:mysql_workbench:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "8.0.19", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:siemens:sinec_infrastructure_network_services:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "1.0.1.1", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_infrastructure_network_services:1.0.1.1:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:apache:guacamole:1.3.0:-:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:netapp:cloud_backup:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:ontap_select_deploy_administration_utility:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" } ] } ], 
"sources": [ { "db": "NVD", "id": "CVE-2019-19603" } ] }, "credits": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/credits#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Red Hat", "sources": [ { "db": "PACKETSTORM", "id": "165286" }, { "db": "PACKETSTORM", "id": "165288" }, { "db": "PACKETSTORM", "id": "164829" }, { "db": "PACKETSTORM", "id": "165631" }, { "db": "PACKETSTORM", "id": "166309" }, { "db": "PACKETSTORM", "id": "165209" }, { "db": "PACKETSTORM", "id": "165096" }, { "db": "PACKETSTORM", "id": "165002" }, { "db": "PACKETSTORM", "id": "165758" }, { "db": "PACKETSTORM", "id": "168036" } ], "trust": 1.0 }, "cve": "CVE-2019-19603", "cvss": { "@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2" }, "cvssV3": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, "severity": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": "https://www.variotdbs.pl/ref/cvss/severity" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "cvssV2": [ { "acInsufInfo": false, "accessComplexity": "LOW", "accessVector": "NETWORK", "authentication": "NONE", "author": "NVD", "availabilityImpact": "PARTIAL", "baseScore": 5.0, "confidentialityImpact": "NONE", "exploitabilityScore": 10.0, "impactScore": 2.9, "integrityImpact": "NONE", "obtainAllPrivilege": false, "obtainOtherPrivilege": false, "obtainUserPrivilege": false, "severity": "MEDIUM", "trust": 1.0, "userInteractionRequired": false, "vectorString": "AV:N/AC:L/Au:N/C:N/I:N/A:P", "version": "2.0" }, { "acInsufInfo": null, "accessComplexity": "LOW", "accessVector": "NETWORK", 
"authentication": "NONE", "author": "VULMON", "availabilityImpact": "PARTIAL", "baseScore": 5.0, "confidentialityImpact": "NONE", "exploitabilityScore": 10.0, "id": "CVE-2019-19603", "impactScore": 2.9, "integrityImpact": "NONE", "obtainAllPrivilege": null, "obtainOtherPrivilege": null, "obtainUserPrivilege": null, "severity": "MEDIUM", "trust": 0.1, "userInteractionRequired": null, "vectorString": "AV:N/AC:L/Au:N/C:N/I:N/A:P", "version": "2.0" } ], "cvssV3": [ { "attackComplexity": "LOW", "attackVector": "NETWORK", "author": "NVD", "availabilityImpact": "HIGH", "baseScore": 7.5, "baseSeverity": "HIGH", "confidentialityImpact": "NONE", "exploitabilityScore": 3.9, "impactScore": 3.6, "integrityImpact": "NONE", "privilegesRequired": "NONE", "scope": "UNCHANGED", "trust": 1.0, "userInteraction": "NONE", "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H", "version": "3.1" } ], "severity": [ { "author": "NVD", "id": "CVE-2019-19603", "trust": 1.0, "value": "HIGH" }, { "author": "VULMON", "id": "CVE-2019-19603", "trust": 0.1, "value": "MEDIUM" } ] } ], "sources": [ { "db": "VULMON", "id": "CVE-2019-19603" }, { "db": "NVD", "id": "CVE-2019-19603" } ] }, "description": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "SQLite 3.30.1 mishandles certain SELECT statements with a nonexistent VIEW, leading to an application crash. It exists that SQLite incorrectly handled certain corruped schemas. \nAn attacker could possibly use this issue to cause a denial of service. \nThis issue only affected Ubuntu 18.04 LTS. (CVE-2018-8740). 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1944888 - CVE-2021-21409 netty: Request smuggling via content-length header\n2004133 - CVE-2021-37136 netty-codec: Bzip2Decoder doesn\u0027t allow setting size restrictions for decompressed data\n2004135 - CVE-2021-37137 netty-codec: SnappyFrameDecoder doesn\u0027t restrict chunk length and may buffer skippable chunks in an unnecessary way\n2030932 - CVE-2021-44228 log4j-core: Remote code execution in Log4j 2.x when logs contain an attacker-controlled string value\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nLOG-1971 - Applying cluster state is causing elasticsearch to hit an issue and become unusable\n\n6. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n==================================================================== \nRed Hat Security Advisory\n\nSynopsis: Moderate: sqlite security update\nAdvisory ID: RHSA-2021:4396-01\nProduct: Red Hat Enterprise Linux\nAdvisory URL: https://access.redhat.com/errata/RHSA-2021:4396\nIssue date: 2021-11-09\nCVE Names: CVE-2019-5827 CVE-2019-13750 CVE-2019-13751\n CVE-2019-19603 CVE-2020-13435\n====================================================================\n1. Summary:\n\nAn update for sqlite is now available for Red Hat Enterprise Linux 8. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRed Hat Enterprise Linux AppStream (v. 8) - aarch64, ppc64le, s390x, x86_64\nRed Hat Enterprise Linux BaseOS (v. 8) - aarch64, noarch, ppc64le, s390x, x86_64\n\n3. Description:\n\nSQLite is a C library that implements an SQL database engine. A large\nsubset of SQL92 is supported. A complete database is stored in a single\ndisk file. The API is designed for convenience and ease of use. 
\nApplications that link against SQLite can enjoy the power and flexibility\nof an SQL database without the administrative hassles of supporting a\nseparate database server. \n\nSecurity Fix(es):\n\n* sqlite: out-of-bounds access due to the use of 32-bit memory allocator\ninterfaces (CVE-2019-5827)\n\n* sqlite: dropping of shadow tables not restricted in defensive mode\n(CVE-2019-13750)\n\n* sqlite: fts3: improve detection of corrupted records (CVE-2019-13751)\n\n* sqlite: mishandling of certain SELECT statements with non-existent VIEW\ncan lead to DoS (CVE-2019-19603)\n\n* sqlite: NULL pointer dereference in sqlite3ExprCodeTarget()\n(CVE-2020-13435)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\nAdditional Changes:\n\nFor detailed information on changes in this release, see the Red Hat\nEnterprise Linux 8.5 Release Notes linked from the References section. \n\n4. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n1706805 - CVE-2019-5827 sqlite: out-of-bounds access due to the use of 32-bit memory allocator interfaces\n1781997 - CVE-2019-13750 sqlite: dropping of shadow tables not restricted in defensive mode\n1781998 - CVE-2019-13751 sqlite: fts3: improve detection of corrupted records\n1785318 - CVE-2019-19603 sqlite: mishandling of certain SELECT statements with non-existent VIEW can lead to DoS\n1841231 - CVE-2020-13435 sqlite: NULL pointer dereference in sqlite3ExprCodeTarget()\n\n6. Package List:\n\nRed Hat Enterprise Linux AppStream (v. 
8):\n\naarch64:\nlemon-3.26.0-15.el8.aarch64.rpm\nlemon-debuginfo-3.26.0-15.el8.aarch64.rpm\nsqlite-analyzer-debuginfo-3.26.0-15.el8.aarch64.rpm\nsqlite-debuginfo-3.26.0-15.el8.aarch64.rpm\nsqlite-debugsource-3.26.0-15.el8.aarch64.rpm\nsqlite-libs-debuginfo-3.26.0-15.el8.aarch64.rpm\nsqlite-tcl-debuginfo-3.26.0-15.el8.aarch64.rpm\n\nppc64le:\nlemon-3.26.0-15.el8.ppc64le.rpm\nlemon-debuginfo-3.26.0-15.el8.ppc64le.rpm\nsqlite-analyzer-debuginfo-3.26.0-15.el8.ppc64le.rpm\nsqlite-debuginfo-3.26.0-15.el8.ppc64le.rpm\nsqlite-debugsource-3.26.0-15.el8.ppc64le.rpm\nsqlite-libs-debuginfo-3.26.0-15.el8.ppc64le.rpm\nsqlite-tcl-debuginfo-3.26.0-15.el8.ppc64le.rpm\n\ns390x:\nlemon-3.26.0-15.el8.s390x.rpm\nlemon-debuginfo-3.26.0-15.el8.s390x.rpm\nsqlite-analyzer-debuginfo-3.26.0-15.el8.s390x.rpm\nsqlite-debuginfo-3.26.0-15.el8.s390x.rpm\nsqlite-debugsource-3.26.0-15.el8.s390x.rpm\nsqlite-libs-debuginfo-3.26.0-15.el8.s390x.rpm\nsqlite-tcl-debuginfo-3.26.0-15.el8.s390x.rpm\n\nx86_64:\nlemon-3.26.0-15.el8.x86_64.rpm\nlemon-debuginfo-3.26.0-15.el8.x86_64.rpm\nsqlite-analyzer-debuginfo-3.26.0-15.el8.x86_64.rpm\nsqlite-debuginfo-3.26.0-15.el8.x86_64.rpm\nsqlite-debugsource-3.26.0-15.el8.x86_64.rpm\nsqlite-libs-debuginfo-3.26.0-15.el8.x86_64.rpm\nsqlite-tcl-debuginfo-3.26.0-15.el8.x86_64.rpm\n\nRed Hat Enterprise Linux BaseOS (v. 
8):\n\nSource:\nsqlite-3.26.0-15.el8.src.rpm\n\naarch64:\nlemon-debuginfo-3.26.0-15.el8.aarch64.rpm\nsqlite-3.26.0-15.el8.aarch64.rpm\nsqlite-analyzer-debuginfo-3.26.0-15.el8.aarch64.rpm\nsqlite-debuginfo-3.26.0-15.el8.aarch64.rpm\nsqlite-debugsource-3.26.0-15.el8.aarch64.rpm\nsqlite-devel-3.26.0-15.el8.aarch64.rpm\nsqlite-libs-3.26.0-15.el8.aarch64.rpm\nsqlite-libs-debuginfo-3.26.0-15.el8.aarch64.rpm\nsqlite-tcl-debuginfo-3.26.0-15.el8.aarch64.rpm\n\nnoarch:\nsqlite-doc-3.26.0-15.el8.noarch.rpm\n\nppc64le:\nlemon-debuginfo-3.26.0-15.el8.ppc64le.rpm\nsqlite-3.26.0-15.el8.ppc64le.rpm\nsqlite-analyzer-debuginfo-3.26.0-15.el8.ppc64le.rpm\nsqlite-debuginfo-3.26.0-15.el8.ppc64le.rpm\nsqlite-debugsource-3.26.0-15.el8.ppc64le.rpm\nsqlite-devel-3.26.0-15.el8.ppc64le.rpm\nsqlite-libs-3.26.0-15.el8.ppc64le.rpm\nsqlite-libs-debuginfo-3.26.0-15.el8.ppc64le.rpm\nsqlite-tcl-debuginfo-3.26.0-15.el8.ppc64le.rpm\n\ns390x:\nlemon-debuginfo-3.26.0-15.el8.s390x.rpm\nsqlite-3.26.0-15.el8.s390x.rpm\nsqlite-analyzer-debuginfo-3.26.0-15.el8.s390x.rpm\nsqlite-debuginfo-3.26.0-15.el8.s390x.rpm\nsqlite-debugsource-3.26.0-15.el8.s390x.rpm\nsqlite-devel-3.26.0-15.el8.s390x.rpm\nsqlite-libs-3.26.0-15.el8.s390x.rpm\nsqlite-libs-debuginfo-3.26.0-15.el8.s390x.rpm\nsqlite-tcl-debuginfo-3.26.0-15.el8.s390x.rpm\n\nx86_64:\nlemon-debuginfo-3.26.0-15.el8.i686.rpm\nlemon-debuginfo-3.26.0-15.el8.x86_64.rpm\nsqlite-3.26.0-15.el8.i686.rpm\nsqlite-3.26.0-15.el8.x86_64.rpm\nsqlite-analyzer-debuginfo-3.26.0-15.el8.i686.rpm\nsqlite-analyzer-debuginfo-3.26.0-15.el8.x86_64.rpm\nsqlite-debuginfo-3.26.0-15.el8.i686.rpm\nsqlite-debuginfo-3.26.0-15.el8.x86_64.rpm\nsqlite-debugsource-3.26.0-15.el8.i686.rpm\nsqlite-debugsource-3.26.0-15.el8.x86_64.rpm\nsqlite-devel-3.26.0-15.el8.i686.rpm\nsqlite-devel-3.26.0-15.el8.x86_64.rpm\nsqlite-libs-3.26.0-15.el8.i686.rpm\nsqlite-libs-3.26.0-15.el8.x86_64.rpm\nsqlite-libs-debuginfo-3.26.0-15.el8.i686.rpm\nsqlite-libs-debuginfo-3.26.0-15.el8.x86_64.rpm\nsqlite-tcl-debuginfo-3.26.0
-15.el8.i686.rpm\nsqlite-tcl-debuginfo-3.26.0-15.el8.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. References:\n\nhttps://access.redhat.com/security/cve/CVE-2019-5827\nhttps://access.redhat.com/security/cve/CVE-2019-13750\nhttps://access.redhat.com/security/cve/CVE-2019-13751\nhttps://access.redhat.com/security/cve/CVE-2019-19603\nhttps://access.redhat.com/security/cve/CVE-2020-13435\nhttps://access.redhat.com/security/updates/classification/#moderate\nhttps://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.5_release_notes/\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2021 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYYrcp9zjgjWX9erEAQh4VRAAjQa5rkkS0W4z5i8wkU7fmG5l2rfSAOzu\nZuhbW2qZ0rGM60jVIkbin6Mw2corOw7FUIWUFxbqv0uD68HFnD9nS+D6DH9nDlJw\nWsPw6cZnNYhIl4HotGR+34w0mf+5Ld3yJMbAujT7avKV5RMb/qcsr8B42EF1ZX5F\ntcyriGtur+rKfDOPdeOtZZxTXFAmrlJftwiMViTskZPINmfoT4nutMv4WHCevEu7\ncEDJih1x+UsS4cOPfeqBNFYxIFIZun0f6W9VWGZSOz/s06FDbuNY60/tLulU9jDx\nJzAwKKl1P/nK1u8fKD0prFmsQluqR7fbrpLEbxz3jdK+nRTaxNrni99PYbJhVG9o\nkrCC7AwmSLFH2nGTyOU+/U81yrba5BYXEsb576CM4n0wtumtDJ6n9EITAt7JB90D\niS53SxBkZH0YXhAe3vrzu7m8Snz/5wX2eeN1kSfZDMg57xil0tmvLdCtBaVw6sGs\nehv5N9tGT+tvCz9BhXdhsbCJWyuFKaQ0XbZmRSrgHrkTZoOdgtTsmJ8tZ1xeFBeS\nYmS0qXEfAAChNzU4YKhe/JYIdEr6D2mILe1Ojcj6b6m4ja7xJmvPmnv4j2Qt1A21\nR+TOyTEHp12WxFo8QlX0o/F1wMrluR4Nss5YXPCmpkpntlaXBg8n5tcPmq5Vb9kg\nu4IzYbfFiTQ=6X4Z\n-----END PGP SIGNATURE-----\n\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. Summary:\n\nThe Migration Toolkit for Containers (MTC) 1.6.3 is now available. 
Description:\n\nThe Migration Toolkit for Containers (MTC) enables you to migrate\nKubernetes resources, persistent volume data, and internal container images\nbetween OpenShift Container Platform clusters, using the MTC web console or\nthe Kubernetes API. Bugs fixed (https://bugzilla.redhat.com/):\n\n2019088 - \"MigrationController\" CR displays syntax error when unquiescing applications\n2021666 - Route name longer than 63 characters causes direct volume migration to fail\n2021668 - \"MigrationController\" CR ignores the \"cluster_subdomain\" value for direct volume migration routes\n2022017 - CVE-2021-3948 mig-controller: incorrect namespaces handling may lead to not authorized usage of Migration Toolkit for Containers (MTC)\n2024966 - Manifests not used by Operator Lifecycle Manager must be removed from the MTC 1.6 Operator image\n2027196 - \"migration-controller\" pod goes into \"CrashLoopBackoff\" state if an invalid registry route is entered on the \"Clusters\" page of the web console\n2027382 - \"Copy oc describe/oc logs\" window does not close automatically after timeout\n2028841 - \"rsync-client\" container fails during direct volume migration with \"Address family not supported by protocol\" error\n2031793 - \"migration-controller\" pod goes into \"CrashLoopBackOff\" state if \"MigPlan\" CR contains an invalid \"includedResources\" resource\n2039852 - \"migration-controller\" pod goes into \"CrashLoopBackOff\" state if \"MigPlan\" CR contains an invalid \"destMigClusterRef\" or \"srcMigClusterRef\"\n\n5. Description:\n\nRed Hat Advanced Cluster Management for Kubernetes 2.2.10 images\n\nRed Hat Advanced Cluster Management for Kubernetes provides the\ncapabilities to address common challenges that administrators and site\nreliability engineers face as they work across a range of public and\nprivate cloud environments. \n\nClusters and applications are all visible and managed from a single console\n\u2014 with security policy built in. 
See the following Release Notes documentation, which\nwill be updated shortly for this release, for additional details about this\nrelease:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.2/html/release_notes/\n\nSecurity fixes: \n\n* CVE-2021-3795 semver-regex: inefficient regular expression complexity\n\n* CVE-2021-23440 nodejs-set-value: type confusion allows bypass of\nCVE-2019-10747\n\nRelated bugs: \n\n* RHACM 2.2.10 images (Bugzilla #2013652)\n\n3. Bugs fixed (https://bugzilla.redhat.com/):\n\n2004944 - CVE-2021-23440 nodejs-set-value: type confusion allows bypass of CVE-2019-10747\n2006009 - CVE-2021-3795 semver-regex: inefficient regular expression complexity\n2013652 - RHACM 2.2.10 images\n\n5. Description:\n\nRed Hat OpenShift Container Storage is software-defined storage integrated\nwith and optimized for the Red Hat OpenShift Container Platform. \nRed Hat OpenShift Container Storage is highly scalable, production-grade\npersistent storage for stateful applications running in the Red Hat\nOpenShift Container Platform. In addition to persistent storage, Red Hat\nOpenShift Container Storage provides a multicloud data management service\nwith an S3 compatible API. \n\nBug Fix(es):\n\n* Previously, when the namespace store target was deleted, no alert was\nsent to the namespace bucket because of an issue in calculating the\nnamespace bucket health. With this update, the issue in calculating the\nnamespace bucket health is fixed and alerts are triggered as expected. \n(BZ#1993873)\n\n* Previously, the Multicloud Object Gateway (MCG) components performed\nslowly and there was a lot of pressure on the MCG components due to\nnon-optimized database queries. With this update the non-optimized\ndatabase queries are fixed which reduces the compute resources and time\ntaken for queries. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1993873 - [4.8.z clone] Alert NooBaaNamespaceBucketErrorState is not triggered when namespacestore\u0027s target bucket is deleted\n2006958 - CVE-2020-26301 nodejs-ssh2: Command injection by calling vulnerable method with untrusted input\n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n1948761 - CVE-2021-23369 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with strict:true option\n1956688 - CVE-2021-23383 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with compat:true option\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nLOG-1857 - OpenShift Alerting Rules Style-Guide Compliance\nLOG-1904 - [release-5.2] Fix the Display of ClusterLogging type in OLM\nLOG-1916 - [release-5.2] Fluentd logs emit transaction failed: error_class=NoMethodError while forwarding to external syslog server\n\n6. Bugs fixed (https://bugzilla.redhat.com/):\n\n1992006 - CVE-2021-29923 golang: net: incorrect parsing of extraneous zero characters at the beginning of an IP address octet\n1995656 - CVE-2021-36221 golang: net/http/httputil: panic due to racy read of persistConn after handler panic\n\n5. \n\nAll OpenShift Container Platform 4.11 users are advised to upgrade to these\nupdated packages and images when they are available in the appropriate\nrelease channel. To check for available updates, use the OpenShift Console\nor the CLI oc command. Instructions for upgrading a cluster are available\nat\nhttps://docs.openshift.com/container-platform/4.11/updating/updating-cluster-cli.html\n\n3. 
Solution:\n\nFor OpenShift Container Platform 4.11 see the following documentation,\nwhich will be updated shortly for this release, for important instructions\non how to upgrade your cluster and fully apply this asynchronous errata\nupdate:\n\nhttps://docs.openshift.com/container-platform/4.11/release_notes/ocp-4-11-release-notes.html\n\nDetails on how to access this content are available at\nhttps://docs.openshift.com/container-platform/4.11/updating/updating-cluster-cli.html\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n2042536 - OCP 4.10: nfd-topology-updater daemonset fails to get created on worker nodes - forbidden: unable to validate against any security context constraint\n2042652 - Unable to deploy hw-event-proxy operator\n2045880 - CVE-2022-21698 prometheus/client_golang: Denial of service using InstrumentHandlerCounter\n2047308 - Remove metrics and events for master port offsets\n2055049 - No pre-caching for NFD images\n2055436 - nfd-master tracking the wrong api group\n2055439 - nfd-master tracking the wrong api group (operand)\n2057569 - nfd-worker: drop \u0027custom-\u0027 prefix from matchFeatures custom rules\n2058256 - LeaseDuration for NFD Operator seems to be rather small, causing Operator restarts when running etcd defrag\n2062849 - hw event proxy is not binding on ipv6 local address\n2066860 - Wrong spec in NFD documentation under `operand`\n2066887 - Dependabot alert: Path traversal in github.com/valyala/fasthttp\n2066889 - Dependabot alert: Path traversal in github.com/valyala/fasthttp\n2067312 - PPT event source is lost when received by the consumer\n2077243 - NFD os release label lost after upgrade to ocp 4.10.6\n2087511 - NFD SkipRange is wrong causing OLM install problems\n2089962 - Node feature Discovery operator installation failed. 
\n2090774 - Add Readme to plugin directory\n2091106 - Dependabot alert: Unhandled exception in gopkg.in/yaml.v3\n2091142 - Dependabot alert: Unhandled exception in gopkg.in/yaml.v3\n2100495 - CVE-2021-38561 golang: out-of-bounds read in golang.org/x/text/language leads to DoS\n\n5", "sources": [ { "db": "NVD", "id": "CVE-2019-19603" }, { "db": "VULMON", "id": "CVE-2019-19603" }, { "db": "PACKETSTORM", "id": "165286" }, { "db": "PACKETSTORM", "id": "165288" }, { "db": "PACKETSTORM", "id": "164829" }, { "db": "PACKETSTORM", "id": "165631" }, { "db": "PACKETSTORM", "id": "166309" }, { "db": "PACKETSTORM", "id": "165209" }, { "db": "PACKETSTORM", "id": "165096" }, { "db": "PACKETSTORM", "id": "165002" }, { "db": "PACKETSTORM", "id": "165758" }, { "db": "PACKETSTORM", "id": "168036" } ], "trust": 1.89 }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "NVD", "id": "CVE-2019-19603", "trust": 2.1 }, { "db": "SIEMENS", "id": "SSA-389290", "trust": 1.1 }, { "db": "ICS CERT", "id": "ICSA-22-069-09", "trust": 0.1 }, { "db": "VULMON", "id": "CVE-2019-19603", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "165286", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "165288", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "164829", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "165631", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "166309", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "165209", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "165096", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "165002", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "165758", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "168036", "trust": 0.1 } ], "sources": [ { "db": "VULMON", "id": "CVE-2019-19603" }, { "db": "PACKETSTORM", "id": "165286" }, { "db": "PACKETSTORM", "id": "165288" }, { "db": "PACKETSTORM", "id": 
"164829" }, { "db": "PACKETSTORM", "id": "165631" }, { "db": "PACKETSTORM", "id": "166309" }, { "db": "PACKETSTORM", "id": "165209" }, { "db": "PACKETSTORM", "id": "165096" }, { "db": "PACKETSTORM", "id": "165002" }, { "db": "PACKETSTORM", "id": "165758" }, { "db": "PACKETSTORM", "id": "168036" }, { "db": "NVD", "id": "CVE-2019-19603" } ] }, "id": "VAR-201912-1378", "iot": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": true, "sources": [ { "db": "VARIoT devices database", "id": null } ], "trust": 0.30092594 }, "last_update_date": "2024-07-23T20:30:53.083000Z", "patch": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/patch#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "title": "Ubuntu Security Notice: sqlite3 vulnerabilities", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ubuntu_security_notice\u0026qid=usn-4394-1" }, { "title": "Brocade Security Advisories: Access Denied", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=brocade_security_advisories\u0026qid=bbc2d81915e62aea24eef98c1d809792" }, { "title": "Red Hat: Moderate: Release of OpenShift Serverless 1.20.0", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20220434 - security advisory" }, { "title": "Red Hat: Important: Release of containers for OSP 16.2 director operator tech preview", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20220842 - security advisory" }, { "title": "Red Hat: Moderate: Gatekeeper Operator v0.2 security updates and bug fixes", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20221081 - security advisory" }, { "title": "Red Hat: Moderate: Red Hat 
OpenShift distributed tracing 2.1.0 security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20220318 - security advisory" }, { "title": "Red Hat: Important: Red Hat OpenShift GitOps security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20220580 - security advisory" }, { "title": "Red Hat: Moderate: Red Hat Advanced Cluster Management 2.2.11 security updates and bug fixes", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20220856 - security advisory" }, { "title": "Red Hat: Moderate: OpenShift Container Platform 4.11.0 extras and security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20225070 - security advisory" }, { "title": "Red Hat: Important: Migration Toolkit for Containers (MTC) 1.7.4 security and bug fix update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226429 - security advisory" }, { "title": "Red Hat: Important: OpenShift Virtualization 4.11.0 Images security and bug fix update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226526 - security advisory" }, { "title": "Red Hat: Moderate: Migration Toolkit for Containers (MTC) 1.5.4 security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20221396 - security advisory" }, { "title": "Red Hat: Important: OpenShift Container Platform 4.11.0 bug fix and security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20225069 - security advisory" }, { "title": "Siemens Security Advisories: Siemens Security Advisory", "trust": 0.1, "url": 
"https://vulmon.com/vendoradvisory?qidtp=siemens_security_advisories\u0026qid=4a9822530e6b610875f83ffc10e02aba" }, { "title": "Siemens Security Advisories: Siemens Security Advisory", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=siemens_security_advisories\u0026qid=ec6577109e640dac19a6ddb978afe82d" }, { "title": "xyz-solutions", "trust": 0.1, "url": "https://github.com/sauliuspr/xyz-solutions " }, { "title": "snykout", "trust": 0.1, "url": "https://github.com/garethr/snykout " }, { "title": "", "trust": 0.1, "url": "https://github.com/vincent-deng/veracode-container-security-finding-parser " } ], "sources": [ { "db": "VULMON", "id": "CVE-2019-19603" } ] }, "problemtype_data": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "problemtype": "NVD-CWE-noinfo", "trust": 1.0 } ], "sources": [ { "db": "NVD", "id": "CVE-2019-19603" } ] }, "references": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/references#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 1.2, "url": "https://usn.ubuntu.com/4394-1/" }, { "trust": 1.1, "url": "https://github.com/sqlite/sqlite/commit/527cbd4a104cb93bf3994b3dd3619a6299a78b13" }, { "trust": 1.1, "url": "https://www.sqlite.org/" }, { "trust": 1.1, "url": "https://security.netapp.com/advisory/ntap-20191223-0001/" }, { "trust": 1.1, "url": "https://www.oracle.com/security-alerts/cpuapr2020.html" }, { "trust": 1.1, "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-389290.pdf" }, { "trust": 1.1, "url": "https://lists.apache.org/thread.html/rc713534b10f9daeee2e0990239fa407e2118e4aa9e88a7041177497c%40%3cissues.guacamole.apache.org%3e" }, { "trust": 1.0, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-5827" }, { "trust": 1.0, "url": 
"https://access.redhat.com/security/cve/cve-2020-13435" }, { "trust": 1.0, "url": "https://access.redhat.com/security/cve/cve-2019-5827" }, { "trust": 1.0, "url": "https://access.redhat.com/security/cve/cve-2019-13751" }, { "trust": 1.0, "url": "https://access.redhat.com/security/cve/cve-2019-19603" }, { "trust": 1.0, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13750" }, { "trust": 1.0, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13751" }, { "trust": 1.0, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13435" }, { "trust": 1.0, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19603" }, { "trust": 1.0, "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce" }, { "trust": 1.0, "url": "https://bugzilla.redhat.com/):" }, { "trust": 1.0, "url": "https://access.redhat.com/security/cve/cve-2019-13750" }, { "trust": 1.0, "url": "https://access.redhat.com/security/team/contact/" }, { "trust": 0.9, "url": "https://access.redhat.com/security/cve/cve-2020-24370" }, { "trust": 0.9, "url": "https://access.redhat.com/security/cve/cve-2019-17594" }, { "trust": 0.9, "url": "https://access.redhat.com/security/cve/cve-2021-36086" }, { "trust": 0.9, "url": "https://access.redhat.com/security/cve/cve-2021-36084" }, { "trust": 0.9, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17594" }, { "trust": 0.9, "url": "https://access.redhat.com/security/cve/cve-2021-36087" }, { "trust": 0.9, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-18218" }, { "trust": 0.9, "url": "https://access.redhat.com/security/cve/cve-2021-20232" }, { "trust": 0.9, "url": "https://access.redhat.com/security/cve/cve-2019-20838" }, { "trust": 0.9, "url": "https://access.redhat.com/security/cve/cve-2021-20231" }, { "trust": 0.9, "url": "https://access.redhat.com/security/cve/cve-2020-14155" }, { "trust": 0.9, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20838" }, { "trust": 0.9, "url": "https://access.redhat.com/security/cve/cve-2021-36085" }, { "trust": 0.9, "url": 
"https://access.redhat.com/security/cve/cve-2019-17595" }, { "trust": 0.9, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14155" }, { "trust": 0.9, "url": "https://access.redhat.com/security/cve/cve-2019-18218" }, { "trust": 0.9, "url": "https://access.redhat.com/security/cve/cve-2021-3580" }, { "trust": 0.9, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17595" }, { "trust": 0.8, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-16135" }, { "trust": 0.8, "url": "https://access.redhat.com/security/cve/cve-2021-3200" }, { "trust": 0.8, "url": "https://access.redhat.com/security/cve/cve-2021-27645" }, { "trust": 0.8, "url": "https://access.redhat.com/security/cve/cve-2021-33574" }, { "trust": 0.8, "url": "https://access.redhat.com/security/cve/cve-2021-35942" }, { "trust": 0.8, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-24370" }, { "trust": 0.8, "url": "https://access.redhat.com/security/cve/cve-2021-3572" }, { "trust": 0.8, "url": "https://access.redhat.com/security/cve/cve-2020-12762" }, { "trust": 0.8, "url": "https://access.redhat.com/security/cve/cve-2021-22898" }, { "trust": 0.8, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-12762" }, { "trust": 0.8, "url": "https://access.redhat.com/security/cve/cve-2020-16135" }, { "trust": 0.8, "url": "https://access.redhat.com/security/cve/cve-2021-3800" }, { "trust": 0.8, "url": "https://access.redhat.com/security/cve/cve-2021-3445" }, { "trust": 0.8, "url": "https://access.redhat.com/security/cve/cve-2021-22925" }, { "trust": 0.8, "url": "https://access.redhat.com/security/cve/cve-2021-22876" }, { "trust": 0.8, "url": "https://access.redhat.com/security/cve/cve-2021-33560" }, { "trust": 0.8, "url": "https://access.redhat.com/security/cve/cve-2021-28153" }, { "trust": 0.8, "url": "https://access.redhat.com/security/cve/cve-2021-3426" }, { "trust": 0.8, "url": "https://access.redhat.com/security/updates/classification/#moderate" }, { "trust": 0.7, "url": 
"https://access.redhat.com/security/cve/cve-2021-20266" }, { "trust": 0.7, "url": "https://access.redhat.com/security/cve/cve-2021-42574" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2021-3778" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2021-3796" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2021-3712" }, { "trust": 0.5, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20231" }, { "trust": 0.5, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22925" }, { "trust": 0.5, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20232" }, { "trust": 0.5, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22898" }, { "trust": 0.5, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22876" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-43527" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2020-14145" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14145" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-23841" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-23840" }, { "trust": 0.4, "url": "https://issues.jboss.org/):" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-27645" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20266" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2018-25013" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25012" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2020-35522" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2020-35524" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-20673" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25013" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25009" }, { "trust": 0.3, "url": 
"https://access.redhat.com/security/cve/cve-2018-25014" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2018-25012" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2020-35521" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2020-17541" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2020-36331" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-31535" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2018-20673" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2020-36330" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2020-36332" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25010" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25014" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3481" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2018-25009" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2018-25010" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2020-35523" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-28153" }, { "trust": 0.2, "url": "https://access.redhat.com/security/vulnerabilities/rhsb-2021-009" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35524" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35522" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-37136" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-44228" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35523" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-17541" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-37137" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-21409" }, { 
"trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36330" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35521" }, { "trust": 0.2, "url": "https://docs.openshift.com/container-platform/4.9/logging/cluster-logging-upgrading.html" }, { "trust": 0.2, "url": "https://docs.openshift.com/container-platform/4.9/release_notes/ocp-4-9-release-notes.html" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-20317" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-43267" }, { "trust": 0.2, "url": "https://access.redhat.com/articles/11258" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-37750" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3733" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-33938" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-33929" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-33928" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-22946" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-33930" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-20271" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-22947" }, { "trust": 0.2, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.2/html/release_notes/index" }, { "trust": 0.2, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.2/html-single/install/index#installing" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-24407" }, { "trust": 0.2, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.2/html/release_notes/" }, { "trust": 0.2, "url": "https://access.redhat.com/security/updates/classification/#low" }, { 
"trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-20095" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23841" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-28493" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-42771" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23840" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28493" }, { "trust": 0.1, "url": "https://cwe.mitre.org/data/definitions/.html" }, { "trust": 0.1, "url": "https://nvd.nist.gov" }, { "trust": 0.1, "url": "https://www.cisa.gov/uscert/ics/advisories/icsa-22-069-09" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:5128" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.8/release_notes/ocp-4-8-release-notes.html" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.8/logging/cluster-logging-upgrading.html" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:5129" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36331" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.5_release_notes/" }, { "trust": 0.1, "url": "https://access.redhat.com/security/team/key/" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:4396" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-27823" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-1870" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3575" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-30758" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-13558" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-15389" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-5727" }, { "trust": 
0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-5785" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-41617" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-30665" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-12973" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-30689" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-20847" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-30682" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-10001" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/latest/migration_toolkit_for_containers/installing-mtc.html" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-18032" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-1801" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-1765" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2016-4658" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-20845" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-26927" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-20847" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-27918" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-30749" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-30795" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-5785" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-1788" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-5727" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-30744" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21775" }, { "trust": 0.1, 
"url": "https://access.redhat.com/security/cve/cve-2021-21806" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-27814" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-36241" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-30797" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2016-4658" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13558" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-20321" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-27842" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-1799" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21779" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-10001" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-29623" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3948" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-27828" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-12973" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-20845" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-1844" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-1871" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-29338" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-30734" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-26926" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-30720" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-28650" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-27843" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2020-24870" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-27845" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-1789" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-30663" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-30799" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3272" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:0202" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15389" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-27824" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-0465" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23434" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0185" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-22942" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-0466" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3564" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-25710" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-0920" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-4122" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-25710" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-40346" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-0466" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23434" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-4155" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0330" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:0856" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2021-25214" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-25709" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-0465" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3752" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-25709" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-4019" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-4192" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0155" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3984" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3573" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-4193" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-25214" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-0920" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3872" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-39241" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3521" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-36385" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:5038" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22946" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3795" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36385" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20271" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20317" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22947" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23440" }, { "trust": 0.1, "url": 
"https://access.redhat.com/errata/rhsa-2021:4845" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-26301" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-26301" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-28957" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-8037" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-8037" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20095" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23369" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23383" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23369" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23383" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:4032" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3445" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/latest/distr_tracing/distr_tracing_install/distr-tracing-updating.html" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/latest/distr_tracing/distributed-tracing-release-notes.html" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:0318" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-33574" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-29923" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3200" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-33560" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-36221" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-29923" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3426" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-28327" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2022-27776" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:5068" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-27774" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-4189" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1271" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1629" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3634" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.11/release_notes/ocp-4-11-release-notes.html" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-38561" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-24921" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-25313" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-27191" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-29162" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25032" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-29824" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-23772" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23177" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1621" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-27782" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3737" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-21698" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-22576" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1706" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-18874" }, { "trust": 0.1, "url": 
"https://docs.openshift.com/container-platform/4.11/updating/updating-cluster-cli.html" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-40528" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-25219" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-31566" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-25314" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-18874" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-23806" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1729" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:5070" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-24903" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-25032" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-23773" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-24675" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0778" } ], "sources": [ { "db": "VULMON", "id": "CVE-2019-19603" }, { "db": "PACKETSTORM", "id": "165286" }, { "db": "PACKETSTORM", "id": "165288" }, { "db": "PACKETSTORM", "id": "164829" }, { "db": "PACKETSTORM", "id": "165631" }, { "db": "PACKETSTORM", "id": "166309" }, { "db": "PACKETSTORM", "id": "165209" }, { "db": "PACKETSTORM", "id": "165096" }, { "db": "PACKETSTORM", "id": "165002" }, { "db": "PACKETSTORM", "id": "165758" }, { "db": "PACKETSTORM", "id": "168036" }, { "db": "NVD", "id": "CVE-2019-19603" } ] }, "sources": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "VULMON", "id": "CVE-2019-19603" }, { "db": "PACKETSTORM", "id": "165286" }, { "db": "PACKETSTORM", "id": "165288" }, { "db": "PACKETSTORM", "id": "164829" }, { "db": "PACKETSTORM", "id": "165631" }, { "db": 
"PACKETSTORM", "id": "166309" }, { "db": "PACKETSTORM", "id": "165209" }, { "db": "PACKETSTORM", "id": "165096" }, { "db": "PACKETSTORM", "id": "165002" }, { "db": "PACKETSTORM", "id": "165758" }, { "db": "PACKETSTORM", "id": "168036" }, { "db": "NVD", "id": "CVE-2019-19603" } ] }, "sources_release_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2019-12-09T00:00:00", "db": "VULMON", "id": "CVE-2019-19603" }, { "date": "2021-12-15T15:20:33", "db": "PACKETSTORM", "id": "165286" }, { "date": "2021-12-15T15:22:36", "db": "PACKETSTORM", "id": "165288" }, { "date": "2021-11-10T17:03:12", "db": "PACKETSTORM", "id": "164829" }, { "date": "2022-01-20T17:48:29", "db": "PACKETSTORM", "id": "165631" }, { "date": "2022-03-15T15:44:21", "db": "PACKETSTORM", "id": "166309" }, { "date": "2021-12-09T14:50:37", "db": "PACKETSTORM", "id": "165209" }, { "date": "2021-11-29T18:12:32", "db": "PACKETSTORM", "id": "165096" }, { "date": "2021-11-17T15:25:40", "db": "PACKETSTORM", "id": "165002" }, { "date": "2022-01-28T14:33:13", "db": "PACKETSTORM", "id": "165758" }, { "date": "2022-08-10T15:54:58", "db": "PACKETSTORM", "id": "168036" }, { "date": "2019-12-09T19:15:14.710000", "db": "NVD", "id": "CVE-2019-19603" } ] }, "sources_update_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2023-11-07T00:00:00", "db": "VULMON", "id": "CVE-2019-19603" }, { "date": "2023-11-07T03:07:43.340000", "db": "NVD", "id": "CVE-2019-19603" } ] }, "title": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/title#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Red Hat Security Advisory 2021-5128-06", "sources": [ { "db": "PACKETSTORM", "id": "165286" } ], "trust": 0.1 }, "type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/type#", 
"sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "code execution", "sources": [ { "db": "PACKETSTORM", "id": "165286" }, { "db": "PACKETSTORM", "id": "165288" }, { "db": "PACKETSTORM", "id": "165096" }, { "db": "PACKETSTORM", "id": "165002" } ], "trust": 0.4 } }
var-201901-1500
Vulnerability from variot
In OpenSSH 7.9, scp.c in the scp client allows remote SSH servers to bypass intended access restrictions via the filename of . or an empty filename. The impact is modifying the permissions of the target directory on the client side. OpenSSH contains an access control vulnerability. Information may be tampered with. OpenSSH is prone to an access-bypass vulnerability. An attacker can exploit this issue to bypass certain security restrictions and perform unauthorized actions; this may aid in launching further attacks. OpenSSH version 7.9 is vulnerable.
==========================================================================
Ubuntu Security Notice USN-3885-1
February 07, 2019
openssh vulnerabilities
A security issue affects these releases of Ubuntu and its derivatives:
- Ubuntu 18.10
- Ubuntu 18.04 LTS
- Ubuntu 16.04 LTS
- Ubuntu 14.04 LTS
Summary:
Several security issues were fixed in OpenSSH.
Update instructions:
The problem can be corrected by updating your system to the following package versions:
Ubuntu 18.10: openssh-client 1:7.7p1-4ubuntu0.2
Ubuntu 18.04 LTS: openssh-client 1:7.6p1-4ubuntu0.2
Ubuntu 16.04 LTS: openssh-client 1:7.2p2-4ubuntu2.7
Ubuntu 14.04 LTS: openssh-client 1:6.6p1-2ubuntu2.12
In general, a standard system update will make all the necessary changes.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Gentoo Linux Security Advisory GLSA 201903-16
https://security.gentoo.org/
Severity: Normal
Title: OpenSSH: Multiple vulnerabilities
Date: March 20, 2019
Bugs: #675520, #675522
ID: 201903-16
Synopsis
Multiple vulnerabilities have been found in OpenSSH, the worst of which could allow a remote attacker to gain unauthorized access.
Affected packages
-------------------------------------------------------------------
Package / Vulnerable / Unaffected
-------------------------------------------------------------------
1 net-misc/openssh < 7.9_p1-r4 >= 7.9_p1-r4
Description
Multiple vulnerabilities have been discovered in OpenSSH. Please review the CVE identifiers referenced below for details.
Workaround
There is no known workaround at this time.
Resolution
All OpenSSH users should upgrade to the latest version:
# emerge --sync
# emerge --ask --oneshot --verbose ">=net-misc/openssh-7.9_p1-r4"
References
[ 1 ] CVE-2018-20685 https://nvd.nist.gov/vuln/detail/CVE-2018-20685
[ 2 ] CVE-2019-6109  https://nvd.nist.gov/vuln/detail/CVE-2019-6109
[ 3 ] CVE-2019-6110  https://nvd.nist.gov/vuln/detail/CVE-2019-6110
[ 4 ] CVE-2019-6111  https://nvd.nist.gov/vuln/detail/CVE-2019-6111
Availability
This GLSA and any updates to it are available for viewing at the Gentoo Security Website:
https://security.gentoo.org/glsa/201903-16
Concerns?
Security is a primary focus of Gentoo Linux and ensuring the confidentiality and security of our users' machines is of utmost importance to us. Any security concerns should be addressed to security@gentoo.org or alternatively, you may file a bug at https://bugs.gentoo.org.
License
Copyright 2019 Gentoo Foundation, Inc; referenced text belongs to its owner(s).
The contents of this document are licensed under the Creative Commons - Attribution / Share Alike license.
https://creativecommons.org/licenses/by-sa/2.5 . -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
===================================================================== Red Hat Security Advisory
Synopsis:          Moderate: openssh security, bug fix, and enhancement update
Advisory ID:       RHSA-2019:3702-01
Product:           Red Hat Enterprise Linux
Advisory URL:      https://access.redhat.com/errata/RHSA-2019:3702
Issue date:        2019-11-05
CVE Names:         CVE-2018-20685 CVE-2019-6109 CVE-2019-6111
=====================================================================
- Summary:
An update for openssh is now available for Red Hat Enterprise Linux 8.
Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

- Relevant releases/architectures:
Red Hat Enterprise Linux AppStream (v. 8) - aarch64, ppc64le, s390x, x86_64
Red Hat Enterprise Linux BaseOS (v. 8) - aarch64, ppc64le, s390x, x86_64
- Description:
OpenSSH is an SSH protocol implementation supported by a number of Linux, UNIX, and similar operating systems. It includes the core files necessary for both the OpenSSH client and server.
The following packages have been upgraded to a later upstream version: openssh (8.0p1).
Additional Changes:
For detailed information on changes in this release, see the Red Hat Enterprise Linux 8.1 Release Notes linked from the References section.

- Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/11258
After installing this update, the OpenSSH server daemon (sshd) will be restarted automatically.

1686065 - SSH connections get closed when time-based rekeyring is used and ClientAliveMaxCount=0
1691045 - Rebase OpenSSH to latest release (8.0p1?)
1707485 - Use high-level API to do signatures
1712436 - MD5 is used when writing password protected PEM
1732424 - ssh-keygen -A fails in FIPS mode because of DSA key
1732449 - rsa-sha2-*-cert-v01@openssh.com host key types are ignored in FIPS despite being in the policy
- Package List:
Red Hat Enterprise Linux AppStream (v. 8):
aarch64: openssh-askpass-8.0p1-3.el8.aarch64.rpm openssh-askpass-debuginfo-8.0p1-3.el8.aarch64.rpm openssh-cavs-debuginfo-8.0p1-3.el8.aarch64.rpm openssh-clients-debuginfo-8.0p1-3.el8.aarch64.rpm openssh-debuginfo-8.0p1-3.el8.aarch64.rpm openssh-debugsource-8.0p1-3.el8.aarch64.rpm openssh-keycat-debuginfo-8.0p1-3.el8.aarch64.rpm openssh-ldap-debuginfo-8.0p1-3.el8.aarch64.rpm openssh-server-debuginfo-8.0p1-3.el8.aarch64.rpm pam_ssh_agent_auth-debuginfo-0.10.3-7.3.el8.aarch64.rpm
ppc64le: openssh-askpass-8.0p1-3.el8.ppc64le.rpm openssh-askpass-debuginfo-8.0p1-3.el8.ppc64le.rpm openssh-cavs-debuginfo-8.0p1-3.el8.ppc64le.rpm openssh-clients-debuginfo-8.0p1-3.el8.ppc64le.rpm openssh-debuginfo-8.0p1-3.el8.ppc64le.rpm openssh-debugsource-8.0p1-3.el8.ppc64le.rpm openssh-keycat-debuginfo-8.0p1-3.el8.ppc64le.rpm openssh-ldap-debuginfo-8.0p1-3.el8.ppc64le.rpm openssh-server-debuginfo-8.0p1-3.el8.ppc64le.rpm pam_ssh_agent_auth-debuginfo-0.10.3-7.3.el8.ppc64le.rpm
s390x: openssh-askpass-8.0p1-3.el8.s390x.rpm openssh-askpass-debuginfo-8.0p1-3.el8.s390x.rpm openssh-cavs-debuginfo-8.0p1-3.el8.s390x.rpm openssh-clients-debuginfo-8.0p1-3.el8.s390x.rpm openssh-debuginfo-8.0p1-3.el8.s390x.rpm openssh-debugsource-8.0p1-3.el8.s390x.rpm openssh-keycat-debuginfo-8.0p1-3.el8.s390x.rpm openssh-ldap-debuginfo-8.0p1-3.el8.s390x.rpm openssh-server-debuginfo-8.0p1-3.el8.s390x.rpm pam_ssh_agent_auth-debuginfo-0.10.3-7.3.el8.s390x.rpm
x86_64: openssh-askpass-8.0p1-3.el8.x86_64.rpm openssh-askpass-debuginfo-8.0p1-3.el8.x86_64.rpm openssh-cavs-debuginfo-8.0p1-3.el8.x86_64.rpm openssh-clients-debuginfo-8.0p1-3.el8.x86_64.rpm openssh-debuginfo-8.0p1-3.el8.x86_64.rpm openssh-debugsource-8.0p1-3.el8.x86_64.rpm openssh-keycat-debuginfo-8.0p1-3.el8.x86_64.rpm openssh-ldap-debuginfo-8.0p1-3.el8.x86_64.rpm openssh-server-debuginfo-8.0p1-3.el8.x86_64.rpm pam_ssh_agent_auth-debuginfo-0.10.3-7.3.el8.x86_64.rpm
Red Hat Enterprise Linux BaseOS (v. 8):
Source: openssh-8.0p1-3.el8.src.rpm
aarch64: openssh-8.0p1-3.el8.aarch64.rpm openssh-askpass-debuginfo-8.0p1-3.el8.aarch64.rpm openssh-cavs-8.0p1-3.el8.aarch64.rpm openssh-cavs-debuginfo-8.0p1-3.el8.aarch64.rpm openssh-clients-8.0p1-3.el8.aarch64.rpm openssh-clients-debuginfo-8.0p1-3.el8.aarch64.rpm openssh-debuginfo-8.0p1-3.el8.aarch64.rpm openssh-debugsource-8.0p1-3.el8.aarch64.rpm openssh-keycat-8.0p1-3.el8.aarch64.rpm openssh-keycat-debuginfo-8.0p1-3.el8.aarch64.rpm openssh-ldap-8.0p1-3.el8.aarch64.rpm openssh-ldap-debuginfo-8.0p1-3.el8.aarch64.rpm openssh-server-8.0p1-3.el8.aarch64.rpm openssh-server-debuginfo-8.0p1-3.el8.aarch64.rpm pam_ssh_agent_auth-0.10.3-7.3.el8.aarch64.rpm pam_ssh_agent_auth-debuginfo-0.10.3-7.3.el8.aarch64.rpm
ppc64le: openssh-8.0p1-3.el8.ppc64le.rpm openssh-askpass-debuginfo-8.0p1-3.el8.ppc64le.rpm openssh-cavs-8.0p1-3.el8.ppc64le.rpm openssh-cavs-debuginfo-8.0p1-3.el8.ppc64le.rpm openssh-clients-8.0p1-3.el8.ppc64le.rpm openssh-clients-debuginfo-8.0p1-3.el8.ppc64le.rpm openssh-debuginfo-8.0p1-3.el8.ppc64le.rpm openssh-debugsource-8.0p1-3.el8.ppc64le.rpm openssh-keycat-8.0p1-3.el8.ppc64le.rpm openssh-keycat-debuginfo-8.0p1-3.el8.ppc64le.rpm openssh-ldap-8.0p1-3.el8.ppc64le.rpm openssh-ldap-debuginfo-8.0p1-3.el8.ppc64le.rpm openssh-server-8.0p1-3.el8.ppc64le.rpm openssh-server-debuginfo-8.0p1-3.el8.ppc64le.rpm pam_ssh_agent_auth-0.10.3-7.3.el8.ppc64le.rpm pam_ssh_agent_auth-debuginfo-0.10.3-7.3.el8.ppc64le.rpm
s390x: openssh-8.0p1-3.el8.s390x.rpm openssh-askpass-debuginfo-8.0p1-3.el8.s390x.rpm openssh-cavs-8.0p1-3.el8.s390x.rpm openssh-cavs-debuginfo-8.0p1-3.el8.s390x.rpm openssh-clients-8.0p1-3.el8.s390x.rpm openssh-clients-debuginfo-8.0p1-3.el8.s390x.rpm openssh-debuginfo-8.0p1-3.el8.s390x.rpm openssh-debugsource-8.0p1-3.el8.s390x.rpm openssh-keycat-8.0p1-3.el8.s390x.rpm openssh-keycat-debuginfo-8.0p1-3.el8.s390x.rpm openssh-ldap-8.0p1-3.el8.s390x.rpm openssh-ldap-debuginfo-8.0p1-3.el8.s390x.rpm openssh-server-8.0p1-3.el8.s390x.rpm openssh-server-debuginfo-8.0p1-3.el8.s390x.rpm pam_ssh_agent_auth-0.10.3-7.3.el8.s390x.rpm pam_ssh_agent_auth-debuginfo-0.10.3-7.3.el8.s390x.rpm
x86_64: openssh-8.0p1-3.el8.x86_64.rpm openssh-askpass-debuginfo-8.0p1-3.el8.x86_64.rpm openssh-cavs-8.0p1-3.el8.x86_64.rpm openssh-cavs-debuginfo-8.0p1-3.el8.x86_64.rpm openssh-clients-8.0p1-3.el8.x86_64.rpm openssh-clients-debuginfo-8.0p1-3.el8.x86_64.rpm openssh-debuginfo-8.0p1-3.el8.x86_64.rpm openssh-debugsource-8.0p1-3.el8.x86_64.rpm openssh-keycat-8.0p1-3.el8.x86_64.rpm openssh-keycat-debuginfo-8.0p1-3.el8.x86_64.rpm openssh-ldap-8.0p1-3.el8.x86_64.rpm openssh-ldap-debuginfo-8.0p1-3.el8.x86_64.rpm openssh-server-8.0p1-3.el8.x86_64.rpm openssh-server-debuginfo-8.0p1-3.el8.x86_64.rpm pam_ssh_agent_auth-0.10.3-7.3.el8.x86_64.rpm pam_ssh_agent_auth-debuginfo-0.10.3-7.3.el8.x86_64.rpm
These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
- References:
https://access.redhat.com/security/cve/CVE-2018-20685
https://access.redhat.com/security/cve/CVE-2019-6109
https://access.redhat.com/security/cve/CVE-2019-6111
https://access.redhat.com/security/updates/classification/#moderate
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.1_release_notes/
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2019 Red Hat, Inc.

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1
iQIVAwUBXcHzKNzjgjWX9erEAQiytQ/6Apphov2V0QmnXA+KO3ZZKBPXtgKv8Sv1 dPtXhTC+Keq4yX9/bXlIuyk6BUsMeaiIMlL5bSSKtq2I7rVxwubTcPX4rD+pQvx8 ArNJgn7U2/3xqwc0R8dNXx6o8vB1M6jXDtu8fKJOxW48evDJf6gE4gX2KUM9yxR2 MhCoHVkLp9a5f0T11yFPI11H0P8gXXQgboAkdt82Ui35T4tD8RndVyPCsllN2c/X QCCbvZ9e8OLJJoxsOryLcw8tpQHXK2AJMXWv0Us99kQtbaBULWWahhrg/tftLxtT pILFBaB/RsmGg1O6OkxJ2CuKl6ATC2Wlj/Z7uYPrS7MQDn+fXkH2gfcjb4Z4rqIL IyKbUpsyFEAaV5rJUeRaS7dGfuQldQbS96P8lUpCcOXPbYD8FgTrW2q3NjOKgYMU +gh2xPwmlRm+iYfmedPoR2+bTWNYv8JS+Cp/fZF4IFx2EJPQcxKLYshNKgcfkNkR rIZ4brUI79p84H01TcTh4mFAbR63Y+c36UAI3/fM/W/RkZn/PdoJtpfwg/tjOYZH rt9kL7SfAEhjHNtBuJGNol6e124srS6300hnfFovAr6llDOcYlrh3ZgVZjVrn6E8 TZhyZ84TGMOqykfH7B9XkJH82X+x3rd2m0ovCPq+Ly62BasdXVd0C2snzbx8OAM8 I+am8dhVlyM= =iPw4 -----END PGP SIGNATURE-----
--
RHSA-announce mailing list
RHSA-announce@redhat.com
https://www.redhat.com/mailman/listinfo/rhsa-announce

scp client multiple vulnerabilities
===================================

The latest version of this advisory is available at:
https://sintonen.fi/advisories/scp-client-multiple-vulnerabilities.txt
Overview
SCP clients from multiple vendors are susceptible to a malicious scp server performing unauthorized changes to target directory and/or client output manipulation.
Description
Many scp clients fail to verify whether the objects returned by the scp server match those they asked for. This issue dates back to 1983 and rcp, on which scp is based. Finally, two vulnerabilities in clients may allow the server to spoof the client output.
Details
The discovered vulnerabilities, described in more detail below, enable the attack described here in brief.
- The attacker-controlled server or a Man-in-the-Middle(*) attack drops a .bash_aliases file into the victim's home directory when the victim performs an scp operation from the server. The transfer of the extra file is hidden by sending ANSI control sequences via stderr. For example:
user@local:~$ scp user@remote:readme.txt .
readme.txt                                    100%  494     1.6KB/s   00:00
user@local:~$
- Once the victim launches a new shell, the malicious commands in .bash_aliases get executed.
*) Man-in-the-Middle attack does require the victim to accept the wrong host fingerprint.
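The hiding step described above can be sketched as follows. This is an illustrative Python sketch of the ANSI-control-code trick, not the advisory's (unreleased) proof of concept; the function names and the progress-line format are hypothetical.

```python
# Illustrative sketch of CVE-2019-6110-style output hiding. A malicious
# server emits the progress line for a hidden file, then control codes
# that erase it, so the victim's terminal shows only the expected file.

CSI = "\x1b["  # ANSI Control Sequence Introducer


def erase_previous_line() -> str:
    """Control codes that move the cursor up one line (CSI 1A), erase
    that entire line (CSI 2K), and return the cursor to column 0."""
    return CSI + "1A" + CSI + "2K" + "\r"


def hidden_transfer_output(secret_file: str, visible_file: str) -> str:
    """What the server writes: the secret file's progress line is
    printed and immediately erased on screen, while the bytes (and the
    file itself) were still transferred."""
    return (
        f"{secret_file} 100%   42   0.1KB/s   00:00\n"
        + erase_previous_line()
        + f"{visible_file} 100%  494   1.6KB/s   00:00\n"
    )
```

Piping such output through `cat -v` (or any tool that escapes control characters) makes the hidden line visible again, which is one way to inspect a suspicious transfer log.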
Vulnerabilities
- CWE-20: scp client missing received object name validation [CVE-2019-6111]
Due to the scp implementation being derived from the 1983 rcp [1], the server chooses which files/directories are sent to the client. However, the scp client performs only cursory validation of the returned object name (only directory traversal attacks are prevented). A malicious scp server can overwrite arbitrary files in the scp client's target directory. If a recursive operation (-r) is performed, the server can manipulate subdirectories as well (for example, overwrite .ssh/authorized_keys).
The same vulnerability in WinSCP is known as CVE-2018-20684.
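The missing check can be illustrated with a small sketch. This is a hypothetical Python sketch in the spirit of the scp-name-validator patch referenced under Mitigation below, not OpenSSH's actual C implementation; the function name and rules are assumptions for illustration.

```python
import fnmatch


def object_name_is_acceptable(received: str, requested_pattern: str) -> bool:
    """Hypothetical client-side validation of an object name returned by
    an scp server: reject '.', '..' and empty names (which would alter
    the target directory itself), reject any path components, and
    require the name to match the glob pattern the user actually asked
    for."""
    if received in ("", ".", ".."):
        return False  # would modify the target directory, not a file in it
    if "/" in received or "\\" in received:
        return False  # no path components / directory traversal
    return fnmatch.fnmatch(received, requested_pattern)
```

Note the glob comparison is exactly where the regression warned about below can appear: if the remote shell expands wildcards differently from the local `fnmatch`-style rules, legitimate names may be rejected.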
- CWE-451: scp client spoofing via object name [CVE-2019-6109]
Due to missing character encoding in the progress display, the object name can be used to manipulate the client output, for example to employ ANSI codes to hide additional files being transferred.
- CWE-451: scp client spoofing via stderr [CVE-2019-6110]
Due to accepting and displaying arbitrary stderr output from the scp server, a malicious server can manipulate the client output, for example to employ ANSI codes to hide additional files being transferred.
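One defense against both spoofing variants (#3 and #4) is to strip non-printable characters from untrusted server output before echoing it to the terminal. The following is a minimal hypothetical sketch of such a filter, not code from any of the affected clients:

```python
def sanitize_untrusted_output(data: str) -> str:
    """Drop every character that is neither printable nor a newline/tab
    before writing untrusted server output (object names, stderr) to the
    user's terminal. This defeats the ANSI-escape hiding trick, since
    the ESC (0x1b) byte that introduces a control sequence is removed."""
    return "".join(ch for ch in data if ch in "\n\t" or ch.isprintable())
```

The escape byte is stripped while the now-harmless printable remainder of the sequence (e.g. `[2K`) is left visible, which also makes tampering attempts apparent to the user.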
Proof-of-Concept
A proof-of-concept malicious scp server will be released at a later date.
Vulnerable versions
The following software packages have some or all vulnerabilities:
                      ver     #1  #2  #3  #4
OpenSSH scp         <=7.9      x   x   x   x
PuTTY PSCP              ?      -   -   x   x
WinSCP scp mode     <=5.13     -   x   -   -
Tectia SSH scpg3 is not affected since it exclusively uses sftp protocol.
Mitigation
- OpenSSH
1.1 Switch to sftp if possible
1.2 Alternatively apply the following patch to harden scp against most server-side manipulation attempts: https://sintonen.fi/advisories/scp-name-validator.patch
NOTE: This patch may cause problems if the remote and local shells don't agree on the way glob() pattern matching works. YMMV.
- PuTTY
2.1 No fix is available yet
- WinSCP
3.1. Upgrade to WinSCP 5.14 or later
Similar or prior work
- CVE-2000-0992 - scp overwrites arbitrary files
References
- https://www.jeffgeerling.com/blog/brief-history-ssh-and-remote-access
Credits
The vulnerability was discovered by Harry Sintonen / F-Secure Corporation.
Timeline
2018.08.08  initial discovery of vulnerabilities #1 and #2
2018.08.09  reported vulnerabilities #1 and #2 to OpenSSH
2018.08.10  OpenSSH acknowledged the vulnerabilities
2018.08.14  discovered & reported vulnerability #3 to OpenSSH
2018.08.15  discovered & reported vulnerability #4 to OpenSSH
2018.08.30  reported PSCP vulnerabilities (#3 and #4) to PuTTY developers
2018.08.31  reported WinSCP vulnerability (#2) to WinSCP developers
2018.09.04  WinSCP developers reported the vulnerability #2 fixed
2018.11.12  requested a status update from OpenSSH
2018.11.16  OpenSSH fixed vulnerability #1
2019.01.07  requested a status update from OpenSSH
2019.01.08  requested CVE assignments from MITRE
2019.01.10  received CVE assignments from MITRE
2019.01.11  public disclosure of the advisory
2019.01.14  added a warning about the potential issues caused by the patch
All the vulnerabilities are found in the scp client implementing the SCP protocol. The check added in this version can lead to regression if the client and the server have differences in wildcard expansion rules. If the server is trusted for that purpose, the check can be disabled with a new -T option to the scp client.
For the stable distribution (stretch), these problems have been fixed in version 1:7.4p1-10+deb9u5.
For the detailed security status of openssh please refer to its security tracker page at: https://security-tracker.debian.org/tracker/openssh
Further information about Debian Security Advisories, how to apply these updates to your system and frequently asked questions can be found at: https://www.debian.org/security/
Mailing list: debian-security-announce@lists.debian.org

-----BEGIN PGP SIGNATURE-----
iQEzBAEBCgAdFiEE8vi34Qgfo83x35gF3rYcyPpXRFsFAlxe0w0ACgkQ3rYcyPpX RFs85AgA0GrSHO4Qf5FVsE3oXa+nMkZ4U6pbOA9dHotX54DEyNuIJrOsOv01cFxQ t2Z6uDkZptmHZT4uSWg2xIgMvpkGo9906ziZfHc0LTuHl8j++7cCDIDGZBm/iZaX ueQfl85gHDpte41JvUtpSBAwk1Bic7ltLUPDIGEiq6nQboxHIzsU7ULVb1l0wNxF sEFDPWGBS01HTa+QWgQaG/wbEhMRDcVz1Ck7dqpT2soQRohDWxU01j14q1EKe9O9 GHiWECvFSHBkkI/v8lNfSWnOWYa/+Aknri0CpjPc/bqh2Yx9rgp/Q5+FJ/FxJjmC bHFd+tbxB1LxEO96zKguYpPIzw7Kcw== =5Fd8 -----END PGP SIGNATURE-----
Show details on source website{ "@context": { "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#", "affected_products": { "@id": "https://www.variotdbs.pl/ref/affected_products" }, "configurations": { "@id": "https://www.variotdbs.pl/ref/configurations" }, "credits": { "@id": "https://www.variotdbs.pl/ref/credits" }, "cvss": { "@id": "https://www.variotdbs.pl/ref/cvss/" }, "description": { "@id": "https://www.variotdbs.pl/ref/description/" }, "exploit_availability": { "@id": "https://www.variotdbs.pl/ref/exploit_availability/" }, "external_ids": { "@id": "https://www.variotdbs.pl/ref/external_ids/" }, "iot": { "@id": "https://www.variotdbs.pl/ref/iot/" }, "iot_taxonomy": { "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/" }, "patch": { "@id": "https://www.variotdbs.pl/ref/patch/" }, "problemtype_data": { "@id": "https://www.variotdbs.pl/ref/problemtype_data/" }, "references": { "@id": "https://www.variotdbs.pl/ref/references/" }, "sources": { "@id": "https://www.variotdbs.pl/ref/sources/" }, "sources_release_date": { "@id": "https://www.variotdbs.pl/ref/sources_release_date/" }, "sources_update_date": { "@id": "https://www.variotdbs.pl/ref/sources_update_date/" }, "threat_type": { "@id": "https://www.variotdbs.pl/ref/threat_type/" }, "title": { "@id": "https://www.variotdbs.pl/ref/title/" }, "type": { "@id": "https://www.variotdbs.pl/ref/type/" } }, "@id": "https://www.variotdbs.pl/vuln/VAR-201901-1500", "affected_products": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/affected_products#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "model": "solaris", "scope": "eq", "trust": 1.3, "vendor": "oracle", "version": "10" }, { "model": "enterprise linux server tus", "scope": "eq", "trust": 1.0, "vendor": "redhat", "version": "8.4" }, { "model": "m10-4", "scope": "lt", "trust": 1.0, "vendor": "fujitsu", 
"version": "xcp2361" }, { "model": "enterprise linux server aus", "scope": "eq", "trust": 1.0, "vendor": "redhat", "version": "8.6" }, { "model": "winscp", "scope": "lte", "trust": 1.0, "vendor": "winscp", "version": "5.13" }, { "model": "steelstore cloud integrated storage", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "enterprise linux eus", "scope": "eq", "trust": 1.0, "vendor": "redhat", "version": "8.6" }, { "model": "enterprise linux server aus", "scope": "eq", "trust": 1.0, "vendor": "redhat", "version": "8.2" }, { "model": "scalance x204rna eec", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "3.2.7" }, { "model": "m10-4s", "scope": "lt", "trust": 1.0, "vendor": "fujitsu", "version": "xcp2361" }, { "model": "enterprise linux eus", "scope": "eq", "trust": 1.0, "vendor": "redhat", "version": "8.2" }, { "model": "m10-4", "scope": "lt", "trust": 1.0, "vendor": "fujitsu", "version": "xcp3070" }, { "model": "ontap select deploy", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "openssh", "scope": "lte", "trust": 1.0, "vendor": "openbsd", "version": "7.9" }, { "model": "m12-2", "scope": "lt", "trust": 1.0, "vendor": "fujitsu", "version": "xcp2361" }, { "model": "cloud backup", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "enterprise linux server tus", "scope": "eq", "trust": 1.0, "vendor": "redhat", "version": "8.6" }, { "model": "m10-1", "scope": "lt", "trust": 1.0, "vendor": "fujitsu", "version": "xcp2361" }, { "model": "element software", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "m10-4s", "scope": "lt", "trust": 1.0, "vendor": "fujitsu", "version": "xcp3070" }, { "model": "ubuntu linux", "scope": "eq", "trust": 1.0, "vendor": "canonical", "version": "18.10" }, { "model": "enterprise linux server tus", "scope": "eq", "trust": 1.0, "vendor": "redhat", "version": "8.2" }, { "model": "m12-2", "scope": "lt", "trust": 
1.0, "vendor": "fujitsu", "version": "xcp3070" }, { "model": "scalance x204rna", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "3.2.7" }, { "model": "m12-2s", "scope": "lt", "trust": 1.0, "vendor": "fujitsu", "version": "xcp2361" }, { "model": "m12-1", "scope": "lt", "trust": 1.0, "vendor": "fujitsu", "version": "xcp2361" }, { "model": "linux", "scope": "eq", "trust": 1.0, "vendor": "debian", "version": "9.0" }, { "model": "enterprise linux", "scope": "eq", "trust": 1.0, "vendor": "redhat", "version": "8.0" }, { "model": "m10-1", "scope": "lt", "trust": 1.0, "vendor": "fujitsu", "version": "xcp3070" }, { "model": "linux", "scope": "eq", "trust": 1.0, "vendor": "debian", "version": "8.0" }, { "model": "ubuntu linux", "scope": "eq", "trust": 1.0, "vendor": "canonical", "version": "16.04" }, { "model": "storage automation store", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "enterprise linux server aus", "scope": "eq", "trust": 1.0, "vendor": "redhat", "version": "8.4" }, { "model": "enterprise linux eus", "scope": "eq", "trust": 1.0, "vendor": "redhat", "version": "8.1" }, { "model": "ubuntu linux", "scope": "eq", "trust": 1.0, "vendor": "canonical", "version": "18.04" }, { "model": "enterprise linux eus", "scope": "eq", "trust": 1.0, "vendor": "redhat", "version": "8.4" }, { "model": "m12-2s", "scope": "lt", "trust": 1.0, "vendor": "fujitsu", "version": "xcp3070" }, { "model": "m12-1", "scope": "lt", "trust": 1.0, "vendor": "fujitsu", "version": "xcp3070" }, { "model": "ubuntu linux", "scope": "eq", "trust": 1.0, "vendor": "canonical", "version": "14.04" }, { "model": "enterprise linux", "scope": "eq", "trust": 1.0, "vendor": "redhat", "version": "7.0" }, { "model": "ubuntu", "scope": null, "trust": 0.8, "vendor": "canonical", "version": null }, { "model": "gnu/linux", "scope": null, "trust": 0.8, "vendor": "debian", "version": null }, { "model": "element software", "scope": null, "trust": 0.8, "vendor": "netapp", 
"version": null }, { "model": "cloud backup", "scope": null, "trust": 0.8, "vendor": "netapp", "version": null }, { "model": "ontap select deploy administration utility", "scope": null, "trust": 0.8, "vendor": "netapp", "version": null }, { "model": "steelstore cloud integrated storage", "scope": null, "trust": 0.8, "vendor": "netapp", "version": null }, { "model": "storage automation store", "scope": null, "trust": 0.8, "vendor": "netapp", "version": null }, { "model": "openssh", "scope": "eq", "trust": 0.8, "vendor": "openbsd", "version": "7.9" }, { "model": "winscp", "scope": null, "trust": 0.8, "vendor": "winscp", "version": null }, { "model": "enterprise linux", "scope": "eq", "trust": 0.3, "vendor": "redhat", "version": "7" }, { "model": "openssh", "scope": "eq", "trust": 0.3, "vendor": "openssh", "version": "7.9" }, { "model": "traffix sdc", "scope": "eq", "trust": 0.3, "vendor": "f5", "version": "5.1" }, { "model": "traffix sdc", "scope": "eq", "trust": 0.3, "vendor": "f5", "version": "5.0" }, { "model": "traffix sdc", "scope": "eq", "trust": 0.3, "vendor": "f5", "version": "4.4" } ], "sources": [ { "db": "BID", "id": "106531" }, { "db": "JVNDB", "id": "JVNDB-2018-013957" }, { "db": "NVD", "id": "CVE-2018-20685" } ] }, "configurations": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/configurations#", "children": { "@container": "@list" }, "cpe_match": { "@container": "@list" }, "data": { "@container": "@list" }, "nodes": { "@container": "@list" } }, "data": [ { "CVE_data_version": "4.0", "nodes": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:openbsd:openssh:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "7.9", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:winscp:winscp:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "5.13", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:netapp:cloud_backup:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": 
"cpe:2.3:a:netapp:element_software:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:storage_automation_store:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:ontap_select_deploy:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:steelstore_cloud_integrated_storage:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:debian:debian_linux:8.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:debian:debian_linux:9.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:canonical:ubuntu_linux:16.04:*:*:*:lts:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:canonical:ubuntu_linux:14.04:*:*:*:lts:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:canonical:ubuntu_linux:18.04:*:*:*:lts:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:canonical:ubuntu_linux:18.10:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux:7.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux:8.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_eus:8.1:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_eus:8.2:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_server_tus:8.2:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_server_aus:8.2:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_server_tus:8.4:*:*:*:*:*:*:*", 
"cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_eus:8.4:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_server_aus:8.4:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_server_aus:8.6:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_server_tus:8.6:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_eus:8.6:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:oracle:solaris:10:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:fujitsu:m10-1_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "xcp2361", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:fujitsu:m10-1:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:fujitsu:m10-4_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "xcp2361", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:fujitsu:m10-4:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:fujitsu:m10-4s_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "xcp2361", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:fujitsu:m10-4s:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { 
"children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:fujitsu:m12-1_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "xcp2361", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:fujitsu:m12-1:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:fujitsu:m12-2_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "xcp2361", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:fujitsu:m12-2:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:fujitsu:m12-2s_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "xcp2361", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:fujitsu:m12-2s:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:fujitsu:m10-1_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "xcp3070", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:fujitsu:m10-1:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:fujitsu:m10-4_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "xcp3070", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:fujitsu:m10-4:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, 
{ "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:fujitsu:m10-4s_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "xcp3070", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:fujitsu:m10-4s:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:fujitsu:m12-1_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "xcp3070", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:fujitsu:m12-1:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:fujitsu:m12-2_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "xcp3070", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:fujitsu:m12-2:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:fujitsu:m12-2s_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "xcp3070", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:fujitsu:m12-2s:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": 
"AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:scalance_x204rna_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "3.2.7", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:scalance_x204rna:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:scalance_x204rna_eec_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "3.2.7", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:scalance_x204rna_eec:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" } ] } ], "sources": [ { "db": "NVD", "id": "CVE-2018-20685" } ] }, "credits": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/credits#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Red Hat,Harry Sintonen,Gentoo", "sources": [ { "db": "CNNVD", "id": "CNNVD-201901-347" } ], "trust": 0.6 }, "cve": "CVE-2018-20685", "cvss": { "@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2" }, "cvssV3": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, "severity": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": "https://www.variotdbs.pl/ref/cvss/severity" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "cvssV2": [ { "acInsufInfo": false, "accessComplexity": "HIGH", "accessVector": 
"NETWORK", "authentication": "NONE", "author": "NVD", "availabilityImpact": "NONE", "baseScore": 2.6, "confidentialityImpact": "NONE", "exploitabilityScore": 4.9, "impactScore": 2.9, "integrityImpact": "PARTIAL", "obtainAllPrivilege": false, "obtainOtherPrivilege": false, "obtainUserPrivilege": false, "severity": "LOW", "trust": 1.0, "userInteractionRequired": true, "vectorString": "AV:N/AC:H/Au:N/C:N/I:P/A:N", "version": "2.0" }, { "acInsufInfo": null, "accessComplexity": "High", "accessVector": "Network", "authentication": "None", "author": "NVD", "availabilityImpact": "None", "baseScore": 2.6, "confidentialityImpact": "None", "exploitabilityScore": null, "id": "CVE-2018-20685", "impactScore": null, "integrityImpact": "Partial", "obtainAllPrivilege": null, "obtainOtherPrivilege": null, "obtainUserPrivilege": null, "severity": "Low", "trust": 0.9, "userInteractionRequired": null, "vectorString": "AV:N/AC:H/Au:N/C:N/I:P/A:N", "version": "2.0" } ], "cvssV3": [ { "attackComplexity": "HIGH", "attackVector": "NETWORK", "author": "NVD", "availabilityImpact": "NONE", "baseScore": 5.3, "baseSeverity": "MEDIUM", "confidentialityImpact": "NONE", "exploitabilityScore": 1.6, "impactScore": 3.6, "integrityImpact": "HIGH", "privilegesRequired": "NONE", "scope": "UNCHANGED", "trust": 1.0, "userInteraction": "REQUIRED", "vectorString": "CVSS:3.1/AV:N/AC:H/PR:N/UI:R/S:U/C:N/I:H/A:N", "version": "3.1" }, { "attackComplexity": "High", "attackVector": "Network", "author": "NVD", "availabilityImpact": "None", "baseScore": 5.3, "baseSeverity": "Medium", "confidentialityImpact": "None", "exploitabilityScore": null, "id": "CVE-2018-20685", "impactScore": null, "integrityImpact": "High", "privilegesRequired": "None", "scope": "Unchanged", "trust": 0.8, "userInteraction": "Required", "vectorString": "CVSS:3.0/AV:N/AC:H/PR:N/UI:R/S:U/C:N/I:H/A:N", "version": "3.0" } ], "severity": [ { "author": "NVD", "id": "CVE-2018-20685", "trust": 1.8, "value": "MEDIUM" }, { "author": "CNNVD", "id": 
"CNNVD-201901-347", "trust": 0.6, "value": "MEDIUM" }, { "author": "VULMON", "id": "CVE-2018-20685", "trust": 0.1, "value": "LOW" } ] } ], "sources": [ { "db": "VULMON", "id": "CVE-2018-20685" }, { "db": "JVNDB", "id": "JVNDB-2018-013957" }, { "db": "NVD", "id": "CVE-2018-20685" }, { "db": "CNNVD", "id": "CNNVD-201901-347" } ] }, "description": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "In OpenSSH 7.9, scp.c in the scp client allows remote SSH servers to bypass intended access restrictions via the filename of . or an empty filename. The impact is modifying the permissions of the target directory on the client side. OpenSSH Contains an access control vulnerability.Information may be tampered with. OpenSSH is prone to an access-bypass vulnerability. \nAn attacker can exploit this issue to bypass certain security restrictions and perform unauthorized actions; this may aid in launching further attacks. \nOpenSSH version 7.9 is vulnerable. ==========================================================================\nUbuntu Security Notice USN-3885-1\nFebruary 07, 2019\n\nopenssh vulnerabilities\n==========================================================================\n\nA security issue affects these releases of Ubuntu and its derivatives:\n\n- Ubuntu 18.10\n- Ubuntu 18.04 LTS\n- Ubuntu 16.04 LTS\n- Ubuntu 14.04 LTS\n\nSummary:\n\nSeveral security issues were fixed in OpenSSH. \n\nUpdate instructions:\n\nThe problem can be corrected by updating your system to the following\npackage versions:\n\nUbuntu 18.10:\n openssh-client 1:7.7p1-4ubuntu0.2\n\nUbuntu 18.04 LTS:\n openssh-client 1:7.6p1-4ubuntu0.2\n\nUbuntu 16.04 LTS:\n openssh-client 1:7.2p2-4ubuntu2.7\n\nUbuntu 14.04 LTS:\n openssh-client 1:6.6p1-2ubuntu2.12\n\nIn general, a standard system update will make all the necessary changes. 
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\nGentoo Linux Security Advisory GLSA 201903-16\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n https://security.gentoo.org/\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\n Severity: Normal\n Title: OpenSSH: Multiple vulnerabilities\n Date: March 20, 2019\n Bugs: #675520, #675522\n ID: 201903-16\n\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\nSynopsis\n========\n\nMultiple vulnerabilities have been found in OpenSSH, the worst of which\ncould allow a remote attacker to gain unauthorized access. \n\nAffected packages\n=================\n\n -------------------------------------------------------------------\n Package / Vulnerable / Unaffected\n -------------------------------------------------------------------\n 1 net-misc/openssh \u003c 7.9_p1-r4 \u003e= 7.9_p1-r4 \n\nDescription\n===========\n\nMultiple vulnerabilities have been discovered in OpenSSH. Please review\nthe CVE identifiers referenced below for details. \n\nWorkaround\n==========\n\nThere is no known workaround at this time. \n\nResolution\n==========\n\nAll OpenSSH users should upgrade to the latest version:\n\n # emerge --sync\n # emerge --ask --oneshot --verbose \"\u003e=net-misc/openssh-7.9_p1-r4\"\n\nReferences\n==========\n\n[ 1 ] CVE-2018-20685\n https://nvd.nist.gov/vuln/detail/CVE-2018-20685\n[ 2 ] CVE-2019-6109\n https://nvd.nist.gov/vuln/detail/CVE-2019-6109\n[ 3 ] CVE-2019-6110\n https://nvd.nist.gov/vuln/detail/CVE-2019-6110\n[ 4 ] CVE-2019-6111\n https://nvd.nist.gov/vuln/detail/CVE-2019-6111\n\nAvailability\n============\n\nThis GLSA and any updates to it are available for viewing at\nthe Gentoo Security Website:\n\n https://security.gentoo.org/glsa/201903-16\n\nConcerns?\n=========\n\nSecurity is a primary focus of Gentoo Linux and ensuring the\nconfidentiality and security of our users\u0027 machines is of utmost\nimportance to us. 
Any security concerns should be addressed to\nsecurity@gentoo.org or alternatively, you may file a bug at\nhttps://bugs.gentoo.org. \n\nLicense\n=======\n\nCopyright 2019 Gentoo Foundation, Inc; referenced text\nbelongs to its owner(s). \n\nThe contents of this document are licensed under the\nCreative Commons - Attribution / Share Alike license. \n\nhttps://creativecommons.org/licenses/by-sa/2.5\n. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n=====================================================================\n Red Hat Security Advisory\n\nSynopsis: Moderate: openssh security, bug fix, and enhancement update\nAdvisory ID: RHSA-2019:3702-01\nProduct: Red Hat Enterprise Linux\nAdvisory URL: https://access.redhat.com/errata/RHSA-2019:3702\nIssue date: 2019-11-05\nCVE Names: CVE-2018-20685 CVE-2019-6109 CVE-2019-6111 \n=====================================================================\n\n1. Summary:\n\nAn update for openssh is now available for Red Hat Enterprise Linux 8. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. Relevant releases/architectures:\n\nRed Hat Enterprise Linux AppStream (v. 8) - aarch64, ppc64le, s390x, x86_64\nRed Hat Enterprise Linux BaseOS (v. 8) - aarch64, ppc64le, s390x, x86_64\n\n3. Description:\n\nOpenSSH is an SSH protocol implementation supported by a number of Linux,\nUNIX, and similar operating systems. It includes the core files necessary\nfor both the OpenSSH client and server. \n\nThe following packages have been upgraded to a later upstream version:\nopenssh (8.0p1). \n\nAdditional Changes:\n\nFor detailed information on changes in this release, see the Red Hat\nEnterprise Linux 8.1 Release Notes linked from the References section. 
Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\nAfter installing this update, the OpenSSH server daemon (sshd) will be\nrestarted automatically. \n1686065 - SSH connections get closed when time-based rekeyring is used and ClientAliveMaxCount=0\n1691045 - Rebase OpenSSH to latest release (8.0p1?)\n1707485 - Use high-level API to do signatures\n1712436 - MD5 is used when writing password protected PEM\n1732424 - ssh-keygen -A fails in FIPS mode because of DSA key\n1732449 - rsa-sha2-*-cert-v01@openssh.com host key types are ignored in FIPS despite being in the policy\n\n6. Package List:\n\nRed Hat Enterprise Linux AppStream (v. 8):\n\naarch64:\nopenssh-askpass-8.0p1-3.el8.aarch64.rpm\nopenssh-askpass-debuginfo-8.0p1-3.el8.aarch64.rpm\nopenssh-cavs-debuginfo-8.0p1-3.el8.aarch64.rpm\nopenssh-clients-debuginfo-8.0p1-3.el8.aarch64.rpm\nopenssh-debuginfo-8.0p1-3.el8.aarch64.rpm\nopenssh-debugsource-8.0p1-3.el8.aarch64.rpm\nopenssh-keycat-debuginfo-8.0p1-3.el8.aarch64.rpm\nopenssh-ldap-debuginfo-8.0p1-3.el8.aarch64.rpm\nopenssh-server-debuginfo-8.0p1-3.el8.aarch64.rpm\npam_ssh_agent_auth-debuginfo-0.10.3-7.3.el8.aarch64.rpm\n\nppc64le:\nopenssh-askpass-8.0p1-3.el8.ppc64le.rpm\nopenssh-askpass-debuginfo-8.0p1-3.el8.ppc64le.rpm\nopenssh-cavs-debuginfo-8.0p1-3.el8.ppc64le.rpm\nopenssh-clients-debuginfo-8.0p1-3.el8.ppc64le.rpm\nopenssh-debuginfo-8.0p1-3.el8.ppc64le.rpm\nopenssh-debugsource-8.0p1-3.el8.ppc64le.rpm\nopenssh-keycat-debuginfo-8.0p1-3.el8.ppc64le.rpm\nopenssh-ldap-debuginfo-8.0p1-3.el8.ppc64le.rpm\nopenssh-server-debuginfo-8.0p1-3.el8.ppc64le.rpm\npam_ssh_agent_auth-debuginfo-0.10.3-7.3.el8.ppc64le.rpm\n\ns390x:\nopenssh-askpass-8.0p1-3.el8.s390x.rpm\nopenssh-askpass-debuginfo-8.0p1-3.el8.s390x.rpm\nopenssh-cavs-debuginfo-8.0p1-3.el8.s390x.rpm\nopenssh-clients-debuginfo-8.0p1-3.el8.s390x.rpm\nopenssh-debuginfo-8.0p1-3.el8.s390x.rpm\nopenssh-debugsou
rce-8.0p1-3.el8.s390x.rpm\nopenssh-keycat-debuginfo-8.0p1-3.el8.s390x.rpm\nopenssh-ldap-debuginfo-8.0p1-3.el8.s390x.rpm\nopenssh-server-debuginfo-8.0p1-3.el8.s390x.rpm\npam_ssh_agent_auth-debuginfo-0.10.3-7.3.el8.s390x.rpm\n\nx86_64:\nopenssh-askpass-8.0p1-3.el8.x86_64.rpm\nopenssh-askpass-debuginfo-8.0p1-3.el8.x86_64.rpm\nopenssh-cavs-debuginfo-8.0p1-3.el8.x86_64.rpm\nopenssh-clients-debuginfo-8.0p1-3.el8.x86_64.rpm\nopenssh-debuginfo-8.0p1-3.el8.x86_64.rpm\nopenssh-debugsource-8.0p1-3.el8.x86_64.rpm\nopenssh-keycat-debuginfo-8.0p1-3.el8.x86_64.rpm\nopenssh-ldap-debuginfo-8.0p1-3.el8.x86_64.rpm\nopenssh-server-debuginfo-8.0p1-3.el8.x86_64.rpm\npam_ssh_agent_auth-debuginfo-0.10.3-7.3.el8.x86_64.rpm\n\nRed Hat Enterprise Linux BaseOS (v. 8):\n\nSource:\nopenssh-8.0p1-3.el8.src.rpm\n\naarch64:\nopenssh-8.0p1-3.el8.aarch64.rpm\nopenssh-askpass-debuginfo-8.0p1-3.el8.aarch64.rpm\nopenssh-cavs-8.0p1-3.el8.aarch64.rpm\nopenssh-cavs-debuginfo-8.0p1-3.el8.aarch64.rpm\nopenssh-clients-8.0p1-3.el8.aarch64.rpm\nopenssh-clients-debuginfo-8.0p1-3.el8.aarch64.rpm\nopenssh-debuginfo-8.0p1-3.el8.aarch64.rpm\nopenssh-debugsource-8.0p1-3.el8.aarch64.rpm\nopenssh-keycat-8.0p1-3.el8.aarch64.rpm\nopenssh-keycat-debuginfo-8.0p1-3.el8.aarch64.rpm\nopenssh-ldap-8.0p1-3.el8.aarch64.rpm\nopenssh-ldap-debuginfo-8.0p1-3.el8.aarch64.rpm\nopenssh-server-8.0p1-3.el8.aarch64.rpm\nopenssh-server-debuginfo-8.0p1-3.el8.aarch64.rpm\npam_ssh_agent_auth-0.10.3-7.3.el8.aarch64.rpm\npam_ssh_agent_auth-debuginfo-0.10.3-7.3.el8.aarch64.rpm\n\nppc64le:\nopenssh-8.0p1-3.el8.ppc64le.rpm\nopenssh-askpass-debuginfo-8.0p1-3.el8.ppc64le.rpm\nopenssh-cavs-8.0p1-3.el8.ppc64le.rpm\nopenssh-cavs-debuginfo-8.0p1-3.el8.ppc64le.rpm\nopenssh-clients-8.0p1-3.el8.ppc64le.rpm\nopenssh-clients-debuginfo-8.0p1-3.el8.ppc64le.rpm\nopenssh-debuginfo-8.0p1-3.el8.ppc64le.rpm\nopenssh-debugsource-8.0p1-3.el8.ppc64le.rpm\nopenssh-keycat-8.0p1-3.el8.ppc64le.rpm\nopenssh-keycat-debuginfo-8.0p1-3.el8.ppc64le.rpm\nopenssh-ldap-8.0p1-3.el8
.ppc64le.rpm\nopenssh-ldap-debuginfo-8.0p1-3.el8.ppc64le.rpm\nopenssh-server-8.0p1-3.el8.ppc64le.rpm\nopenssh-server-debuginfo-8.0p1-3.el8.ppc64le.rpm\npam_ssh_agent_auth-0.10.3-7.3.el8.ppc64le.rpm\npam_ssh_agent_auth-debuginfo-0.10.3-7.3.el8.ppc64le.rpm\n\ns390x:\nopenssh-8.0p1-3.el8.s390x.rpm\nopenssh-askpass-debuginfo-8.0p1-3.el8.s390x.rpm\nopenssh-cavs-8.0p1-3.el8.s390x.rpm\nopenssh-cavs-debuginfo-8.0p1-3.el8.s390x.rpm\nopenssh-clients-8.0p1-3.el8.s390x.rpm\nopenssh-clients-debuginfo-8.0p1-3.el8.s390x.rpm\nopenssh-debuginfo-8.0p1-3.el8.s390x.rpm\nopenssh-debugsource-8.0p1-3.el8.s390x.rpm\nopenssh-keycat-8.0p1-3.el8.s390x.rpm\nopenssh-keycat-debuginfo-8.0p1-3.el8.s390x.rpm\nopenssh-ldap-8.0p1-3.el8.s390x.rpm\nopenssh-ldap-debuginfo-8.0p1-3.el8.s390x.rpm\nopenssh-server-8.0p1-3.el8.s390x.rpm\nopenssh-server-debuginfo-8.0p1-3.el8.s390x.rpm\npam_ssh_agent_auth-0.10.3-7.3.el8.s390x.rpm\npam_ssh_agent_auth-debuginfo-0.10.3-7.3.el8.s390x.rpm\n\nx86_64:\nopenssh-8.0p1-3.el8.x86_64.rpm\nopenssh-askpass-debuginfo-8.0p1-3.el8.x86_64.rpm\nopenssh-cavs-8.0p1-3.el8.x86_64.rpm\nopenssh-cavs-debuginfo-8.0p1-3.el8.x86_64.rpm\nopenssh-clients-8.0p1-3.el8.x86_64.rpm\nopenssh-clients-debuginfo-8.0p1-3.el8.x86_64.rpm\nopenssh-debuginfo-8.0p1-3.el8.x86_64.rpm\nopenssh-debugsource-8.0p1-3.el8.x86_64.rpm\nopenssh-keycat-8.0p1-3.el8.x86_64.rpm\nopenssh-keycat-debuginfo-8.0p1-3.el8.x86_64.rpm\nopenssh-ldap-8.0p1-3.el8.x86_64.rpm\nopenssh-ldap-debuginfo-8.0p1-3.el8.x86_64.rpm\nopenssh-server-8.0p1-3.el8.x86_64.rpm\nopenssh-server-debuginfo-8.0p1-3.el8.x86_64.rpm\npam_ssh_agent_auth-0.10.3-7.3.el8.x86_64.rpm\npam_ssh_agent_auth-debuginfo-0.10.3-7.3.el8.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. 
References:\n\nhttps://access.redhat.com/security/cve/CVE-2018-20685\nhttps://access.redhat.com/security/cve/CVE-2019-6109\nhttps://access.redhat.com/security/cve/CVE-2019-6111\nhttps://access.redhat.com/security/updates/classification/#moderate\nhttps://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.1_release_notes/\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2019 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBXcHzKNzjgjWX9erEAQiytQ/6Apphov2V0QmnXA+KO3ZZKBPXtgKv8Sv1\ndPtXhTC+Keq4yX9/bXlIuyk6BUsMeaiIMlL5bSSKtq2I7rVxwubTcPX4rD+pQvx8\nArNJgn7U2/3xqwc0R8dNXx6o8vB1M6jXDtu8fKJOxW48evDJf6gE4gX2KUM9yxR2\nMhCoHVkLp9a5f0T11yFPI11H0P8gXXQgboAkdt82Ui35T4tD8RndVyPCsllN2c/X\nQCCbvZ9e8OLJJoxsOryLcw8tpQHXK2AJMXWv0Us99kQtbaBULWWahhrg/tftLxtT\npILFBaB/RsmGg1O6OkxJ2CuKl6ATC2Wlj/Z7uYPrS7MQDn+fXkH2gfcjb4Z4rqIL\nIyKbUpsyFEAaV5rJUeRaS7dGfuQldQbS96P8lUpCcOXPbYD8FgTrW2q3NjOKgYMU\n+gh2xPwmlRm+iYfmedPoR2+bTWNYv8JS+Cp/fZF4IFx2EJPQcxKLYshNKgcfkNkR\nrIZ4brUI79p84H01TcTh4mFAbR63Y+c36UAI3/fM/W/RkZn/PdoJtpfwg/tjOYZH\nrt9kL7SfAEhjHNtBuJGNol6e124srS6300hnfFovAr6llDOcYlrh3ZgVZjVrn6E8\nTZhyZ84TGMOqykfH7B9XkJH82X+x3rd2m0ovCPq+Ly62BasdXVd0C2snzbx8OAM8\nI+am8dhVlyM=\n=iPw4\n-----END PGP SIGNATURE-----\n\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://www.redhat.com/mailman/listinfo/rhsa-announce\n. scp client multiple vulnerabilities\n===================================\nThe latest version of this advisory is available at:\nhttps://sintonen.fi/advisories/scp-client-multiple-vulnerabilities.txt\n\n\nOverview\n--------\n\nSCP clients from multiple vendors are susceptible to a malicious scp server performing\nunauthorized changes to target directory and/or client output manipulation. 
\n\n\nDescription\n-----------\n\nMany scp clients fail to verify if the objects returned by the scp server match those\nit asked for. This issue dates back to 1983 and rcp, on which scp is based. \nFinally, two vulnerabilities in clients may allow server to spoof the client output. \n\n\nDetails\n-------\n\nThe discovered vulnerabilities, described in more detail below, enables the attack\ndescribed here in brief. \n\n1. The attacker controlled server or Man-in-the-Middle(*) attack drops .bash_aliases\n file to victim\u0027s home directory when the victim performs scp operation from the\n server. The transfer of extra files is hidden by sending ANSI control sequences\n via stderr. For example:\n\n user@local:~$ scp user@remote:readme.txt . \n readme.txt 100% 494 1.6KB/s 00:00\n user@local:~$\n\n2. Once the victim launches a new shell, the malicious commands in .bash_aliases get\n executed. \n\n\n*) Man-in-the-Middle attack does require the victim to accept the wrong host\n fingerprint. \n\n\nVulnerabilities\n---------------\n\n1. \n\n\n2. CWE-20: scp client missing received object name validation [CVE-2019-6111]\n\nDue to the scp implementation being derived from 1983 rcp [1], the server chooses which\nfiles/directories are sent to the client. However, scp client only perform cursory\nvalidation of the object name returned (only directory traversal attacks are prevented). \nA malicious scp server can overwrite arbitrary files in the scp client target directory. \nIf recursive operation (-r) is performed, the server can manipulate subdirectories\nas well (for example overwrite .ssh/authorized_keys). \n\nThe same vulnerability in WinSCP is known as CVE-2018-20684. \n\n\n3. CWE-451: scp client spoofing via object name [CVE-2019-6109]\n\nDue to missing character encoding in the progress display, the object name can be used\nto manipulate the client output, for example to employ ANSI codes to hide additional\nfiles being transferred. \n\n\n4. 
CWE-451: scp client spoofing via stderr [CVE-2019-6110]\n\nDue to accepting and displaying arbitrary stderr output from the scp server, a\nmalicious server can manipulate the client output, for example to employ ANSI codes\nto hide additional files being transferred. \n\n\nProof-of-Concept\n----------------\n\nProof of concept malicious scp server will be released at a later date. \n\n\nVulnerable versions\n-------------------\n\nThe following software packages have some or all vulnerabilities:\n\n ver #1 #2 #3 #4\nOpenSSH scp \u003c=7.9 x x x x\nPuTTY PSCP ? - - x x\nWinSCP scp mode \u003c=5.13 - x - -\n\nTectia SSH scpg3 is not affected since it exclusively uses sftp protocol. \n\n\nMitigation\n----------\n\n1. OpenSSH\n\n1.1 Switch to sftp if possible\n\n1.2 Alternatively apply the following patch to harden scp against most server-side\n manipulation attempts: https://sintonen.fi/advisories/scp-name-validator.patch\n\n NOTE: This patch may cause problems if the the remote and local shells don\u0027t\n agree on the way glob() pattern matching works. YMMV. \n\n2. PuTTY\n\n2.1 No fix is available yet\n\n3. WinSCP\n\n3.1. Upgrade to WinSCP 5.14 or later\n\n\n\nSimilar or prior work\n---------------------\n\n1. CVE-2000-0992 - scp overwrites arbitrary files\n\n\nReferences\n----------\n\n1. https://www.jeffgeerling.com/blog/brief-history-ssh-and-remote-access\n\n\nCredits\n-------\n\nThe vulnerability was discovered by Harry Sintonen / F-Secure Corporation. 
\n\n\nTimeline\n--------\n\n2018.08.08 initial discovery of vulnerabilities #1 and #2\n2018.08.09 reported vulnerabilities #1 and #2 to OpenSSH\n2018.08.10 OpenSSH acknowledged the vulnerabilities\n2018.08.14 discovered \u0026 reported vulnerability #3 to OpenSSH\n2018.08.15 discovered \u0026 reported vulnerability #4 to OpenSSH\n2018.08.30 reported PSCP vulnerabilities (#3 and #4) to PuTTY developers\n2018.08.31 reported WinSCP vulnerability (#2) to WinSCP developers\n2018.09.04 WinSCP developers reported the vulnerability #2 fixed\n2018.11.12 requested a status update from OpenSSH\n2018.11.16 OpenSSH fixed vulnerability #1\n2019.01.07 requested a status update from OpenSSH\n2019.01.08 requested CVE assignments from MITRE\n2019.01.10 received CVE assignments from MITRE\n2019.01.11 public disclosure of the advisory\n2019.01.14 added a warning about the potential issues caused by the patch\n\n\n. All the vulnerabilities\nare in found in the scp client implementing the SCP protocol. \n The check added in this version can lead to regression if the client and\n the server have differences in wildcard expansion rules. If the server is\n trusted for that purpose, the check can be disabled with a new -T option to\n the scp client. \n\nFor the stable distribution (stretch), these problems have been fixed in\nversion 1:7.4p1-10+deb9u5. 
\n\nFor the detailed security status of openssh please refer to\nits security tracker page at:\nhttps://security-tracker.debian.org/tracker/openssh\n\nFurther information about Debian Security Advisories, how to apply\nthese updates to your system and frequently asked questions can be\nfound at: https://www.debian.org/security/\n\nMailing list: debian-security-announce@lists.debian.org\n-----BEGIN PGP SIGNATURE-----\n\niQEzBAEBCgAdFiEE8vi34Qgfo83x35gF3rYcyPpXRFsFAlxe0w0ACgkQ3rYcyPpX\nRFs85AgA0GrSHO4Qf5FVsE3oXa+nMkZ4U6pbOA9dHotX54DEyNuIJrOsOv01cFxQ\nt2Z6uDkZptmHZT4uSWg2xIgMvpkGo9906ziZfHc0LTuHl8j++7cCDIDGZBm/iZaX\nueQfl85gHDpte41JvUtpSBAwk1Bic7ltLUPDIGEiq6nQboxHIzsU7ULVb1l0wNxF\nsEFDPWGBS01HTa+QWgQaG/wbEhMRDcVz1Ck7dqpT2soQRohDWxU01j14q1EKe9O9\nGHiWECvFSHBkkI/v8lNfSWnOWYa/+Aknri0CpjPc/bqh2Yx9rgp/Q5+FJ/FxJjmC\nbHFd+tbxB1LxEO96zKguYpPIzw7Kcw==\n=5Fd8\n-----END PGP SIGNATURE-----\n", "sources": [ { "db": "NVD", "id": "CVE-2018-20685" }, { "db": "JVNDB", "id": "JVNDB-2018-013957" }, { "db": "BID", "id": "106531" }, { "db": "VULMON", "id": "CVE-2018-20685" }, { "db": "PACKETSTORM", "id": "151577" }, { "db": "PACKETSTORM", "id": "152154" }, { "db": "PACKETSTORM", "id": "158639" }, { "db": "PACKETSTORM", "id": "155158" }, { "db": "PACKETSTORM", "id": "151175" }, { "db": "PACKETSTORM", "id": "151601" } ], "trust": 2.52 }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "NVD", "id": "CVE-2018-20685", "trust": 3.4 }, { "db": "BID", "id": "106531", "trust": 2.0 }, { "db": "SIEMENS", "id": "SSA-412672", "trust": 1.7 }, { "db": "JVNDB", "id": "JVNDB-2018-013957", "trust": 0.8 }, { "db": "PACKETSTORM", "id": "152154", "trust": 0.7 }, { "db": "PACKETSTORM", "id": "158639", "trust": 0.7 }, { "db": "AUSCERT", "id": "ESB-2020.1280.2", "trust": 0.6 }, { "db": "AUSCERT", "id": 
"ESB-2020.1410.2", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.5087", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2020.1280", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2019.0410.3", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2019.3795", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2020.1410", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2019.2671", "trust": 0.6 }, { "db": "CNNVD", "id": "CNNVD-201901-347", "trust": 0.6 }, { "db": "ICS CERT", "id": "ICSA-22-349-21", "trust": 0.1 }, { "db": "VULMON", "id": "CVE-2018-20685", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "151577", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "155158", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "151175", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "151601", "trust": 0.1 } ], "sources": [ { "db": "VULMON", "id": "CVE-2018-20685" }, { "db": "BID", "id": "106531" }, { "db": "JVNDB", "id": "JVNDB-2018-013957" }, { "db": "PACKETSTORM", "id": "151577" }, { "db": "PACKETSTORM", "id": "152154" }, { "db": "PACKETSTORM", "id": "158639" }, { "db": "PACKETSTORM", "id": "155158" }, { "db": "PACKETSTORM", "id": "151175" }, { "db": "PACKETSTORM", "id": "151601" }, { "db": "NVD", "id": "CVE-2018-20685" }, { "db": "CNNVD", "id": "CNNVD-201901-347" } ] }, "id": "VAR-201901-1500", "iot": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": true, "sources": [ { "db": "VARIoT devices database", "id": null } ], "trust": 0.6178670799999999 }, "last_update_date": "2023-12-18T11:43:08.750000Z", "patch": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/patch#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "title": "DSA-4387", "trust": 0.8, "url": "https://www.debian.org/security/2019/dsa-4387" }, { "title": "upstream: disallow empty incoming filename or 
ones that refer to the current directory", "trust": 0.8, "url": "https://github.com/openssh/openssh-portable/commit/6010c0303a422a9c5fa8860c061bf7105eb7f8b2" }, { "title": "NTAP-20190215-0001", "trust": 0.8, "url": "https://security.netapp.com/advisory/ntap-20190215-0001/" }, { "title": "Diff for /src/usr.bin/ssh/scp.c between version 1.197 and 1.198", "trust": 0.8, "url": "https://cvsweb.openbsd.org/cgi-bin/cvsweb/src/usr.bin/ssh/scp.c.diff?r1=1.197\u0026r2=1.198\u0026f=h" }, { "title": "USN-3885-1", "trust": 0.8, "url": "https://usn.ubuntu.com/3885-1/" }, { "title": "Top Page", "trust": 0.8, "url": "https://winscp.net/eng/index.php" }, { "title": "OpenSSH scp Repair measures for client security vulnerabilities", "trust": 0.6, "url": "http://123.124.177.30/web/xxk/bdxqbyid.tag?id=88522" }, { "title": "The Register", "trust": 0.2, "url": "https://www.theregister.co.uk/2019/01/15/scp_vulnerability/" }, { "title": "Red Hat: Moderate: openssh security, bug fix, and enhancement update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20193702 - security advisory" }, { "title": "Ubuntu Security Notice: openssh vulnerabilities", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ubuntu_security_notice\u0026qid=usn-3885-1" }, { "title": "Debian CVElist Bug Report Logs: openssh-client: scp can send arbitrary control characters / escape sequences to the terminal (CVE-2019-6109)", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=debian_cvelist_bugreportlogs\u0026qid=dffe92fd93b8f745f5f15bc2f29dc935" }, { "title": "Arch Linux Issues: ", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_issues\u0026qid=cve-2018-20685" }, { "title": "Arch Linux Advisories: [ASA-201904-11] openssh: multiple issues", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_advisories\u0026qid=asa-201904-11" }, { "title": "Debian CVElist Bug Report Logs: netkit-rsh: CVE-2019-7282 
CVE-2019-7283", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=debian_cvelist_bugreportlogs\u0026qid=a043554ad34dcb6b0dc285dc8ea3ce0d" }, { "title": "Debian CVElist Bug Report Logs: CVE-2019-6111 not fixed, file transfer of unwanted files by malicious SSH server still possible", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=debian_cvelist_bugreportlogs\u0026qid=74b791ca4fdf54c27d2b50ef6845ef8e" }, { "title": "Debian CVElist Bug Report Logs: openssh: CVE-2018-20685: scp.c in the scp client allows remote SSH servers to bypass intended access restrictions", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=debian_cvelist_bugreportlogs\u0026qid=8394bb17731a99ef76b185cbc70acfa3" }, { "title": "Amazon Linux AMI: ALAS-2019-1313", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux_ami\u0026qid=alas-2019-1313" }, { "title": "Amazon Linux 2: ALAS2-2019-1216", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=alas2-2019-1216" }, { "title": "IBM: IBM Security Bulletin: Vulnerabilities in OpenSSH affect AIX (CVE-2018-20685 CVE-2018-6109 CVE-2018-6110 CVE-2018-6111) Security Bulletin", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=50a54c2fb43b489f64442dcf4f25bc3b" }, { "title": "IBM: IBM Security Bulletin: Vyatta 5600 vRouter Software Patches \u2013 Releases 1801-w and 1801-y", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=bf3f2299a8658b7cd3984c40e7060666" }, { "title": "IBM: Security Bulletin: Multiple vulnerabilities affect IBM Cloud Object Storage Systems (February 2020v1)", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=979e60202a29c3c55731e37f8ddc5a3b" }, { "title": "Siemens Security Advisories: Siemens Security Advisory", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=siemens_security_advisories\u0026qid=ec6577109e640dac19a6ddb978afe82d" }, 
{ "title": "", "trust": 0.1, "url": "https://github.com/live-hack-cve/cve-2018-20685 " }, { "title": "", "trust": 0.1, "url": "https://github.com/h4xrox/direct-admin-vulnerability-disclosure " }, { "title": "DC-4-Vulnhub-Walkthrough", "trust": 0.1, "url": "https://github.com/vshaliii/dc-4-vulnhub-walkthrough " }, { "title": "nmap", "trust": 0.1, "url": "https://github.com/devairdarolt/nmap " }, { "title": "github_aquasecurity_trivy", "trust": 0.1, "url": "https://github.com/back8/github_aquasecurity_trivy " }, { "title": "TrivyWeb", "trust": 0.1, "url": "https://github.com/korayagaya/trivyweb " }, { "title": "Funbox2-rookie", "trust": 0.1, "url": "https://github.com/vaishali1998/funbox2-rookie " }, { "title": "Vulnerability-Scanner-for-Containers", "trust": 0.1, "url": "https://github.com/t31m0/vulnerability-scanner-for-containers " }, { "title": "security", "trust": 0.1, "url": "https://github.com/umahari/security " }, { "title": "", "trust": 0.1, "url": "https://github.com/mohzeela/external-secret " }, { "title": "trivy", "trust": 0.1, "url": "https://github.com/simiyo/trivy " }, { "title": "trivy", "trust": 0.1, "url": "https://github.com/aquasecurity/trivy " }, { "title": "trivy", "trust": 0.1, "url": "https://github.com/knqyf263/trivy " }, { "title": "trivy", "trust": 0.1, "url": "https://github.com/siddharthraopotukuchi/trivy " }, { "title": "Basic-Pentesting-2-Vulnhub-Walkthrough", "trust": 0.1, "url": "https://github.com/vshaliii/basic-pentesting-2-vulnhub-walkthrough " }, { "title": "Basic-Pentesting-2", "trust": 0.1, "url": "https://github.com/vshaliii/basic-pentesting-2 " } ], "sources": [ { "db": "VULMON", "id": "CVE-2018-20685" }, { "db": "JVNDB", "id": "JVNDB-2018-013957" }, { "db": "CNNVD", "id": "CNNVD-201901-347" } ] }, "problemtype_data": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "problemtype": 
"CWE-863", "trust": 1.0 }, { "problemtype": "CWE-284", "trust": 0.8 } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2018-013957" }, { "db": "NVD", "id": "CVE-2018-20685" } ] }, "references": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/references#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 3.5, "url": "http://www.securityfocus.com/bid/106531" }, { "trust": 2.6, "url": "https://sintonen.fi/advisories/scp-client-multiple-vulnerabilities.txt" }, { "trust": 2.5, "url": "https://access.redhat.com/errata/rhsa-2019:3702" }, { "trust": 2.3, "url": "https://www.debian.org/security/2019/dsa-4387" }, { "trust": 2.0, "url": "https://github.com/openssh/openssh-portable/commit/6010c0303a422a9c5fa8860c061bf7105eb7f8b2" }, { "trust": 2.0, "url": "https://cvsweb.openbsd.org/cgi-bin/cvsweb/src/usr.bin/ssh/scp.c.diff?r1=1.197\u0026r2=1.198\u0026f=h" }, { "trust": 2.0, "url": "https://www.oracle.com/technetwork/security-advisory/cpuapr2019-5072813.html" }, { "trust": 1.8, "url": "https://usn.ubuntu.com/3885-1/" }, { "trust": 1.8, "url": "https://security.gentoo.org/glsa/201903-16" }, { "trust": 1.8, "url": "https://security.gentoo.org/glsa/202007-53" }, { "trust": 1.7, "url": "https://security.netapp.com/advisory/ntap-20190215-0001/" }, { "trust": 1.7, "url": "https://lists.debian.org/debian-lts-announce/2019/03/msg00030.html" }, { "trust": 1.7, "url": "https://www.oracle.com/technetwork/security-advisory/cpuoct2019-5072832.html" }, { "trust": 1.7, "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-412672.pdf" }, { "trust": 1.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-20685" }, { "trust": 1.4, "url": "https://bugzilla.redhat.com/show_bug.cgi?id=cve-2018-20685" }, { "trust": 1.0, "url": "https://access.redhat.com/security/cve/cve-2018-20685" }, { "trust": 0.9, "url": "http://www.openssh.org/" }, { "trust": 0.9, "url": 
"https://bugzilla.redhat.com/show_bug.cgi?id=1665785" }, { "trust": 0.9, "url": "https://support.f5.com/csp/article/k11315080" }, { "trust": 0.8, "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2018-20685" }, { "trust": 0.6, "url": "http://www.ibm.com/support/docview.wss?uid=ibm10872060" }, { "trust": 0.6, "url": "https://www-01.ibm.com/support/docview.wss?uid=ibm10872060" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/75338" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2020.1280.2/" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2019.2671/" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/158639/gentoo-linux-security-advisory-202007-53.html" }, { "trust": 0.6, "url": "https://www-01.ibm.com/support/docview.wss?uid=ibm10882554" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/152154/gentoo-linux-security-advisory-201903-16.html" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2020.1410.2/" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.5087" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2020.1280/" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2019.3795/" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2020.1410/" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-6111" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-6109" }, { "trust": 0.2, "url": "https://creativecommons.org/licenses/by-sa/2.5" }, { "trust": 0.2, "url": "https://security.gentoo.org/" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-6110" }, { "trust": 0.2, "url": "https://bugs.gentoo.org." 
}, { "trust": 0.1, "url": "https://cwe.mitre.org/data/definitions/863.html" }, { "trust": 0.1, "url": "https://tools.cisco.com/security/center/viewalert.x?alertid=59473" }, { "trust": 0.1, "url": "https://nvd.nist.gov" }, { "trust": 0.1, "url": "https://www.cisa.gov/uscert/ics/advisories/icsa-22-349-21" }, { "trust": 0.1, "url": "https://launchpad.net/ubuntu/+source/openssh/1:6.6p1-2ubuntu2.12" }, { "trust": 0.1, "url": "https://launchpad.net/ubuntu/+source/openssh/1:7.2p2-4ubuntu2.7" }, { "trust": 0.1, "url": "https://launchpad.net/ubuntu/+source/openssh/1:7.7p1-4ubuntu0.2" }, { "trust": 0.1, "url": "https://usn.ubuntu.com/usn/usn-3885-1" }, { "trust": 0.1, "url": "https://launchpad.net/ubuntu/+source/openssh/1:7.6p1-4ubuntu0.2" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-0739" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-12437" }, { "trust": 0.1, "url": "https://www.redhat.com/mailman/listinfo/rhsa-announce" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.1_release_notes/" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-6111" }, { "trust": 0.1, "url": "https://access.redhat.com/security/updates/classification/#moderate" }, { "trust": 0.1, "url": "https://bugzilla.redhat.com/):" }, { "trust": 0.1, "url": "https://access.redhat.com/security/team/key/" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-6109" }, { "trust": 0.1, "url": "https://access.redhat.com/articles/11258" }, { "trust": 0.1, "url": "https://access.redhat.com/security/team/contact/" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-20684" }, { "trust": 0.1, "url": "https://sintonen.fi/advisories/scp-name-validator.patch" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2000-0992" }, { "trust": 0.1, "url": "https://www.jeffgeerling.com/blog/brief-history-ssh-and-remote-access" }, { "trust": 0.1, "url": 
"https://www.debian.org/security/faq" }, { "trust": 0.1, "url": "https://security-tracker.debian.org/tracker/openssh" }, { "trust": 0.1, "url": "https://www.debian.org/security/" } ], "sources": [ { "db": "VULMON", "id": "CVE-2018-20685" }, { "db": "BID", "id": "106531" }, { "db": "JVNDB", "id": "JVNDB-2018-013957" }, { "db": "PACKETSTORM", "id": "151577" }, { "db": "PACKETSTORM", "id": "152154" }, { "db": "PACKETSTORM", "id": "158639" }, { "db": "PACKETSTORM", "id": "155158" }, { "db": "PACKETSTORM", "id": "151175" }, { "db": "PACKETSTORM", "id": "151601" }, { "db": "NVD", "id": "CVE-2018-20685" }, { "db": "CNNVD", "id": "CNNVD-201901-347" } ] }, "sources": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "VULMON", "id": "CVE-2018-20685" }, { "db": "BID", "id": "106531" }, { "db": "JVNDB", "id": "JVNDB-2018-013957" }, { "db": "PACKETSTORM", "id": "151577" }, { "db": "PACKETSTORM", "id": "152154" }, { "db": "PACKETSTORM", "id": "158639" }, { "db": "PACKETSTORM", "id": "155158" }, { "db": "PACKETSTORM", "id": "151175" }, { "db": "PACKETSTORM", "id": "151601" }, { "db": "NVD", "id": "CVE-2018-20685" }, { "db": "CNNVD", "id": "CNNVD-201901-347" } ] }, "sources_release_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2019-01-10T00:00:00", "db": "VULMON", "id": "CVE-2018-20685" }, { "date": "2019-01-10T00:00:00", "db": "BID", "id": "106531" }, { "date": "2019-03-07T00:00:00", "db": "JVNDB", "id": "JVNDB-2018-013957" }, { "date": "2019-02-07T19:22:22", "db": "PACKETSTORM", "id": "151577" }, { "date": "2019-03-20T16:09:02", "db": "PACKETSTORM", "id": "152154" }, { "date": "2020-07-29T00:06:47", "db": "PACKETSTORM", "id": "158639" }, { "date": "2019-11-06T15:55:27", "db": "PACKETSTORM", "id": "155158" }, { "date": "2019-01-16T15:04:39", "db": "PACKETSTORM", "id": "151175" }, { "date": "2019-02-11T16:13:15", 
"db": "PACKETSTORM", "id": "151601" }, { "date": "2019-01-10T21:29:00.377000", "db": "NVD", "id": "CVE-2018-20685" }, { "date": "2019-01-11T00:00:00", "db": "CNNVD", "id": "CNNVD-201901-347" } ] }, "sources_update_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2023-02-23T00:00:00", "db": "VULMON", "id": "CVE-2018-20685" }, { "date": "2019-04-18T12:00:00", "db": "BID", "id": "106531" }, { "date": "2019-03-07T00:00:00", "db": "JVNDB", "id": "JVNDB-2018-013957" }, { "date": "2023-02-23T23:15:18.260000", "db": "NVD", "id": "CVE-2018-20685" }, { "date": "2022-12-14T00:00:00", "db": "CNNVD", "id": "CNNVD-201901-347" } ] }, "threat_type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/threat_type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "remote", "sources": [ { "db": "PACKETSTORM", "id": "151577" }, { "db": "PACKETSTORM", "id": "152154" }, { "db": "CNNVD", "id": "CNNVD-201901-347" } ], "trust": 0.8 }, "title": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/title#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "OpenSSH Access control vulnerability", "sources": [ { "db": "JVNDB", "id": "JVNDB-2018-013957" } ], "trust": 0.8 }, "type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "access control error", "sources": [ { "db": "CNNVD", "id": "CNNVD-201901-347" } ], "trust": 0.6 } }
var-202202-0906
Vulnerability from variot
valid.c in libxml2 before 2.9.13 has a use-after-free of ID and IDREF attributes. -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
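The affected range stated above ("before 2.9.13") can be checked mechanically against an installed libxml2 version string. A minimal sketch; the helper names and the sample version strings are illustrative assumptions, not part of the advisory, and only plain dotted versions (no distro suffixes) are handled:

```python
# Sketch: flag libxml2 versions before 2.9.13 as affected by
# CVE-2022-23308 (use-after-free of ID and IDREF attributes in valid.c).
# The fixed-version threshold comes from the advisory text above.

def parse_version(version: str) -> tuple:
    """Turn a plain dotted version string like '2.9.12' into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

FIXED_IN = parse_version("2.9.13")

def is_affected(libxml2_version: str) -> bool:
    """True if the given libxml2 version predates the 2.9.13 fix."""
    return parse_version(libxml2_version) < FIXED_IN

# Example checks (version strings chosen for illustration):
print(is_affected("2.9.12"))  # True  - before the fix
print(is_affected("2.9.13"))  # False - first fixed release
```

Tuple comparison makes "2.10.0" correctly compare as newer than "2.9.13", which naive string comparison would get wrong.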
====================================================================
Red Hat Security Advisory
Synopsis: Moderate: Gatekeeper Operator v0.2 security updates and bug fixes Advisory ID: RHSA-2022:1081-01 Product: Red Hat ACM Advisory URL: https://access.redhat.com/errata/RHSA-2022:1081 Issue date: 2022-03-28 CVE Names: CVE-2019-5827 CVE-2019-13750 CVE-2019-13751 CVE-2019-17594 CVE-2019-17595 CVE-2019-18218 CVE-2019-19603 CVE-2019-20838 CVE-2020-12762 CVE-2020-13435 CVE-2020-14155 CVE-2020-16135 CVE-2020-24370 CVE-2021-3200 CVE-2021-3445 CVE-2021-3521 CVE-2021-3580 CVE-2021-3712 CVE-2021-3800 CVE-2021-3999 CVE-2021-20231 CVE-2021-20232 CVE-2021-22876 CVE-2021-22898 CVE-2021-22925 CVE-2021-23177 CVE-2021-28153 CVE-2021-31566 CVE-2021-33560 CVE-2021-36084 CVE-2021-36085 CVE-2021-36086 CVE-2021-36087 CVE-2021-42574 CVE-2021-43565 CVE-2022-23218 CVE-2022-23219 CVE-2022-23308 CVE-2022-23806 CVE-2022-24407 ==================================================================== 1. Summary:
Gatekeeper Operator v0.2
Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Description:
Gatekeeper Operator v0.2
Gatekeeper is an open source project that applies the OPA Constraint Framework to enforce policies on your Kubernetes clusters.
This advisory contains the container images for Gatekeeper that include security updates, and container upgrades.
Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
Note: Gatekeeper support from the Red Hat support team is limited to cases where it is integrated and used with Red Hat Advanced Cluster Management for Kubernetes. For support options for any other use, see the Gatekeeper open source project website at: https://open-policy-agent.github.io/gatekeeper/website/docs/howto/.
Security updates:
- golang.org/x/crypto: empty plaintext packet causes panic (CVE-2021-43565)

- golang: crypto/elliptic IsOnCurve returns true for invalid field elements (CVE-2022-23806)

- Solution:
Before applying this update, make sure all previously released errata relevant to your system have been applied.
The requirements to apply the upgraded images are different whether or not you used the operator. Complete the following steps, depending on your installation:
- Upgrade gatekeeper operator:
The gatekeeper operator that is installed by the gatekeeper operator policy has installPlanApproval set to Automatic. This setting means the operator will be upgraded automatically when there is a new version of the operator. No further action is required for upgrade. If you changed the setting for installPlanApproval to manual, then you must view each cluster to manually approve the upgrade to the operator.
- Upgrade gatekeeper without the operator: The gatekeeper version is specified as part of the Gatekeeper CR in the gatekeeper operator policy. To upgrade the gatekeeper version: a) Determine the latest version of gatekeeper by visiting: https://catalog.redhat.com/software/containers/rhacm2/gatekeeper-rhel8/5fadb4a18d9a79d2f438a5d9. b) Click the tag dropdown, and find the latest static tag. An example tag is 'v3.3.0-1'. c) Edit the gatekeeper operator policy and update the image tag to use the latest static tag. For example, you might change this line to image: 'registry.redhat.io/rhacm2/gatekeeper-rhel8:v3.3.0-1'.
Refer to https://open-policy-agent.github.io/gatekeeper/website/docs/howto/ for additional information.
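Step b) above amounts to picking the newest static tag from the catalog's tag dropdown. A small sketch under the assumption that static tags follow the 'vX.Y.Z-N' pattern shown in the advisory's example; the tag list here is hypothetical, the real tags come from the catalog page referenced above:

```python
import re

# Hypothetical tag list; real tags come from the Red Hat catalog page.
tags = ["v3.2.1-2", "v3.3.0-1", "v3.1.4-7", "latest"]

def tag_key(tag: str):
    """Sort key for static tags of the form vX.Y.Z-N; None for other tags."""
    m = re.fullmatch(r"v(\d+)\.(\d+)\.(\d+)-(\d+)", tag)
    if not m:
        return None
    return tuple(int(g) for g in m.groups())

# Keep only static tags, then take the numerically highest one.
static_tags = [t for t in tags if tag_key(t) is not None]
latest = max(static_tags, key=tag_key)
print(latest)  # v3.3.0-1

# Step c): the image line in the gatekeeper operator policy would then be:
image = f"registry.redhat.io/rhacm2/gatekeeper-rhel8:{latest}"
```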
- Bugs fixed (https://bugzilla.redhat.com/):
2030787 - CVE-2021-43565 golang.org/x/crypto: empty plaintext packet causes panic 2053429 - CVE-2022-23806 golang: crypto/elliptic IsOnCurve returns true for invalid field elements
- References:
https://access.redhat.com/security/cve/CVE-2019-5827 https://access.redhat.com/security/cve/CVE-2019-13750 https://access.redhat.com/security/cve/CVE-2019-13751 https://access.redhat.com/security/cve/CVE-2019-17594 https://access.redhat.com/security/cve/CVE-2019-17595 https://access.redhat.com/security/cve/CVE-2019-18218 https://access.redhat.com/security/cve/CVE-2019-19603 https://access.redhat.com/security/cve/CVE-2019-20838 https://access.redhat.com/security/cve/CVE-2020-12762 https://access.redhat.com/security/cve/CVE-2020-13435 https://access.redhat.com/security/cve/CVE-2020-14155 https://access.redhat.com/security/cve/CVE-2020-16135 https://access.redhat.com/security/cve/CVE-2020-24370 https://access.redhat.com/security/cve/CVE-2021-3200 https://access.redhat.com/security/cve/CVE-2021-3445 https://access.redhat.com/security/cve/CVE-2021-3521 https://access.redhat.com/security/cve/CVE-2021-3580 https://access.redhat.com/security/cve/CVE-2021-3712 https://access.redhat.com/security/cve/CVE-2021-3800 https://access.redhat.com/security/cve/CVE-2021-3999 https://access.redhat.com/security/cve/CVE-2021-20231 https://access.redhat.com/security/cve/CVE-2021-20232 https://access.redhat.com/security/cve/CVE-2021-22876 https://access.redhat.com/security/cve/CVE-2021-22898 https://access.redhat.com/security/cve/CVE-2021-22925 https://access.redhat.com/security/cve/CVE-2021-23177 https://access.redhat.com/security/cve/CVE-2021-28153 https://access.redhat.com/security/cve/CVE-2021-31566 https://access.redhat.com/security/cve/CVE-2021-33560 https://access.redhat.com/security/cve/CVE-2021-36084 https://access.redhat.com/security/cve/CVE-2021-36085 https://access.redhat.com/security/cve/CVE-2021-36086 https://access.redhat.com/security/cve/CVE-2021-36087 https://access.redhat.com/security/cve/CVE-2021-42574 https://access.redhat.com/security/cve/CVE-2021-43565 https://access.redhat.com/security/cve/CVE-2022-23218 https://access.redhat.com/security/cve/CVE-2022-23219 
https://access.redhat.com/security/cve/CVE-2022-23308 https://access.redhat.com/security/cve/CVE-2022-23806 https://access.redhat.com/security/cve/CVE-2022-24407 https://access.redhat.com/security/updates/classification/#moderate https://open-policy-agent.github.io/gatekeeper/website/docs/howto/
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2022 Red Hat, Inc. Summary:
An update for libxml2 is now available for Red Hat Enterprise Linux 8. Relevant releases/architectures:
Red Hat Enterprise Linux AppStream (v. 8) - aarch64, ppc64le, s390x, x86_64
- Description:
The libxml2 library is a development toolbox providing the implementation of various XML standards.
Security Fix(es):
- libxml2: Use-after-free of ID and IDREF attributes (CVE-2022-23308)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.

- Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/11258
The desktop must be restarted (log out, then log back in) for this update to take effect. Package List:
Red Hat Enterprise Linux AppStream (v. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
- Clusters and applications are all visible and managed from a single console—with security policy built in. See the following Release Notes documentation, which will be updated shortly for this release, for additional details about this release:
https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/
Security updates:
- nanoid: Information disclosure via valueOf() function (CVE-2021-23566)

- nodejs-shelljs: improper privilege management (CVE-2022-0144)

- follow-redirects: Exposure of Private Personal Information to an Unauthorized Actor (CVE-2022-0155)

- node-fetch: exposure of sensitive information to an unauthorized actor (CVE-2022-0235)

- follow-redirects: Exposure of Sensitive Information via Authorization Header leak (CVE-2022-0536)
Bug fix:
- RHACM 2.3.8 images (Bugzilla #2062316)

- Bugs fixed (https://bugzilla.redhat.com/):
2043535 - CVE-2022-0144 nodejs-shelljs: improper privilege management 2044556 - CVE-2022-0155 follow-redirects: Exposure of Private Personal Information to an Unauthorized Actor 2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor 2050853 - CVE-2021-23566 nanoid: Information disclosure via valueOf() function 2053259 - CVE-2022-0536 follow-redirects: Exposure of Sensitive Information via Authorization Header leak 2062316 - RHACM 2.3.8 images
-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
APPLE-SA-2022-05-16-4 Security Update 2022-004 Catalina
Security Update 2022-004 Catalina addresses the following issues. Information about the security content is also available at https://support.apple.com/HT213255.
apache Available for: macOS Catalina Impact: Multiple issues in apache Description: Multiple issues were addressed by updating apache to version 2.4.53. CVE-2021-44224 CVE-2021-44790 CVE-2022-22719 CVE-2022-22720 CVE-2022-22721
AppKit Available for: macOS Catalina Impact: A malicious application may be able to gain root privileges Description: A logic issue was addressed with improved validation. CVE-2022-22665: Lockheed Martin Red Team
AppleGraphicsControl Available for: macOS Catalina Impact: Processing a maliciously crafted image may lead to arbitrary code execution Description: A memory corruption issue was addressed with improved input validation. CVE-2022-26751: Michael DePlante (@izobashi) of Trend Micro Zero Day Initiative
AppleScript Available for: macOS Catalina Impact: Processing a maliciously crafted AppleScript binary may result in unexpected application termination or disclosure of process memory Description: An out-of-bounds read was addressed with improved input validation. CVE-2022-26697: Qi Sun and Robert Ai of Trend Micro
AppleScript Available for: macOS Catalina Impact: Processing a maliciously crafted AppleScript binary may result in unexpected application termination or disclosure of process memory Description: An out-of-bounds read was addressed with improved bounds checking. CVE-2022-26698: Qi Sun of Trend Micro
CoreTypes Available for: macOS Catalina Impact: A malicious application may bypass Gatekeeper checks Description: This issue was addressed with improved checks to prevent unauthorized actions. CVE-2022-22663: Arsenii Kostromin (0x3c3e)
CVMS Available for: macOS Catalina Impact: A malicious application may be able to gain root privileges Description: A memory initialization issue was addressed. CVE-2022-26721: Yonghwi Jin (@jinmo123) of Theori CVE-2022-26722: Yonghwi Jin (@jinmo123) of Theori
DriverKit Available for: macOS Catalina Impact: A malicious application may be able to execute arbitrary code with system privileges Description: An out-of-bounds access issue was addressed with improved bounds checking. CVE-2022-26763: Linus Henze of Pinauten GmbH (pinauten.de)
Graphics Drivers Available for: macOS Catalina Impact: A local user may be able to read kernel memory Description: An out-of-bounds read issue existed that led to the disclosure of kernel memory. This was addressed with improved input validation. CVE-2022-22674: an anonymous researcher
Intel Graphics Driver Available for: macOS Catalina Impact: A malicious application may be able to execute arbitrary code with kernel privileges Description: An out-of-bounds write issue was addressed with improved bounds checking. CVE-2022-26720: Liu Long of Ant Security Light-Year Lab
Intel Graphics Driver Available for: macOS Catalina Impact: A malicious application may be able to execute arbitrary code with kernel privileges Description: An out-of-bounds read issue was addressed with improved input validation. CVE-2022-26770: Liu Long of Ant Security Light-Year Lab
Intel Graphics Driver Available for: macOS Catalina Impact: An application may be able to execute arbitrary code with kernel privileges Description: An out-of-bounds write issue was addressed with improved input validation. CVE-2022-26756: Jack Dates of RET2 Systems, Inc
Intel Graphics Driver Available for: macOS Catalina Impact: A malicious application may be able to execute arbitrary code with kernel privileges Description: A memory corruption issue was addressed with improved input validation. CVE-2022-26769: Antonio Zekic (@antoniozekic)
Intel Graphics Driver Available for: macOS Catalina Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: An out-of-bounds write issue was addressed with improved input validation. CVE-2022-26748: Jeonghoon Shin of Theori working with Trend Micro Zero Day Initiative
Kernel Available for: macOS Catalina Impact: An application may be able to execute arbitrary code with kernel privileges Description: A memory corruption issue was addressed with improved validation. CVE-2022-26714: Peter Nguyễn Vũ Hoàng (@peternguyen14) of STAR Labs (@starlabs_sg)
Kernel Available for: macOS Catalina Impact: An application may be able to execute arbitrary code with kernel privileges Description: A use after free issue was addressed with improved memory management. CVE-2022-26757: Ned Williamson of Google Project Zero
libresolv Available for: macOS Catalina Impact: An attacker may be able to cause unexpected application termination or arbitrary code execution Description: An integer overflow was addressed with improved input validation. CVE-2022-26775: Max Shavrick (@_mxms) of the Google Security Team
LibreSSL Available for: macOS Catalina Impact: Processing a maliciously crafted certificate may lead to a denial of service Description: A denial of service issue was addressed with improved input validation. CVE-2022-0778
libxml2 Available for: macOS Catalina Impact: A remote attacker may be able to cause unexpected application termination or arbitrary code execution Description: A use after free issue was addressed with improved memory management. CVE-2022-23308
OpenSSL Available for: macOS Catalina Impact: Processing a maliciously crafted certificate may lead to a denial of service Description: This issue was addressed with improved checks. CVE-2022-0778
PackageKit Available for: macOS Catalina Impact: A malicious application may be able to modify protected parts of the file system Description: This issue was addressed with improved entitlements. CVE-2022-26727: Mickey Jin (@patch1t)
Printing Available for: macOS Catalina Impact: A malicious application may be able to bypass Privacy preferences Description: This issue was addressed by removing the vulnerable code. CVE-2022-26746: @gorelics
Security Available for: macOS Catalina Impact: A malicious app may be able to bypass signature validation Description: A certificate parsing issue was addressed with improved checks. CVE-2022-26766: Linus Henze of Pinauten GmbH (pinauten.de)
SMB Available for: macOS Catalina Impact: An application may be able to gain elevated privileges Description: An out-of-bounds write issue was addressed with improved bounds checking. CVE-2022-26715: Peter Nguyễn Vũ Hoàng of STAR Labs
SoftwareUpdate Available for: macOS Catalina Impact: A malicious application may be able to access restricted files Description: This issue was addressed with improved entitlements. CVE-2022-26728: Mickey Jin (@patch1t)
TCC Available for: macOS Catalina Impact: An app may be able to capture a user's screen Description: This issue was addressed with improved checks. CVE-2022-26726: an anonymous researcher
Tcl Available for: macOS Catalina Impact: A malicious application may be able to break out of its sandbox Description: This issue was addressed with improved environment sanitization. CVE-2022-26755: Arsenii Kostromin (0x3c3e)
WebKit Available for: macOS Catalina Impact: Processing a maliciously crafted mail message may lead to running arbitrary javascript Description: A validation issue was addressed with improved input sanitization. CVE-2022-22589: Heige of KnownSec 404 Team (knownsec.com) and Bo Qu of Palo Alto Networks (paloaltonetworks.com)
Wi-Fi Available for: macOS Catalina Impact: An application may be able to execute arbitrary code with kernel privileges Description: A memory corruption issue was addressed with improved memory handling. CVE-2022-26761: Wang Yu of Cyberserval
zip Available for: macOS Catalina Impact: Processing a maliciously crafted file may lead to a denial of service Description: A denial of service issue was addressed with improved state handling. CVE-2022-0530
zlib Available for: macOS Catalina Impact: An attacker may be able to cause unexpected application termination or arbitrary code execution Description: A memory corruption issue was addressed with improved input validation. CVE-2018-25032: Tavis Ormandy
zsh Available for: macOS Catalina Impact: A remote attacker may be able to cause arbitrary code execution Description: This issue was addressed by updating to zsh version 5.8.1. CVE-2021-45444
Additional recognition
PackageKit We would like to acknowledge Mickey Jin (@patch1t) of Trend Micro for their assistance.
Security Update 2022-004 Catalina may be obtained from the Mac App Store or Apple's Software Downloads web site: https://support.apple.com/downloads/ All information is also posted on the Apple Security Updates web site: https://support.apple.com/en-us/HT201222.
This message is signed with Apple's Product Security PGP key, and details are available at: https://www.apple.com/support/security/pgp/ -----BEGIN PGP SIGNATURE-----
iQIzBAEBCAAdFiEEePiLW1MrMjw19XzoeC9qKD1prhgFAmKC1TYACgkQeC9qKD1p rhjgGRAAggg84uE4zYtBHmo5Qz45wlY/+FT7bSyCyo2Ta0m3JQmm26UiS9ZzXlD0 58jCo/ti+gH/gqwU05SnaG88pSMT6VKaDDnmw8WcrPtbl6NN6JX8vaZLFLoGO0dB rjwap7ulcLe7/HM8kCz3qqjKj4fusxckCjmm5yBMtuMklq7i51vzkT/+ws00ALcH 4S821CqIJlS2RIho/M/pih5A/H1Onw/nzKc7VOWjWMmmwoV+oiL4gMPE9kyIAJFQ NcZO7s70Qp9N5Z0VGIkD5HkAntEqYGNKJuCQUrHS0fHFUxVrQcuBbbSiv7vwnOT0 NVcFKBQWJtfcqmtcDF8mVi2ocqUh7So6AXhZGZtL3CrVfNMgTcjq6y5XwzXMgwlm ezMX73MnV91QuGp6KVZEmoFNlJ2dhKcJ0fYAhhW9DJqvJ1u5xIkQrUkK/ERLnWpE 9DIapT8uUbb9Zgez/tS9szv5jHhKtOoPbprju7d7LHw7XMFCVKbUvx745dFZx0AG PLsJZQNsQZJIK8QdcLA50KrlyjR2ts4nUsKj07I6LR4wUmcaj+goXYq4Nh4WLnoF x1AXD5ztdYlhqMcTAnuAbUYfuki0uzSy0p7wBiTknFwKMZNIaiToo64BES+7Iu1i vrB9SdtTSQCMXgPZX1Al1e2F/K2ubovrGU9geAEwLMq3AKudI4g= =JBHs -----END PGP SIGNATURE-----
Summary:
The Migration Toolkit for Containers (MTC) 1.7.1 is now available.

Description:
The Migration Toolkit for Containers (MTC) enables you to migrate Kubernetes resources, persistent volume data, and internal container images between OpenShift Container Platform clusters, using the MTC web console or the Kubernetes API.

Solution:
For details on how to install and use MTC, refer to:
https://docs.openshift.com/container-platform/latest/migration_toolkit_for_containers/installing-mtc.html
Bugs fixed (https://bugzilla.redhat.com/):
2020725 - CVE-2021-41771 golang: debug/macho: invalid dynamic symbol table command can cause panic
2020736 - CVE-2021-41772 golang: archive/zip: Reader.Open panics on empty string
2024938 - CVE-2021-41190 opencontainers: OCI manifest and index parsing confusion
2030801 - CVE-2021-44716 golang: net/http: limit growth of header canonicalization cache
2030806 - CVE-2021-44717 golang: syscall: don't close fd 0 on ForkExec error
2040378 - Don't allow Storage class conversion migration if source cluster has only one storage class defined [backend]
2057516 - [MTC UI] UI should not allow PVC mapping for Full migration
2060244 - [MTC] DIM registry route need to be exposed to create inter-cluster state migration plans
2060717 - [MTC] Registry pod goes in CrashLoopBackOff several times when MCG Nooba is used as the Replication Repository
2061347 - [MTC] Log reader pod is missing velero and restic pod logs.
2061653 - [MTC UI] Migration Resources section showing pods from other namespaces
2062682 - [MTC] Destination storage class non-availability warning visible in Intra-cluster source to source state-migration migplan.
2065837 - controller_config.yml.j2 merge type should be set to merge (currently using the default strategic)
2071000 - Storage Conversion: UI doesn't have the ability to skip PVC
2072036 - Migration plan for storage conversion cannot be created if there's no replication repository
2072186 - Wrong migration type description
2072684 - Storage Conversion: PersistentVolumeClaimTemplates in StatefulSets are not updated automatically after migration
2073496 - Errors in rsync pod creation are not printed in the controller logs
2079814 - [MTC UI] Intra-cluster state migration plan showing a warning on PersistentVolumes page
==========================================================================
Ubuntu Security Notice USN-5422-1
May 16, 2022
==========================================================================
libxml2 vulnerabilities
A security issue affects these releases of Ubuntu and its derivatives:
- Ubuntu 22.04 LTS
- Ubuntu 21.10
- Ubuntu 20.04 LTS
- Ubuntu 18.04 LTS
- Ubuntu 16.04 ESM
- Ubuntu 14.04 ESM
Summary:
Several security issues were fixed in libxml2.

It was discovered that libxml2 incorrectly handled ID and IDREF attributes, resulting in a use-after-free. An attacker could possibly use this issue to cause a crash. This issue only affected Ubuntu 14.04 ESM, and Ubuntu 16.04 ESM. (CVE-2022-23308)
It was discovered that libxml2 incorrectly handled certain XML files. An attacker could possibly use this issue to cause a crash or execute arbitrary code. (CVE-2022-29824)
Update instructions:
The problem can be corrected by updating your system to the following package versions:
Ubuntu 22.04 LTS: libxml2 2.9.13+dfsg-1ubuntu0.1 libxml2-utils 2.9.13+dfsg-1ubuntu0.1
Ubuntu 21.10: libxml2 2.9.12+dfsg-4ubuntu0.2 libxml2-utils 2.9.12+dfsg-4ubuntu0.2
Ubuntu 20.04 LTS: libxml2 2.9.10+dfsg-5ubuntu0.20.04.3 libxml2-utils 2.9.10+dfsg-5ubuntu0.20.04.3
Ubuntu 18.04 LTS: libxml2 2.9.4+dfsg1-6.1ubuntu1.6 libxml2-utils 2.9.4+dfsg1-6.1ubuntu1.6
Ubuntu 16.04 ESM: libxml2 2.9.3+dfsg1-1ubuntu0.7+esm2 libxml2-utils 2.9.3+dfsg1-1ubuntu0.7+esm2
Ubuntu 14.04 ESM: libxml2 2.9.1+dfsg1-3ubuntu4.13+esm3 libxml2-utils 2.9.1+dfsg1-3ubuntu4.13+esm3
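The fixed package versions above can be compared against an installed version programmatically. The sketch below is illustrative only: `FIXED` hard-codes the versions from this notice, and `is_patched` is a hypothetical helper that does a simplified numeric/text comparison, not a full Debian version comparison (which additionally handles `~`, epochs, and letter ordering — use `dpkg --compare-versions` for authoritative results).

```python
import re

# Fixed libxml2 versions from USN-5422-1, keyed by Ubuntu release.
FIXED = {
    "22.04": "2.9.13+dfsg-1ubuntu0.1",
    "21.10": "2.9.12+dfsg-4ubuntu0.2",
    "20.04": "2.9.10+dfsg-5ubuntu0.20.04.3",
    "18.04": "2.9.4+dfsg1-6.1ubuntu1.6",
}

def _key(version: str):
    # Split into alternating numeric and non-numeric runs so that
    # "…20.04.3" compares numerically against "…20.04.1" while textual
    # suffixes like "+dfsg" compare as plain strings.
    return [int(p) if p.isdigit() else p for p in re.findall(r"\d+|\D+", version)]

def is_patched(release: str, installed: str) -> bool:
    # True when the installed version is at or above the fixed version.
    return _key(installed) >= _key(FIXED[release])

print(is_patched("20.04", "2.9.10+dfsg-5ubuntu0.20.04.3"))  # exact fixed version
print(is_patched("20.04", "2.9.10+dfsg-5ubuntu0.20.04.1"))  # older package revision
```

This only works reliably when both version strings share the same shape (as same-release package revisions do); cross-release comparisons should go through `dpkg`.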
In general, a standard system update will make all the necessary changes.

Apple is aware of a report that this issue may have been actively exploited. CVE-2022-26724: Jorge A. CVE-2022-26765: Linus Henze of Pinauten GmbH (pinauten.de)
LaunchServices Available for: Apple TV 4K, Apple TV 4K (2nd generation), and Apple TV HD Impact: A sandboxed process may be able to circumvent sandbox restrictions Description: An access issue was addressed with additional sandbox restrictions on third-party applications.
Apple TV will periodically check for software updates
{ "@context": { "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#", "affected_products": { "@id": "https://www.variotdbs.pl/ref/affected_products" }, "configurations": { "@id": "https://www.variotdbs.pl/ref/configurations" }, "credits": { "@id": "https://www.variotdbs.pl/ref/credits" }, "cvss": { "@id": "https://www.variotdbs.pl/ref/cvss/" }, "description": { "@id": "https://www.variotdbs.pl/ref/description/" }, "exploit_availability": { "@id": "https://www.variotdbs.pl/ref/exploit_availability/" }, "external_ids": { "@id": "https://www.variotdbs.pl/ref/external_ids/" }, "iot": { "@id": "https://www.variotdbs.pl/ref/iot/" }, "iot_taxonomy": { "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/" }, "patch": { "@id": "https://www.variotdbs.pl/ref/patch/" }, "problemtype_data": { "@id": "https://www.variotdbs.pl/ref/problemtype_data/" }, "references": { "@id": "https://www.variotdbs.pl/ref/references/" }, "sources": { "@id": "https://www.variotdbs.pl/ref/sources/" }, "sources_release_date": { "@id": "https://www.variotdbs.pl/ref/sources_release_date/" }, "sources_update_date": { "@id": "https://www.variotdbs.pl/ref/sources_update_date/" }, "threat_type": { "@id": "https://www.variotdbs.pl/ref/threat_type/" }, "title": { "@id": "https://www.variotdbs.pl/ref/title/" }, "type": { "@id": "https://www.variotdbs.pl/ref/type/" } }, "@id": "https://www.variotdbs.pl/vuln/VAR-202202-0906", "affected_products": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/affected_products#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "model": "tvos", "scope": "lt", "trust": 1.0, "vendor": "apple", "version": "15.5" }, { "model": "communications cloud native core network function cloud native environment", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "22.1.0" }, { "model": "h700e",
"scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "h500e", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "h300e", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "communications cloud native core network repository function", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "22.2.0" }, { "model": "manageability software development kit", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "iphone os", "scope": "lt", "trust": 1.0, "vendor": "apple", "version": "15.5" }, { "model": "mac os x", "scope": "gte", "trust": 1.0, "vendor": "apple", "version": "10.15.0" }, { "model": "h300s", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "macos", "scope": "gte", "trust": 1.0, "vendor": "apple", "version": "11.6.0" }, { "model": "watchos", "scope": "lt", "trust": 1.0, "vendor": "apple", "version": "8.6" }, { "model": "ontap select deploy administration utility", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "communications cloud native core binding support function", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "22.2.0" }, { "model": "solidfire\\, enterprise sds \\\u0026 hci storage node", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "ipados", "scope": "lt", "trust": 1.0, "vendor": "apple", "version": "15.5" }, { "model": "h500s", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "snapmanager", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "h410c", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "macos", "scope": "lt", "trust": 1.0, "vendor": "apple", "version": "11.6.6" }, { "model": "macos", "scope": "gte", "trust": 1.0, "vendor": "apple", "version": "12.0" }, { "model": "clustered data ontap", "scope": "eq", "trust": 1.0, "vendor": 
"netapp", "version": null }, { "model": "h700s", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "mac os x", "scope": "lt", "trust": 1.0, "vendor": "apple", "version": "10.15.7" }, { "model": "linux", "scope": "eq", "trust": 1.0, "vendor": "debian", "version": "9.0" }, { "model": "fedora", "scope": "eq", "trust": 1.0, "vendor": "fedoraproject", "version": "34" }, { "model": "bootstrap os", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "snapdrive", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "communications cloud native core unified data repository", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "22.2.0" }, { "model": "zfs storage appliance kit", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "8.8" }, { "model": "active iq unified manager", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "smi-s provider", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "libxml2", "scope": "lt", "trust": 1.0, "vendor": "xmlsoft", "version": "2.9.13" }, { "model": "mac os x", "scope": "eq", "trust": 1.0, "vendor": "apple", "version": "10.15.7" }, { "model": "communications cloud native core network repository function", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "22.1.2" }, { "model": "solidfire \\\u0026 hci management node", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "mysql workbench", "scope": "lte", "trust": 1.0, "vendor": "oracle", "version": "8.0.29" }, { "model": "h410s", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "macos", "scope": "lt", "trust": 1.0, "vendor": "apple", "version": "12.4" }, { "model": "communications cloud native core network slice selection function", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "22.1.1" }, { "model": "clustered data ontap antivirus connector", "scope": 
"eq", "trust": 1.0, "vendor": "netapp", "version": null } ], "sources": [ { "db": "NVD", "id": "CVE-2022-23308" } ] }, "configurations": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/configurations#", "children": { "@container": "@list" }, "cpe_match": { "@container": "@list" }, "data": { "@container": "@list" }, "nodes": { "@container": "@list" } }, "data": [ { "CVE_data_version": "4.0", "nodes": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:xmlsoft:libxml2:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "2.9.13", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:34:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:debian:debian_linux:9.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.15.7:security_update_2020-001:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.15.7:security_update_2021-001:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.15.7:security_update_2021-002:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.15.7:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.15.7:security_update_2021-003:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.15.7:security_update_2021-004:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.15.7:security_update_2021-005:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.15.7:security_update_2021-006:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": 
"cpe:2.3:o:apple:mac_os_x:10.15.7:security_update_2021-008:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.15.7:security_update_2021-007:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.15.7:security_update_2022-001:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:apple:mac_os_x:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "10.15.7", "versionStartIncluding": "10.15.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.15.7:security_update_2022-003:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:apple:iphone_os:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "15.5", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:apple:watchos:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "8.6", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:apple:tvos:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "15.5", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:apple:ipados:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "15.5", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:apple:macos:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "12.4", "versionStartIncluding": "12.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:apple:macos:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "11.6.6", "versionStartIncluding": "11.6.0", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:netapp:snapdrive:-:*:*:*:*:unix:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:snapmanager:-:*:*:*:*:oracle:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:ontap_select_deploy_administration_utility:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:clustered_data_ontap:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": 
"cpe:2.3:a:netapp:smi-s_provider:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:clustered_data_ontap_antivirus_connector:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:solidfire_\\\u0026_hci_management_node:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:active_iq_unified_manager:-:*:*:*:*:vmware_vsphere:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:manageability_software_development_kit:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:solidfire\\,_enterprise_sds_\\\u0026_hci_storage_node:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:bootstrap_os:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:hci_compute_node:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h300s_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:h300s:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h500s_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:h500s:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h700s_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], 
"vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:h700s:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h300e_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:h300e:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h500e_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:h500e:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h700e_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:h700e:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h410s_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:h410s:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h410c_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:h410c:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], 
"operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:oracle:zfs_storage_appliance_kit:8.8:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:communications_cloud_native_core_network_function_cloud_native_environment:22.1.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:communications_cloud_native_core_network_repository_function:22.2.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:communications_cloud_native_core_network_repository_function:22.1.2:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:communications_cloud_native_core_unified_data_repository:22.2.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:communications_cloud_native_core_binding_support_function:22.2.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:communications_cloud_native_core_network_slice_selection_function:22.1.1:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:mysql_workbench:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "8.0.29", "vulnerable": true } ], "operator": "OR" } ] } ], "sources": [ { "db": "NVD", "id": "CVE-2022-23308" } ] }, "credits": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/credits#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Red Hat", "sources": [ { "db": "PACKETSTORM", "id": "166489" }, { "db": "PACKETSTORM", "id": "166327" }, { "db": "PACKETSTORM", "id": "166516" }, { "db": "PACKETSTORM", "id": "166976" } ], "trust": 0.4 }, "cve": "CVE-2022-23308", "cvss": { "@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2" }, "cvssV3": { 
"@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, "severity": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": "https://www.variotdbs.pl/ref/cvss/severity" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "cvssV2": [ { "acInsufInfo": false, "accessComplexity": "MEDIUM", "accessVector": "NETWORK", "authentication": "NONE", "author": "NVD", "availabilityImpact": "PARTIAL", "baseScore": 4.3, "confidentialityImpact": "NONE", "exploitabilityScore": 8.6, "impactScore": 2.9, "integrityImpact": "NONE", "obtainAllPrivilege": false, "obtainOtherPrivilege": false, "obtainUserPrivilege": false, "severity": "MEDIUM", "trust": 1.0, "userInteractionRequired": false, "vectorString": "AV:N/AC:M/Au:N/C:N/I:N/A:P", "version": "2.0" }, { "accessComplexity": "MEDIUM", "accessVector": "NETWORK", "authentication": "NONE", "author": "VULHUB", "availabilityImpact": "PARTIAL", "baseScore": 4.3, "confidentialityImpact": "NONE", "exploitabilityScore": 8.6, "id": "VHN-412332", "impactScore": 2.9, "integrityImpact": "NONE", "severity": "MEDIUM", "trust": 0.1, "vectorString": "AV:N/AC:M/AU:N/C:N/I:N/A:P", "version": "2.0" } ], "cvssV3": [ { "attackComplexity": "LOW", "attackVector": "NETWORK", "author": "NVD", "availabilityImpact": "HIGH", "baseScore": 7.5, "baseSeverity": "HIGH", "confidentialityImpact": "NONE", "exploitabilityScore": 3.9, "impactScore": 3.6, "integrityImpact": "NONE", "privilegesRequired": "NONE", "scope": "UNCHANGED", "trust": 1.0, "userInteraction": "NONE", "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H", "version": "3.1" } ], "severity": [ { "author": "NVD", "id": "CVE-2022-23308", "trust": 1.0, "value": "HIGH" }, { "author": "CNNVD", "id": "CNNVD-202202-1722", "trust": 0.6, "value": "HIGH" }, 
{ "author": "VULHUB", "id": "VHN-412332", "trust": 0.1, "value": "MEDIUM" } ] } ], "sources": [ { "db": "VULHUB", "id": "VHN-412332" }, { "db": "CNNVD", "id": "CNNVD-202202-1722" }, { "db": "NVD", "id": "CVE-2022-23308" } ] }, "description": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "valid.c in libxml2 before 2.9.13 has a use-after-free of ID and IDREF attributes. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n==================================================================== \nRed Hat Security Advisory\n\nSynopsis: Moderate: Gatekeeper Operator v0.2 security updates and bug fixes\nAdvisory ID: RHSA-2022:1081-01\nProduct: Red Hat ACM\nAdvisory URL: https://access.redhat.com/errata/RHSA-2022:1081\nIssue date: 2022-03-28\nCVE Names: CVE-2019-5827 CVE-2019-13750 CVE-2019-13751\n CVE-2019-17594 CVE-2019-17595 CVE-2019-18218\n CVE-2019-19603 CVE-2019-20838 CVE-2020-12762\n CVE-2020-13435 CVE-2020-14155 CVE-2020-16135\n CVE-2020-24370 CVE-2021-3200 CVE-2021-3445\n CVE-2021-3521 CVE-2021-3580 CVE-2021-3712\n CVE-2021-3800 CVE-2021-3999 CVE-2021-20231\n CVE-2021-20232 CVE-2021-22876 CVE-2021-22898\n CVE-2021-22925 CVE-2021-23177 CVE-2021-28153\n CVE-2021-31566 CVE-2021-33560 CVE-2021-36084\n CVE-2021-36085 CVE-2021-36086 CVE-2021-36087\n CVE-2021-42574 CVE-2021-43565 CVE-2022-23218\n CVE-2022-23219 CVE-2022-23308 CVE-2022-23806\n CVE-2022-24407\n====================================================================\n1. Summary:\n\nGatekeeper Operator v0.2\n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score,\nwhich gives a detailed severity rating, is available for each vulnerability\nfrom the CVE link(s) in the References section. \n\n2. 
Description:\n\nGatekeeper Operator v0.2\n\nGatekeeper is an open source project that applies the OPA Constraint\nFramework to enforce policies on your Kubernetes clusters. \n\nThis advisory contains the container images for Gatekeeper that include\nsecurity updates, and container upgrades. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score,\nwhich gives a detailed severity rating, is available for each vulnerability\nfrom the CVE link(s) in the References section. \n\nNote: Gatekeeper support from the Red Hat support team is limited to cases\nwhere it is integrated and used with Red Hat Advanced Cluster Management\nfor Kubernetes. For support options for any other use, see the Gatekeeper\nopen source project website at:\nhttps://open-policy-agent.github.io/gatekeeper/website/docs/howto/. \n\nSecurity updates:\n\n* golang.org/x/crypto: empty plaintext packet causes panic (CVE-2021-43565)\n\n* golang: crypto/elliptic IsOnCurve returns true for invalid field elements\n(CVE-2022-23806)\n\n3. Solution:\n\nBefore applying this update, make sure all previously released errata\nrelevant to your system have been applied. \n\nThe requirements to apply the upgraded images are different depending on\nwhether or not you used the operator. Complete the following steps, depending on your\ninstallation:\n\n- - Upgrade gatekeeper operator:\nThe gatekeeper operator that is installed by the gatekeeper operator policy\nhas\n`installPlanApproval` set to `Automatic`. This setting means the operator\nwill\nbe upgraded automatically when there is a new version of the operator. No\nfurther action is required for upgrade. If you changed the setting for\n`installPlanApproval` to `manual`, then you must view each cluster to\nmanually\napprove the upgrade to the operator. 
\n\n- - Upgrade gatekeeper without the operator:\nThe gatekeeper version is specified as part of the Gatekeeper CR in the\ngatekeeper operator policy. To upgrade the gatekeeper version:\na) Determine the latest version of gatekeeper by visiting:\nhttps://catalog.redhat.com/software/containers/rhacm2/gatekeeper-rhel8/5fadb4a18d9a79d2f438a5d9. \nb) Click the tag dropdown, and find the latest static tag. An example tag\nis\n\u0027v3.3.0-1\u0027. \nc) Edit the gatekeeper operator policy and update the image tag to use the\nlatest static tag. For example, you might change this line to image:\n\u0027registry.redhat.io/rhacm2/gatekeeper-rhel8:v3.3.0-1\u0027. \n\nRefer to https://open-policy-agent.github.io/gatekeeper/website/docs/howto/\nfor additional information. \n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n2030787 - CVE-2021-43565 golang.org/x/crypto: empty plaintext packet causes panic\n2053429 - CVE-2022-23806 golang: crypto/elliptic IsOnCurve returns true for invalid field elements\n\n5. 
References:\n\nhttps://access.redhat.com/security/cve/CVE-2019-5827\nhttps://access.redhat.com/security/cve/CVE-2019-13750\nhttps://access.redhat.com/security/cve/CVE-2019-13751\nhttps://access.redhat.com/security/cve/CVE-2019-17594\nhttps://access.redhat.com/security/cve/CVE-2019-17595\nhttps://access.redhat.com/security/cve/CVE-2019-18218\nhttps://access.redhat.com/security/cve/CVE-2019-19603\nhttps://access.redhat.com/security/cve/CVE-2019-20838\nhttps://access.redhat.com/security/cve/CVE-2020-12762\nhttps://access.redhat.com/security/cve/CVE-2020-13435\nhttps://access.redhat.com/security/cve/CVE-2020-14155\nhttps://access.redhat.com/security/cve/CVE-2020-16135\nhttps://access.redhat.com/security/cve/CVE-2020-24370\nhttps://access.redhat.com/security/cve/CVE-2021-3200\nhttps://access.redhat.com/security/cve/CVE-2021-3445\nhttps://access.redhat.com/security/cve/CVE-2021-3521\nhttps://access.redhat.com/security/cve/CVE-2021-3580\nhttps://access.redhat.com/security/cve/CVE-2021-3712\nhttps://access.redhat.com/security/cve/CVE-2021-3800\nhttps://access.redhat.com/security/cve/CVE-2021-3999\nhttps://access.redhat.com/security/cve/CVE-2021-20231\nhttps://access.redhat.com/security/cve/CVE-2021-20232\nhttps://access.redhat.com/security/cve/CVE-2021-22876\nhttps://access.redhat.com/security/cve/CVE-2021-22898\nhttps://access.redhat.com/security/cve/CVE-2021-22925\nhttps://access.redhat.com/security/cve/CVE-2021-23177\nhttps://access.redhat.com/security/cve/CVE-2021-28153\nhttps://access.redhat.com/security/cve/CVE-2021-31566\nhttps://access.redhat.com/security/cve/CVE-2021-33560\nhttps://access.redhat.com/security/cve/CVE-2021-36084\nhttps://access.redhat.com/security/cve/CVE-2021-36085\nhttps://access.redhat.com/security/cve/CVE-2021-36086\nhttps://access.redhat.com/security/cve/CVE-2021-36087\nhttps://access.redhat.com/security/cve/CVE-2021-42574\nhttps://access.redhat.com/security/cve/CVE-2021-43565\nhttps://access.redhat.com/security/cve/CVE-2022-23218\nhttps://acces
s.redhat.com/security/cve/CVE-2022-23219\nhttps://access.redhat.com/security/cve/CVE-2022-23308\nhttps://access.redhat.com/security/cve/CVE-2022-23806\nhttps://access.redhat.com/security/cve/CVE-2022-24407\nhttps://access.redhat.com/security/updates/classification/#moderate\nhttps://open-policy-agent.github.io/gatekeeper/website/docs/howto/\n\n6. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. Summary:\n\nAn update for libxml2 is now available for Red Hat Enterprise Linux 8. Relevant releases/architectures:\n\nRed Hat Enterprise Linux AppStream (v. 8) - aarch64, ppc64le, s390x, x86_64\n\n3. Description:\n\nThe libxml2 library is a development toolbox providing the implementation\nof various XML standards. \n\nSecurity Fix(es):\n\n* libxml2: Use-after-free of ID and IDREF attributes (CVE-2022-23308)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\nThe desktop must be restarted (log out, then log back in) for this update\nto take effect. Package List:\n\nRed Hat Enterprise Linux AppStream (v. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. Clusters and applications are all visible and\nmanaged from a single console\u2014with security policy built in. 
See the following\nRelease Notes documentation, which will be updated shortly for this\nrelease, for additional details about this release:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/\n\nSecurity updates:\n\n* nanoid: Information disclosure via valueOf() function (CVE-2021-23566)\n\n* nodejs-shelljs: improper privilege management (CVE-2022-0144)\n\n* follow-redirects: Exposure of Private Personal Information to an\nUnauthorized Actor (CVE-2022-0155)\n\n* node-fetch: exposure of sensitive information to an unauthorized actor\n(CVE-2022-0235)\n\n* follow-redirects: Exposure of Sensitive Information via Authorization\nHeader leak (CVE-2022-0536)\n\nBug fix:\n\n* RHACM 2.3.8 images (Bugzilla #2062316)\n\n3. Bugs fixed (https://bugzilla.redhat.com/):\n\n2043535 - CVE-2022-0144 nodejs-shelljs: improper privilege management\n2044556 - CVE-2022-0155 follow-redirects: Exposure of Private Personal Information to an Unauthorized Actor\n2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor\n2050853 - CVE-2021-23566 nanoid: Information disclosure via valueOf() function\n2053259 - CVE-2022-0536 follow-redirects: Exposure of Sensitive Information via Authorization Header leak\n2062316 - RHACM 2.3.8 images\n\n5. Alternatively, on your watch, select\n\"My Watch \u003e General \u003e About\". -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\nAPPLE-SA-2022-05-16-4 Security Update 2022-004 Catalina\n\nSecurity Update 2022-004 Catalina addresses the following issues. \nInformation about the security content is also available at\nhttps://support.apple.com/HT213255. \n\napache\nAvailable for: macOS Catalina\nImpact: Multiple issues in apache\nDescription: Multiple issues were addressed by updating apache to\nversion 2.4.53. 
\nCVE-2021-44224\nCVE-2021-44790\nCVE-2022-22719\nCVE-2022-22720\nCVE-2022-22721\n\nAppKit\nAvailable for: macOS Catalina\nImpact: A malicious application may be able to gain root privileges\nDescription: A logic issue was addressed with improved validation. \nCVE-2022-22665: Lockheed Martin Red Team\n\nAppleGraphicsControl\nAvailable for: macOS Catalina\nImpact: Processing a maliciously crafted image may lead to arbitrary\ncode execution\nDescription: A memory corruption issue was addressed with improved\ninput validation. \nCVE-2022-26751: Michael DePlante (@izobashi) of Trend Micro Zero Day\nInitiative\n\nAppleScript\nAvailable for: macOS Catalina\nImpact: Processing a maliciously crafted AppleScript binary may\nresult in unexpected application termination or disclosure of process\nmemory\nDescription: An out-of-bounds read was addressed with improved input\nvalidation. \nCVE-2022-26697: Qi Sun and Robert Ai of Trend Micro\n\nAppleScript\nAvailable for: macOS Catalina\nImpact: Processing a maliciously crafted AppleScript binary may\nresult in unexpected application termination or disclosure of process\nmemory\nDescription: An out-of-bounds read was addressed with improved bounds\nchecking. \nCVE-2022-26698: Qi Sun of Trend Micro\n\nCoreTypes\nAvailable for: macOS Catalina\nImpact: A malicious application may bypass Gatekeeper checks\nDescription: This issue was addressed with improved checks to prevent\nunauthorized actions. \nCVE-2022-22663: Arsenii Kostromin (0x3c3e)\n\nCVMS\nAvailable for: macOS Catalina\nImpact: A malicious application may be able to gain root privileges\nDescription: A memory initialization issue was addressed. \nCVE-2022-26721: Yonghwi Jin (@jinmo123) of Theori\nCVE-2022-26722: Yonghwi Jin (@jinmo123) of Theori\n\nDriverKit\nAvailable for: macOS Catalina\nImpact: A malicious application may be able to execute arbitrary code\nwith system privileges\nDescription: An out-of-bounds access issue was addressed with\nimproved bounds checking. 
\nCVE-2022-26763: Linus Henze of Pinauten GmbH (pinauten.de)\n\nGraphics Drivers\nAvailable for: macOS Catalina\nImpact: A local user may be able to read kernel memory\nDescription: An out-of-bounds read issue existed that led to the\ndisclosure of kernel memory. This was addressed with improved input\nvalidation. \nCVE-2022-22674: an anonymous researcher\n\nIntel Graphics Driver\nAvailable for: macOS Catalina\nImpact: A malicious application may be able to execute arbitrary code\nwith kernel privileges\nDescription: An out-of-bounds write issue was addressed with improved\nbounds checking. \nCVE-2022-26720: Liu Long of Ant Security Light-Year Lab\n\nIntel Graphics Driver\nAvailable for: macOS Catalina\nImpact: A malicious application may be able to execute arbitrary code\nwith kernel privileges\nDescription: An out-of-bounds read issue was addressed with improved\ninput validation. \nCVE-2022-26770: Liu Long of Ant Security Light-Year Lab\n\nIntel Graphics Driver\nAvailable for: macOS Catalina\nImpact: An application may be able to execute arbitrary code with\nkernel privileges\nDescription: An out-of-bounds write issue was addressed with improved\ninput validation. \nCVE-2022-26756: Jack Dates of RET2 Systems, Inc\n\nIntel Graphics Driver\nAvailable for: macOS Catalina\nImpact: A malicious application may be able to execute arbitrary code\nwith kernel privileges\nDescription: A memory corruption issue was addressed with improved\ninput validation. \nCVE-2022-26769: Antonio Zekic (@antoniozekic)\n\nIntel Graphics Driver\nAvailable for: macOS Catalina\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: An out-of-bounds write issue was addressed with improved\ninput validation. 
\nCVE-2022-26748: Jeonghoon Shin of Theori working with Trend Micro\nZero Day Initiative\n\nKernel\nAvailable for: macOS Catalina\nImpact: An application may be able to execute arbitrary code with\nkernel privileges\nDescription: A memory corruption issue was addressed with improved\nvalidation. \nCVE-2022-26714: Peter Nguy\u1ec5n V\u0169 Ho\u00e0ng (@peternguyen14) of STAR Labs\n(@starlabs_sg)\n\nKernel\nAvailable for: macOS Catalina\nImpact: An application may be able to execute arbitrary code with\nkernel privileges\nDescription: A use after free issue was addressed with improved\nmemory management. \nCVE-2022-26757: Ned Williamson of Google Project Zero\n\nlibresolv\nAvailable for: macOS Catalina\nImpact: An attacker may be able to cause unexpected application\ntermination or arbitrary code execution\nDescription: An integer overflow was addressed with improved input\nvalidation. \nCVE-2022-26775: Max Shavrick (@_mxms) of the Google Security Team\n\nLibreSSL\nAvailable for: macOS Catalina\nImpact: Processing a maliciously crafted certificate may lead to a\ndenial of service\nDescription: A denial of service issue was addressed with improved\ninput validation. \nCVE-2022-0778\n\nlibxml2\nAvailable for: macOS Catalina\nImpact: A remote attacker may be able to cause unexpected application\ntermination or arbitrary code execution\nDescription: A use after free issue was addressed with improved\nmemory management. \nCVE-2022-23308\n\nOpenSSL\nAvailable for: macOS Catalina\nImpact: Processing a maliciously crafted certificate may lead to a\ndenial of service\nDescription: This issue was addressed with improved checks. \nCVE-2022-0778\n\nPackageKit\nAvailable for: macOS Catalina\nImpact: A malicious application may be able to modify protected parts\nof the file system\nDescription: This issue was addressed with improved entitlements. 
\nCVE-2022-26727: Mickey Jin (@patch1t)\n\nPrinting\nAvailable for: macOS Catalina\nImpact: A malicious application may be able to bypass Privacy\npreferences\nDescription: This issue was addressed by removing the vulnerable\ncode. \nCVE-2022-26746: @gorelics\n\nSecurity\nAvailable for: macOS Catalina\nImpact: A malicious app may be able to bypass signature validation\nDescription: A certificate parsing issue was addressed with improved\nchecks. \nCVE-2022-26766: Linus Henze of Pinauten GmbH (pinauten.de)\n\nSMB\nAvailable for: macOS Catalina\nImpact: An application may be able to gain elevated privileges\nDescription: An out-of-bounds write issue was addressed with improved\nbounds checking. \nCVE-2022-26715: Peter Nguy\u1ec5n V\u0169 Ho\u00e0ng of STAR Labs\n\nSoftwareUpdate\nAvailable for: macOS Catalina\nImpact: A malicious application may be able to access restricted\nfiles\nDescription: This issue was addressed with improved entitlements. \nCVE-2022-26728: Mickey Jin (@patch1t)\n\nTCC\nAvailable for: macOS Catalina\nImpact: An app may be able to capture a user\u0027s screen\nDescription: This issue was addressed with improved checks. \nCVE-2022-26726: an anonymous researcher\n\nTcl\nAvailable for: macOS Catalina\nImpact: A malicious application may be able to break out of its\nsandbox\nDescription: This issue was addressed with improved environment\nsanitization. \nCVE-2022-26755: Arsenii Kostromin (0x3c3e)\n\nWebKit\nAvailable for: macOS Catalina\nImpact: Processing a maliciously crafted mail message may lead to\nrunning arbitrary javascript\nDescription: A validation issue was addressed with improved input\nsanitization. \nCVE-2022-22589: Heige of KnownSec 404 Team (knownsec.com) and Bo Qu\nof Palo Alto Networks (paloaltonetworks.com)\n\nWi-Fi\nAvailable for: macOS Catalina\nImpact: An application may be able to execute arbitrary code with\nkernel privileges\nDescription: A memory corruption issue was addressed with improved\nmemory handling. 
\nCVE-2022-26761: Wang Yu of Cyberserval\n\nzip\nAvailable for: macOS Catalina\nImpact: Processing a maliciously crafted file may lead to a denial of\nservice\nDescription: A denial of service issue was addressed with improved\nstate handling. \nCVE-2022-0530\n\nzlib\nAvailable for: macOS Catalina\nImpact: An attacker may be able to cause unexpected application\ntermination or arbitrary code execution\nDescription: A memory corruption issue was addressed with improved\ninput validation. \nCVE-2018-25032: Tavis Ormandy\n\nzsh\nAvailable for: macOS Catalina\nImpact: A remote attacker may be able to cause arbitrary code\nexecution\nDescription: This issue was addressed by updating to zsh version\n5.8.1. \nCVE-2021-45444\n\nAdditional recognition\n\nPackageKit\nWe would like to acknowledge Mickey Jin (@patch1t) of Trend Micro for\ntheir assistance. \n\nSecurity Update 2022-004 Catalina may be obtained from the Mac App\nStore or Apple\u0027s Software Downloads web site:\nhttps://support.apple.com/downloads/\nAll information is also posted on the Apple Security Updates\nweb site: https://support.apple.com/en-us/HT201222. 
\n\nThis message is signed with Apple\u0027s Product Security PGP key,\nand details are available at:\nhttps://www.apple.com/support/security/pgp/\n-----BEGIN PGP SIGNATURE-----\n\niQIzBAEBCAAdFiEEePiLW1MrMjw19XzoeC9qKD1prhgFAmKC1TYACgkQeC9qKD1p\nrhjgGRAAggg84uE4zYtBHmo5Qz45wlY/+FT7bSyCyo2Ta0m3JQmm26UiS9ZzXlD0\n58jCo/ti+gH/gqwU05SnaG88pSMT6VKaDDnmw8WcrPtbl6NN6JX8vaZLFLoGO0dB\nrjwap7ulcLe7/HM8kCz3qqjKj4fusxckCjmm5yBMtuMklq7i51vzkT/+ws00ALcH\n4S821CqIJlS2RIho/M/pih5A/H1Onw/nzKc7VOWjWMmmwoV+oiL4gMPE9kyIAJFQ\nNcZO7s70Qp9N5Z0VGIkD5HkAntEqYGNKJuCQUrHS0fHFUxVrQcuBbbSiv7vwnOT0\nNVcFKBQWJtfcqmtcDF8mVi2ocqUh7So6AXhZGZtL3CrVfNMgTcjq6y5XwzXMgwlm\nezMX73MnV91QuGp6KVZEmoFNlJ2dhKcJ0fYAhhW9DJqvJ1u5xIkQrUkK/ERLnWpE\n9DIapT8uUbb9Zgez/tS9szv5jHhKtOoPbprju7d7LHw7XMFCVKbUvx745dFZx0AG\nPLsJZQNsQZJIK8QdcLA50KrlyjR2ts4nUsKj07I6LR4wUmcaj+goXYq4Nh4WLnoF\nx1AXD5ztdYlhqMcTAnuAbUYfuki0uzSy0p7wBiTknFwKMZNIaiToo64BES+7Iu1i\nvrB9SdtTSQCMXgPZX1Al1e2F/K2ubovrGU9geAEwLMq3AKudI4g=\n=JBHs\n-----END PGP SIGNATURE-----\n\n\n. Summary:\n\nThe Migration Toolkit for Containers (MTC) 1.7.1 is now available. Description:\n\nThe Migration Toolkit for Containers (MTC) enables you to migrate\nKubernetes resources, persistent volume data, and internal container images\nbetween OpenShift Container Platform clusters, using the MTC web console or\nthe Kubernetes API. Solution:\n\nFor details on how to install and use MTC, refer to:\n\nhttps://docs.openshift.com/container-platform/latest/migration_toolkit_for_containers/installing-mtc.html\n\n4. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n2020725 - CVE-2021-41771 golang: debug/macho: invalid dynamic symbol table command can cause panic\n2020736 - CVE-2021-41772 golang: archive/zip: Reader.Open panics on empty string\n2024938 - CVE-2021-41190 opencontainers: OCI manifest and index parsing confusion\n2030801 - CVE-2021-44716 golang: net/http: limit growth of header canonicalization cache\n2030806 - CVE-2021-44717 golang: syscall: don\u0027t close fd 0 on ForkExec error\n2040378 - Don\u0027t allow Storage class conversion migration if source cluster has only one storage class defined [backend]\n2057516 - [MTC UI] UI should not allow PVC mapping for Full migration\n2060244 - [MTC] DIM registry route need to be exposed to create inter-cluster state migration plans\n2060717 - [MTC] Registry pod goes in CrashLoopBackOff several times when MCG Nooba is used as the Replication Repository\n2061347 - [MTC] Log reader pod is missing velero and restic pod logs. \n2061653 - [MTC UI] Migration Resources section showing pods from other namespaces\n2062682 - [MTC] Destination storage class non-availability warning visible in Intra-cluster source to source state-migration migplan. \n2065837 - controller_config.yml.j2 merge type should be set to merge (currently using the default strategic)\n2071000 - Storage Conversion: UI doesn\u0027t have the ability to skip PVC\n2072036 - Migration plan for storage conversion cannot be created if there\u0027s no replication repository\n2072186 - Wrong migration type description\n2072684 - Storage Conversion: PersistentVolumeClaimTemplates in StatefulSets are not updated automatically after migration\n2073496 - Errors in rsync pod creation are not printed in the controller logs\n2079814 - [MTC UI] Intra-cluster state migration plan showing a warning on PersistentVolumes page\n\n5. 
==========================================================================\nUbuntu Security Notice USN-5422-1\nMay 16, 2022\n\nlibxml2 vulnerabilities\n==========================================================================\n\nA security issue affects these releases of Ubuntu and its derivatives:\n\n- Ubuntu 22.04 LTS\n- Ubuntu 21.10\n- Ubuntu 20.04 LTS\n- Ubuntu 18.04 LTS\n- Ubuntu 16.04 ESM\n- Ubuntu 14.04 ESM\n\nSummary:\n\nSeveral security issues were fixed in libxml2. This issue only\naffected Ubuntu 14.04 ESM, and Ubuntu 16.04 ESM. (CVE-2022-23308)\n\nIt was discovered that libxml2 incorrectly handled certain XML files. \nAn attacker could possibly use this issue to cause a crash or execute\narbitrary code. (CVE-2022-29824)\n\nUpdate instructions:\n\nThe problem can be corrected by updating your system to the following\npackage versions:\n\nUbuntu 22.04 LTS:\n libxml2 2.9.13+dfsg-1ubuntu0.1\n libxml2-utils 2.9.13+dfsg-1ubuntu0.1\n\nUbuntu 21.10:\n libxml2 2.9.12+dfsg-4ubuntu0.2\n libxml2-utils 2.9.12+dfsg-4ubuntu0.2\n\nUbuntu 20.04 LTS:\n libxml2 2.9.10+dfsg-5ubuntu0.20.04.3\n libxml2-utils 2.9.10+dfsg-5ubuntu0.20.04.3\n\nUbuntu 18.04 LTS:\n libxml2 2.9.4+dfsg1-6.1ubuntu1.6\n libxml2-utils 2.9.4+dfsg1-6.1ubuntu1.6\n\nUbuntu 16.04 ESM:\n libxml2 2.9.3+dfsg1-1ubuntu0.7+esm2\n libxml2-utils 2.9.3+dfsg1-1ubuntu0.7+esm2\n\nUbuntu 14.04 ESM:\n libxml2 2.9.1+dfsg1-3ubuntu4.13+esm3\n libxml2-utils 2.9.1+dfsg1-3ubuntu4.13+esm3\n\nIn general, a standard system update will make all the necessary changes. Apple is aware of a report that this issue may\nhave been actively exploited. \nCVE-2022-26724: Jorge A. \nCVE-2022-26765: Linus Henze of Pinauten GmbH (pinauten.de)\n\nLaunchServices\nAvailable for: Apple TV 4K, Apple TV 4K (2nd generation), and Apple\nTV HD\nImpact: A sandboxed process may be able to circumvent sandbox\nrestrictions\nDescription: An access issue was addressed with additional sandbox\nrestrictions on third-party applications. 
\n\nApple TV will periodically check for software updates", "sources": [ { "db": "NVD", "id": "CVE-2022-23308" }, { "db": "VULHUB", "id": "VHN-412332" }, { "db": "PACKETSTORM", "id": "166489" }, { "db": "PACKETSTORM", "id": "166327" }, { "db": "PACKETSTORM", "id": "166516" }, { "db": "PACKETSTORM", "id": "167193" }, { "db": "PACKETSTORM", "id": "167189" }, { "db": "PACKETSTORM", "id": "166976" }, { "db": "PACKETSTORM", "id": "167184" }, { "db": "PACKETSTORM", "id": "167194" } ], "trust": 1.71 }, "exploit_availability": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/exploit_availability#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "reference": "https://www.scap.org.cn/vuln/vhn-412332", "trust": 0.1, "type": "unknown" } ], "sources": [ { "db": "VULHUB", "id": "VHN-412332" } ] }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "NVD", "id": "CVE-2022-23308", "trust": 2.5 }, { "db": "PACKETSTORM", "id": "167194", "trust": 0.8 }, { "db": "PACKETSTORM", "id": "166327", "trust": 0.8 }, { "db": "PACKETSTORM", "id": "167008", "trust": 0.7 }, { "db": "PACKETSTORM", "id": "166437", "trust": 0.7 }, { "db": "PACKETSTORM", "id": "168719", "trust": 0.7 }, { "db": "PACKETSTORM", "id": "166304", "trust": 0.7 }, { "db": "AUSCERT", "id": "ESB-2022.2569", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.1263", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2023.3732", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.1677", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.0927", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.1051", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.2411", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.4099", "trust": 0.6 }, { 
"db": "AUSCERT", "id": "ESB-2022.1073", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.5782", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.3672", "trust": 0.6 }, { "db": "PACKETSTORM", "id": "166803", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2022051708", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2022031503", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2022051713", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2022042138", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2022072710", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2022072053", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2022032843", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2022072640", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2022041523", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2022051839", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2022051326", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2022030110", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2022031620", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2022031525", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2022032445", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2022053128", "trust": 0.6 }, { "db": "CNNVD", "id": "CNNVD-202202-1722", "trust": 0.6 }, { "db": "PACKETSTORM", "id": "167189", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "167184", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "167193", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "166431", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "166433", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "167188", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "167185", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "167186", "trust": 0.1 }, { "db": "VULHUB", "id": "VHN-412332", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "166489", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "166516", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "166976", "trust": 0.1 } ], "sources": [ { "db": "VULHUB", "id": "VHN-412332" }, { "db": "PACKETSTORM", "id": "166489" }, { "db": "PACKETSTORM", "id": "166327" }, { 
"db": "PACKETSTORM", "id": "166516" }, { "db": "PACKETSTORM", "id": "167193" }, { "db": "PACKETSTORM", "id": "167189" }, { "db": "PACKETSTORM", "id": "166976" }, { "db": "PACKETSTORM", "id": "167184" }, { "db": "PACKETSTORM", "id": "167194" }, { "db": "CNNVD", "id": "CNNVD-202202-1722" }, { "db": "NVD", "id": "CVE-2022-23308" } ] }, "id": "VAR-202202-0906", "iot": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": true, "sources": [ { "db": "VULHUB", "id": "VHN-412332" } ], "trust": 0.01 }, "last_update_date": "2024-07-23T19:35:48.751000Z", "patch": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/patch#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "title": "libxml2 Remediation of resource management error vulnerabilities", "trust": 0.6, "url": "http://123.124.177.30/web/xxk/bdxqbyid.tag?id=184325" } ], "sources": [ { "db": "CNNVD", "id": "CNNVD-202202-1722" } ] }, "problemtype_data": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "problemtype": "CWE-416", "trust": 1.1 } ], "sources": [ { "db": "VULHUB", "id": "VHN-412332" }, { "db": "NVD", "id": "CVE-2022-23308" } ] }, "references": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/references#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 1.7, "url": "https://github.com/gnome/libxml2/commit/652dd12a858989b14eed4e84e453059cd3ba340e" }, { "trust": 1.7, "url": "https://security.netapp.com/advisory/ntap-20220331-0008/" }, { "trust": 1.7, "url": "https://support.apple.com/kb/ht213253" }, { "trust": 1.7, "url": 
"https://support.apple.com/kb/ht213254" }, { "trust": 1.7, "url": "https://support.apple.com/kb/ht213255" }, { "trust": 1.7, "url": "https://support.apple.com/kb/ht213256" }, { "trust": 1.7, "url": "https://support.apple.com/kb/ht213257" }, { "trust": 1.7, "url": "https://support.apple.com/kb/ht213258" }, { "trust": 1.7, "url": "http://seclists.org/fulldisclosure/2022/may/34" }, { "trust": 1.7, "url": "http://seclists.org/fulldisclosure/2022/may/38" }, { "trust": 1.7, "url": "http://seclists.org/fulldisclosure/2022/may/35" }, { "trust": 1.7, "url": "http://seclists.org/fulldisclosure/2022/may/33" }, { "trust": 1.7, "url": "http://seclists.org/fulldisclosure/2022/may/36" }, { "trust": 1.7, "url": "http://seclists.org/fulldisclosure/2022/may/37" }, { "trust": 1.7, "url": "https://security.gentoo.org/glsa/202210-03" }, { "trust": 1.7, "url": "https://gitlab.gnome.org/gnome/libxml2/-/blob/v2.9.13/news" }, { "trust": 1.7, "url": "https://www.oracle.com/security-alerts/cpujul2022.html" }, { "trust": 1.7, "url": "https://lists.debian.org/debian-lts-announce/2022/04/msg00004.html" }, { "trust": 1.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-23308" }, { "trust": 1.0, "url": "https://access.redhat.com/security/cve/cve-2022-23308" }, { "trust": 1.0, "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/la3mwwayzadwj5f6joubx65uzamqb7rf/" }, { "trust": 0.7, "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/la3mwwayzadwj5f6joubx65uzamqb7rf/" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022051713" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.2569" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022072710" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022051839" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.1051" }, { "trust": 0.6, "url": 
"https://www.auscert.org.au/bulletins/esb-2022.1073" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022072053" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.4099" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.5782" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/166803/red-hat-security-advisory-2022-1390-01.html" }, { "trust": 0.6, "url": "https://vigilance.fr/vulnerability/libxml2-five-vulnerabilities-37614" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022032843" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/166304/ubuntu-security-notice-usn-5324-1.html" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022053128" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/167194/apple-security-advisory-2022-05-16-6.html" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.2411" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022032445" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022051326" }, { "trust": 0.6, "url": "https://cxsecurity.com/cveshow/cve-2022-23308/" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.1263" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022072640" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022051708" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2023.3732" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022042138" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022041523" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/168719/gentoo-linux-security-advisory-202210-03.html" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022030110" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.0927" }, { "trust": 0.6, "url": 
"https://support.apple.com/en-us/ht213254" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.3672" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022031503" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022031525" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/167008/red-hat-security-advisory-2022-1747-01.html" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/166327/red-hat-security-advisory-2022-0899-01.html" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/166437/red-hat-security-advisory-2022-1039-01.html" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022031620" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.1677" }, { "trust": 0.4, "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce" }, { "trust": 0.4, "url": "https://access.redhat.com/security/team/contact/" }, { "trust": 0.4, "url": "https://access.redhat.com/security/updates/classification/#moderate" }, { "trust": 0.4, "url": "https://bugzilla.redhat.com/):" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23177" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-23219" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3999" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-31566" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-31566" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-23177" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-23218" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26714" }, { "trust": 0.3, "url": "https://www.apple.com/support/security/pgp/" }, { "trust": 0.3, "url": "https://support.apple.com/en-us/ht201222." 
}, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-0261" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-25315" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22825" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-22824" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-22823" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-22822" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0261" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0361" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-23852" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22823" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0318" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-22827" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22824" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-45960" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22822" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-46143" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3999" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0413" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-46143" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-0392" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-22825" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-25235" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-0361" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-45960" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-22826" }, { "trust": 0.2, 
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0359" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-0318" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0392" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-0413" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-0359" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-25236" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26719" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26726" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26766" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26709" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26702" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26764" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26717" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26745" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26765" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26700" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26716" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26757" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22675" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26706" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26710" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26763" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26711" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26768" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0778" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3580" }, { "trust": 0.1, 
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-36084" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-24370" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3712" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-20231" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-22898" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-14155" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3200" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33560" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:1081" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-18218" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-33560" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3445" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-16135" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-12762" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-36085" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-20838" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13750" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-24407" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-24370" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-42574" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-13750" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-36086" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-28153" }, { "trust": 0.1, "url": "https://open-policy-agent.github.io/gatekeeper/website/docs/howto/" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3200" }, { "trust": 0.1, 
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-17595" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13435" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-17595" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20838" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-19603" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-22925" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-28153" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-13751" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22925" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-5827" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-17594" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-13435" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-36084" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-16135" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3521" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3445" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14155" }, { "trust": 0.1, "url": "https://catalog.redhat.com/software/containers/rhacm2/gatekeeper-rhel8/5fadb4a18d9a79d2f438a5d9." }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-20232" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-22876" }, { "trust": 0.1, "url": "https://open-policy-agent.github.io/gatekeeper/website/docs/howto/." 
}, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-36087" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-43565" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17594" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20231" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20232" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22898" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19603" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-12762" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-18218" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22876" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3521" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3800" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-5827" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13751" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-23806" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3580" }, { "trust": 0.1, "url": "https://access.redhat.com/articles/11258" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:0899" }, { "trust": 0.1, "url": "https://access.redhat.com/security/team/key/" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0235" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0330" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0155" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0235" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0516" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0536" }, { "trust": 0.1, "url": 
"https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html-single/install/index#installing" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/index" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0492" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0536" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:1083" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-0920" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0144" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0847" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23566" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-0920" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0435" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0435" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0847" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0330" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4154" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0144" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0516" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-22942" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-4154" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23566" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0155" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2022-0492" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26771" }, { "trust": 0.1, "url": "https://support.apple.com/kb/ht204641" }, { "trust": 0.1, "url": "https://support.apple.com/ht213253." }, { "trust": 0.1, "url": "https://support.apple.com/downloads/" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22721" }, { "trust": 0.1, "url": "https://support.apple.com/ht213255." }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22589" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22663" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-44790" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22674" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0530" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-44224" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26698" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22719" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26727" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26728" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26697" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26748" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26721" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-45444" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25032" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26720" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22720" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22665" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26715" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26722" }, { "trust": 
0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26746" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/latest/migration_toolkit_for_containers/installing-mtc.html" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-41190" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-23218" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1154" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-44717" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-41190" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-44717" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-44716" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1154" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22826" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-44716" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-41772" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-25636" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22827" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1271" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-4028" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.10/migration_toolkit_for_containers/mtc-release-notes.html" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:1734" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4028" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-41772" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-41771" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-41771" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0778" }, { "trust": 0.1, 
"url": "https://access.redhat.com/security/cve/cve-2022-1271" }, { "trust": 0.1, "url": "https://launchpad.net/ubuntu/+source/libxml2/2.9.4+dfsg1-6.1ubuntu1.6" }, { "trust": 0.1, "url": "https://ubuntu.com/security/notices/usn-5422-1" }, { "trust": 0.1, "url": "https://launchpad.net/ubuntu/+source/libxml2/2.9.10+dfsg-5ubuntu0.20.04.3" }, { "trust": 0.1, "url": "https://launchpad.net/ubuntu/+source/libxml2/2.9.12+dfsg-4ubuntu0.2" }, { "trust": 0.1, "url": "https://launchpad.net/ubuntu/+source/libxml2/2.9.13+dfsg-1ubuntu0.1" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-29824" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26701" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26738" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26740" }, { "trust": 0.1, "url": "https://support.apple.com/ht213254." }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26736" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26737" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26724" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26739" } ], "sources": [ { "db": "VULHUB", "id": "VHN-412332" }, { "db": "PACKETSTORM", "id": "166489" }, { "db": "PACKETSTORM", "id": "166327" }, { "db": "PACKETSTORM", "id": "166516" }, { "db": "PACKETSTORM", "id": "167193" }, { "db": "PACKETSTORM", "id": "167189" }, { "db": "PACKETSTORM", "id": "166976" }, { "db": "PACKETSTORM", "id": "167184" }, { "db": "PACKETSTORM", "id": "167194" }, { "db": "CNNVD", "id": "CNNVD-202202-1722" }, { "db": "NVD", "id": "CVE-2022-23308" } ] }, "sources": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "VULHUB", "id": "VHN-412332" }, { "db": "PACKETSTORM", "id": "166489" }, { "db": "PACKETSTORM", "id": "166327" }, { "db": "PACKETSTORM", "id": "166516" }, { "db": "PACKETSTORM", "id": 
"167193" }, { "db": "PACKETSTORM", "id": "167189" }, { "db": "PACKETSTORM", "id": "166976" }, { "db": "PACKETSTORM", "id": "167184" }, { "db": "PACKETSTORM", "id": "167194" }, { "db": "CNNVD", "id": "CNNVD-202202-1722" }, { "db": "NVD", "id": "CVE-2022-23308" } ] }, "sources_release_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2022-02-26T00:00:00", "db": "VULHUB", "id": "VHN-412332" }, { "date": "2022-03-28T15:52:16", "db": "PACKETSTORM", "id": "166489" }, { "date": "2022-03-16T16:44:24", "db": "PACKETSTORM", "id": "166327" }, { "date": "2022-03-29T15:53:19", "db": "PACKETSTORM", "id": "166516" }, { "date": "2022-05-17T17:06:32", "db": "PACKETSTORM", "id": "167193" }, { "date": "2022-05-17T16:59:55", "db": "PACKETSTORM", "id": "167189" }, { "date": "2022-05-05T17:35:22", "db": "PACKETSTORM", "id": "166976" }, { "date": "2022-05-17T16:57:29", "db": "PACKETSTORM", "id": "167184" }, { "date": "2022-05-17T17:06:48", "db": "PACKETSTORM", "id": "167194" }, { "date": "2022-02-21T00:00:00", "db": "CNNVD", "id": "CNNVD-202202-1722" }, { "date": "2022-02-26T05:15:08.280000", "db": "NVD", "id": "CVE-2022-23308" } ] }, "sources_update_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2022-11-02T00:00:00", "db": "VULHUB", "id": "VHN-412332" }, { "date": "2023-06-30T00:00:00", "db": "CNNVD", "id": "CNNVD-202202-1722" }, { "date": "2023-11-07T03:44:08.253000", "db": "NVD", "id": "CVE-2022-23308" } ] }, "threat_type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/threat_type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "remote", "sources": [ { "db": "CNNVD", "id": "CNNVD-202202-1722" } ], "trust": 0.6 }, "title": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/title#", "sources": { "@container": 
"@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "libxml2 Resource Management Error Vulnerability", "sources": [ { "db": "CNNVD", "id": "CNNVD-202202-1722" } ], "trust": 0.6 }, "type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "resource management error", "sources": [ { "db": "CNNVD", "id": "CNNVD-202202-1722" } ], "trust": 0.6 } }
var-202210-0997
Vulnerability from variot
An issue was discovered in libxml2 before 2.10.3. When parsing a multi-gigabyte XML document with the XML_PARSE_HUGE parser option enabled, several integer counters can overflow. This results in an attempt to access an array at a negative 2GB offset, typically leading to a segmentation fault. libxml2 from xmlsoft.org, and products from other vendors that bundle it, contain this integer overflow vulnerability, which may put an affected service into a denial-of-service (DoS) state. libxml2 is an open source library for parsing XML documents; it is written in C and can be called from many other languages, including C++. At present there is no further information about this vulnerability; watch CNNVD or vendor announcements for updates.
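Since the affected range is simply "before 2.10.3", exposure can be checked mechanically against an installed version string. A minimal sketch using GNU `sort -V` for version-aware comparison (the `is_affected` helper name is hypothetical, not part of any advisory):

```shell
# Hypothetical helper: succeeds when the supplied libxml2 version string
# sorts before the first fixed release, 2.10.3, and is therefore in the
# affected range described above. Assumes GNU sort with -V support.
is_affected() {
    fixed="2.10.3"
    # sort -V performs a version-aware sort; the given version is older
    # than the fix exactly when it sorts first and is not equal to it.
    oldest=$(printf '%s\n%s\n' "$1" "$fixed" | sort -V | head -n 1)
    [ "$oldest" = "$1" ] && [ "$1" != "$fixed" ]
}

is_affected "2.9.13"  && echo "2.9.13 is affected"
is_affected "2.10.3"  || echo "2.10.3 is fixed"
```

The same comparison applies to the distribution-specific fixed versions listed below, substituting the relevant version string.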
CVE-2022-40304
Ned Williamson and Nathan Wachholz discovered a vulnerability when
handling detection of entity reference cycles, which may result in
corrupted dictionary entries. This flaw may lead to logic errors,
including memory errors like double free flaws.
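For context on the class of input CVE-2022-40304 concerns, the fragment below writes a minimal, illustrative document containing an entity reference cycle. This sketch only constructs the file; it does not trigger or demonstrate the flaw, and a conforming parser is expected to reject such input with a recursion error:

```shell
# Write a minimal XML document whose two internal entities reference each
# other, forming the kind of entity reference cycle the advisory text
# describes. Purely illustrative; behavior on a given libxml2 build
# depends on its version.
cat > entity-cycle.xml <<'EOF'
<?xml version="1.0"?>
<!DOCTYPE d [
  <!ENTITY a "&b;">
  <!ENTITY b "&a;">
]>
<d>&a;</d>
EOF

# If xmllint is installed, it reports a recursion error rather than
# looping or substituting the entities:
#   xmllint --noent entity-cycle.xml
```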
For the stable distribution (bullseye), these problems have been fixed in version 2.9.10+dfsg-6.7+deb11u3.
We recommend that you upgrade your libxml2 packages. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - Gentoo Linux Security Advisory GLSA 202210-39
https://security.gentoo.org/
Severity: High Title: libxml2: Multiple Vulnerabilities Date: October 31, 2022 Bugs: #877149 ID: 202210-39
Synopsis
Multiple vulnerabilities have been found in libxml2, the worst of which could result in arbitrary code execution.
Background
libxml2 is the XML C parser and toolkit developed for the GNOME project.
Affected packages
-------------------------------------------------------------------
Package / Vulnerable / Unaffected
-------------------------------------------------------------------
1 dev-libs/libxml2 < 2.10.3 >= 2.10.3
Description
Multiple vulnerabilities have been discovered in libxml2. Please review the CVE identifiers referenced below for details.
Impact
Please review the referenced CVE identifiers for details.
Workaround
There is no known workaround at this time.
Resolution
All libxml2 users should upgrade to the latest version:
# emerge --sync # emerge --ask --oneshot --verbose ">=dev-libs/libxml2-2.10.3"
References
[ 1 ] CVE-2022-40303 https://nvd.nist.gov/vuln/detail/CVE-2022-40303 [ 2 ] CVE-2022-40304 https://nvd.nist.gov/vuln/detail/CVE-2022-40304
Availability
This GLSA and any updates to it are available for viewing at the Gentoo Security Website:
https://security.gentoo.org/glsa/202210-39
Concerns?
Security is a primary focus of Gentoo Linux and ensuring the confidentiality and security of our users' machines is of utmost importance to us. Any security concerns should be addressed to security@gentoo.org or alternatively, you may file a bug at https://bugs.gentoo.org.
License
Copyright 2022 Gentoo Foundation, Inc; referenced text belongs to its owner(s).
The contents of this document are licensed under the Creative Commons - Attribution / Share Alike license.
https://creativecommons.org/licenses/by-sa/2.5 . Description:
Version 1.27.0 of the OpenShift Serverless Operator is supported on Red Hat OpenShift Container Platform versions 4.8, 4.9, 4.10, 4.11 and 4.12.
This release includes security and bug fixes, and enhancements. Bugs fixed (https://bugzilla.redhat.com/):
2156263 - CVE-2022-46175 json5: Prototype Pollution in JSON5 via Parse Method 2156324 - CVE-2021-35065 glob-parent: Regular Expression Denial of Service
- JIRA issues fixed (https://issues.jboss.org/):
LOG-3397 - [Developer Console] "parse error" when testing with normal user
LOG-3441 - [Administrator Console] Seeing "parse error" while using Severity filter for cluster view user
LOG-3463 - [release-5.6] ElasticsearchError error="400 - Rejected by Elasticsearch" when adding some labels in application namespaces
LOG-3477 - [Logging 5.6.0]CLF raises 'invalid: unrecognized outputs: [default]' after adding default to outputRefs.
LOG-3494 - [release-5.6] After querying logs in loki, compactor pod raises many TLS handshake error if retention policy is enabled.
LOG-3496 - [release-5.6] LokiStack status is still 'Pending' when all loki components are running
LOG-3510 - [release-5.6] TLS errors on Loki controller pod due to bad certificate
- -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
===================================================================== Red Hat Security Advisory
Synopsis: Moderate: OpenShift API for Data Protection (OADP) 1.1.2 security and bug fix update Advisory ID: RHSA-2023:1174-01 Product: OpenShift API for Data Protection Advisory URL: https://access.redhat.com/errata/RHSA-2023:1174 Issue date: 2023-03-09 CVE Names: CVE-2021-46848 CVE-2022-1122 CVE-2022-1304 CVE-2022-2056 CVE-2022-2057 CVE-2022-2058 CVE-2022-2519 CVE-2022-2520 CVE-2022-2521 CVE-2022-2867 CVE-2022-2868 CVE-2022-2869 CVE-2022-2879 CVE-2022-2880 CVE-2022-2953 CVE-2022-4415 CVE-2022-4883 CVE-2022-22624 CVE-2022-22628 CVE-2022-22629 CVE-2022-22662 CVE-2022-25308 CVE-2022-25309 CVE-2022-25310 CVE-2022-26700 CVE-2022-26709 CVE-2022-26710 CVE-2022-26716 CVE-2022-26717 CVE-2022-26719 CVE-2022-27404 CVE-2022-27405 CVE-2022-27406 CVE-2022-30293 CVE-2022-35737 CVE-2022-40303 CVE-2022-40304 CVE-2022-41715 CVE-2022-41717 CVE-2022-42010 CVE-2022-42011 CVE-2022-42012 CVE-2022-42898 CVE-2022-43680 CVE-2022-44617 CVE-2022-46285 CVE-2022-47629 CVE-2022-48303 =====================================================================
- Summary:
OpenShift API for Data Protection (OADP) 1.1.2 is now available.
Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Description:
OpenShift API for Data Protection (OADP) enables you to back up and restore application resources, persistent volume data, and internal container images to external backup storage. OADP enables both file system-based and snapshot-based backups for persistent volumes.
Security Fix(es) from Bugzilla:
- golang: archive/tar: unbounded memory consumption when reading headers (CVE-2022-2879)
- golang: net/http/httputil: ReverseProxy should not forward unparseable query parameters (CVE-2022-2880)
- golang: regexp/syntax: limit memory used by parsing regexps (CVE-2022-41715)
- golang: net/http: An attacker can cause excessive memory growth in a Go server accepting HTTP/2 requests (CVE-2022-41717)
For more details about the security issue(s), including the impact, a CVSS score, and other related information, refer to the CVE page(s) listed in the References section.
- Solution:
Before applying this update, make sure all previously released errata relevant to your system have been applied.
For details on how to apply this update, refer to:
https://access.redhat.com/articles/11258
- Bugs fixed (https://bugzilla.redhat.com/):
2132867 - CVE-2022-2879 golang: archive/tar: unbounded memory consumption when reading headers 2132868 - CVE-2022-2880 golang: net/http/httputil: ReverseProxy should not forward unparseable query parameters 2132872 - CVE-2022-41715 golang: regexp/syntax: limit memory used by parsing regexps 2161274 - CVE-2022-41717 golang: net/http: An attacker can cause excessive memory growth in a Go server accepting HTTP/2 requests
- JIRA issues fixed (https://issues.jboss.org/):
OADP-1056 - DPA fails validation if multiple BSLs have the same provider OADP-1150 - Handle docker env config changes in the oadp-operator OADP-1217 - update velero + restic to 1.9.5 OADP-1256 - Backup stays in progress status after restic pod is restarted due to OOM killed OADP-1289 - Restore partially fails with error "Secrets \"deployer-token-rrjqx\" not found" OADP-290 - Remove creation/usage of velero-privileged SCC
- References:
https://access.redhat.com/security/cve/CVE-2021-46848 https://access.redhat.com/security/cve/CVE-2022-1122 https://access.redhat.com/security/cve/CVE-2022-1304 https://access.redhat.com/security/cve/CVE-2022-2056 https://access.redhat.com/security/cve/CVE-2022-2057 https://access.redhat.com/security/cve/CVE-2022-2058 https://access.redhat.com/security/cve/CVE-2022-2519 https://access.redhat.com/security/cve/CVE-2022-2520 https://access.redhat.com/security/cve/CVE-2022-2521 https://access.redhat.com/security/cve/CVE-2022-2867 https://access.redhat.com/security/cve/CVE-2022-2868 https://access.redhat.com/security/cve/CVE-2022-2869 https://access.redhat.com/security/cve/CVE-2022-2879 https://access.redhat.com/security/cve/CVE-2022-2880 https://access.redhat.com/security/cve/CVE-2022-2953 https://access.redhat.com/security/cve/CVE-2022-4415 https://access.redhat.com/security/cve/CVE-2022-4883 https://access.redhat.com/security/cve/CVE-2022-22624 https://access.redhat.com/security/cve/CVE-2022-22628 https://access.redhat.com/security/cve/CVE-2022-22629 https://access.redhat.com/security/cve/CVE-2022-22662 https://access.redhat.com/security/cve/CVE-2022-25308 https://access.redhat.com/security/cve/CVE-2022-25309 https://access.redhat.com/security/cve/CVE-2022-25310 https://access.redhat.com/security/cve/CVE-2022-26700 https://access.redhat.com/security/cve/CVE-2022-26709 https://access.redhat.com/security/cve/CVE-2022-26710 https://access.redhat.com/security/cve/CVE-2022-26716 https://access.redhat.com/security/cve/CVE-2022-26717 https://access.redhat.com/security/cve/CVE-2022-26719 https://access.redhat.com/security/cve/CVE-2022-27404 https://access.redhat.com/security/cve/CVE-2022-27405 https://access.redhat.com/security/cve/CVE-2022-27406 https://access.redhat.com/security/cve/CVE-2022-30293 https://access.redhat.com/security/cve/CVE-2022-35737 https://access.redhat.com/security/cve/CVE-2022-40303 https://access.redhat.com/security/cve/CVE-2022-40304 
https://access.redhat.com/security/cve/CVE-2022-41715 https://access.redhat.com/security/cve/CVE-2022-41717 https://access.redhat.com/security/cve/CVE-2022-42010 https://access.redhat.com/security/cve/CVE-2022-42011 https://access.redhat.com/security/cve/CVE-2022-42012 https://access.redhat.com/security/cve/CVE-2022-42898 https://access.redhat.com/security/cve/CVE-2022-43680 https://access.redhat.com/security/cve/CVE-2022-44617 https://access.redhat.com/security/cve/CVE-2022-46285 https://access.redhat.com/security/cve/CVE-2022-47629 https://access.redhat.com/security/cve/CVE-2022-48303 https://access.redhat.com/security/updates/classification/#moderate
Contact:

The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/

Copyright 2023 Red Hat, Inc.

Description:
Red Hat Advanced Cluster Management for Kubernetes 2.7.0 images
Red Hat Advanced Cluster Management for Kubernetes provides the capabilities to address common challenges that administrators and site reliability engineers face as they work across a range of public and private cloud environments. Clusters and applications are all visible and managed from a single console—with security policy built in.
This advisory contains the container images for Red Hat Advanced Cluster Management for Kubernetes, which fix several bugs. See the following Release Notes documentation, which will be updated shortly for this release, for additional details about this release:
https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.7/html/release_notes/
Security updates:
- CVE-2022-41912 crewjam/saml: Authentication bypass when processing SAML responses containing multiple Assertion elements
- CVE-2023-22467 luxon: Inefficient regular expression complexity in luxon.js
- CVE-2022-3517 nodejs-minimatch: ReDoS via the braceExpand function
- CVE-2022-30629 golang: crypto/tls: session tickets lack random ticket_age_add
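Two of the fixes above (CVE-2023-22467 in luxon and CVE-2022-3517 in minimatch) are ReDoS flaws: a regular expression with nested quantifiers backtracks exponentially on a near-matching input. A minimal illustration in Python, using a textbook "evil" pattern rather than the actual expressions from either library:

```python
import re
import time

# Illustrative ReDoS pattern: nested quantifiers over the same character
# class. This is NOT the vulnerable expression from luxon or minimatch,
# just the same class of flaw in its simplest form.
EVIL = re.compile(r"^(a+)+$")

def match_seconds(text: str) -> float:
    """Time a single match attempt against the evil pattern."""
    start = time.perf_counter()
    EVIL.match(text)
    return time.perf_counter() - start

ok = match_seconds("a" * 64)         # matches: effectively instant
bad = match_seconds("a" * 20 + "b")  # near-miss: exponential backtracking
print(f"match: {ok:.6f}s, near-miss: {bad:.6f}s")
```

On a typical interpreter the near-miss takes far longer than the successful match, and each additional `a` in the failing input roughly doubles the time before the engine gives up, which is why a short attacker-controlled string can stall the process.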
Bugs addressed:

- ACM 2.7 images (BZ# 2116459)

Solution:
For Red Hat Advanced Cluster Management for Kubernetes, see the following documentation, which will be updated shortly for this release, for important instructions on installing this release:
https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.7/html-single/install/index#installing
Bugs fixed (https://bugzilla.redhat.com/):

2092793 - CVE-2022-30629 golang: crypto/tls: session tickets lack random ticket_age_add
2116459 - RHACM 2.7.0 images
2134609 - CVE-2022-3517 nodejs-minimatch: ReDoS via the braceExpand function
2149181 - CVE-2022-41912 crewjam/saml: Authentication bypass when processing SAML responses containing multiple Assertion elements
2159959 - CVE-2023-22467 luxon: Inefficient regular expression complexity in luxon.js
JIRA issues fixed (https://issues.jboss.org/):

MTA-103 - MTA 6.0.1 Installation failed with CrashLoop Error for UI Pod
MTA-106 - Implement ability for windup addon image pull policy to be configurable
MTA-122 - MTA is upgrading automatically ignoring 'Manual' setting
MTA-123 - MTA Becomes unusable when running bulk binary analysis
MTA-127 - After upgrading MTA operator from 6.0.0 to 6.0.1 and running analysis, task pods starts failing
MTA-131 - Analysis stops working after MTA upgrade from 6.0.0 to 6.0.1
MTA-36 - Can't disable a proxy if it has an invalid configuration
MTA-44 - Make RWX volumes optional
MTA-49 - Uploaded a local binary when return back to the page the UI should show green bar and correct %
MTA-59 - Getting error 401 if deleting many credentials quickly
MTA-65 - Set windup addon image pull policy to be controlled by the global image_pull_policy parameter
MTA-72 - CVE-2022-46175 mta-ui-container: json5: Prototype Pollution in JSON5 via Parse Method [mta-6]
MTA-73 - CVE-2022-37601 mta-ui-container: loader-utils: prototype pollution in function parseQuery in parseQuery.js [mta-6]
MTA-74 - CVE-2020-36567 mta-windup-addon-container: gin: Unsanitized input in the default logger in github.com/gin-gonic/gin [mta-6]
MTA-76 - CVE-2022-37603 mta-ui-container: loader-utils: Regular expression denial of service [mta-6]
MTA-77 - CVE-2020-36567 mta-hub-container: gin: Unsanitized input in the default logger in github.com/gin-gonic/gin [mta-6]
MTA-80 - CVE-2021-35065 mta-ui-container: glob-parent: Regular Expression Denial of Service [mta-6]
MTA-82 - CVE-2022-42920 org.jboss.windup-windup-cli-parent: Apache-Commons-BCEL: arbitrary bytecode produced via out-of-bounds writing [mta-6.0]
MTA-85 - CVE-2022-24999 mta-ui-container: express: "qs" prototype poisoning causes the hang of the node process [mta-6]
MTA-88 - CVE-2020-36567 mta-admin-addon-container: gin: Unsanitized input in the default logger in github.com/gin-gonic/gin [mta-6]
MTA-92 - CVE-2022-42920 org.jboss.windup.plugin-windup-maven-plugin-parent: Apache-Commons-BCEL: arbitrary bytecode produced via out-of-bounds writing [mta-6.0]
MTA-96 - [UI] Maven -> "Local artifact repository" textbox can be checked and has no tooltip
APPLE-SA-2022-12-13-8 watchOS 9.2
watchOS 9.2 addresses the following issues. Information about the security content is also available at https://support.apple.com/HT213536.
Accounts
Available for: Apple Watch Series 4 and later
Impact: A user may be able to view sensitive user information
Description: This issue was addressed with improved data protection.
CVE-2022-42843: Mickey Jin (@patch1t)

AppleAVD
Available for: Apple Watch Series 4 and later
Impact: Parsing a maliciously crafted video file may lead to kernel code execution
Description: An out-of-bounds write issue was addressed with improved input validation.
CVE-2022-46694: Andrey Labunets and Nikita Tarakanov

AppleMobileFileIntegrity
Available for: Apple Watch Series 4 and later
Impact: An app may be able to bypass Privacy preferences
Description: This issue was addressed by enabling hardened runtime.
CVE-2022-42865: Wojciech Reguła (@_r3ggi) of SecuRing

CoreServices
Available for: Apple Watch Series 4 and later
Impact: An app may be able to bypass Privacy preferences
Description: Multiple issues were addressed by removing the vulnerable code.
CVE-2022-42859: Mickey Jin (@patch1t), Csaba Fitzl (@theevilbit) of Offensive Security

ImageIO
Available for: Apple Watch Series 4 and later
Impact: Processing a maliciously crafted file may lead to arbitrary code execution
Description: An out-of-bounds write issue was addressed with improved input validation.
CVE-2022-46693: Mickey Jin (@patch1t)

IOHIDFamily
Available for: Apple Watch Series 4 and later
Impact: An app may be able to execute arbitrary code with kernel privileges
Description: A race condition was addressed with improved state handling.
CVE-2022-42864: Tommy Muir (@Muirey03)

IOMobileFrameBuffer
Available for: Apple Watch Series 4 and later
Impact: An app may be able to execute arbitrary code with kernel privileges
Description: An out-of-bounds write issue was addressed with improved input validation.
CVE-2022-46690: John Aakerblom (@jaakerblom)

iTunes Store
Available for: Apple Watch Series 4 and later
Impact: A remote user may be able to cause unexpected app termination or arbitrary code execution
Description: An issue existed in the parsing of URLs. This issue was addressed with improved input validation.
CVE-2022-42837: an anonymous researcher

Kernel
Available for: Apple Watch Series 4 and later
Impact: An app may be able to execute arbitrary code with kernel privileges
Description: A race condition was addressed with additional validation.
CVE-2022-46689: Ian Beer of Google Project Zero

Kernel
Available for: Apple Watch Series 4 and later
Impact: A remote user may be able to cause kernel code execution
Description: The issue was addressed with improved memory handling.
CVE-2022-42842: pattern-f (@pattern_F_) of Ant Security Light-Year Lab

Kernel
Available for: Apple Watch Series 4 and later
Impact: An app with root privileges may be able to execute arbitrary code with kernel privileges
Description: The issue was addressed with improved memory handling.
CVE-2022-42845: Adam Doupé of ASU SEFCOM

libxml2
Available for: Apple Watch Series 4 and later
Impact: A remote user may be able to cause unexpected app termination or arbitrary code execution
Description: An integer overflow was addressed through improved input validation.
CVE-2022-40303: Maddie Stone of Google Project Zero

libxml2
Available for: Apple Watch Series 4 and later
Impact: A remote user may be able to cause unexpected app termination or arbitrary code execution
Description: This issue was addressed with improved checks.
CVE-2022-40304: Ned Williamson and Nathan Wachholz of Google Project Zero

Safari
Available for: Apple Watch Series 4 and later
Impact: Visiting a website that frames malicious content may lead to UI spoofing
Description: A spoofing issue existed in the handling of URLs. This issue was addressed with improved input validation.
CVE-2022-46695: KirtiKumar Anandrao Ramchandani

Software Update
Available for: Apple Watch Series 4 and later
Impact: A user may be able to elevate privileges
Description: An access issue existed with privileged API calls. This issue was addressed with additional restrictions.
CVE-2022-42849: Mickey Jin (@patch1t)

Weather
Available for: Apple Watch Series 4 and later
Impact: An app may be able to read sensitive location information
Description: The issue was addressed with improved handling of caches.
CVE-2022-42866: an anonymous researcher

WebKit
Available for: Apple Watch Series 4 and later
Impact: Processing maliciously crafted web content may lead to arbitrary code execution
Description: A use after free issue was addressed with improved memory management.
WebKit Bugzilla: 245521
CVE-2022-42867: Maddie Stone of Google Project Zero

WebKit
Available for: Apple Watch Series 4 and later
Impact: Processing maliciously crafted web content may lead to arbitrary code execution
Description: A memory consumption issue was addressed with improved memory handling.
WebKit Bugzilla: 245466
CVE-2022-46691: an anonymous researcher

WebKit
Available for: Apple Watch Series 4 and later
Impact: Processing maliciously crafted web content may bypass Same Origin Policy
Description: A logic issue was addressed with improved state management.
WebKit Bugzilla: 246783
CVE-2022-46692: KirtiKumar Anandrao Ramchandani

WebKit
Available for: Apple Watch Series 4 and later
Impact: Processing maliciously crafted web content may result in the disclosure of process memory
Description: The issue was addressed with improved memory handling.
CVE-2022-42852: hazbinhotel working with Trend Micro Zero Day Initiative

WebKit
Available for: Apple Watch Series 4 and later
Impact: Processing maliciously crafted web content may lead to arbitrary code execution
Description: A memory corruption issue was addressed with improved input validation.
WebKit Bugzilla: 246942
CVE-2022-46696: Samuel Groß of Google V8 Security
WebKit Bugzilla: 247562
CVE-2022-46700: Samuel Groß of Google V8 Security

WebKit
Available for: Apple Watch Series 4 and later
Impact: Processing maliciously crafted web content may disclose sensitive user information
Description: A logic issue was addressed with improved checks.
CVE-2022-46698: Dohyun Lee (@l33d0hyun) of SSD Secure Disclosure Labs & DNSLab, Korea Univ.

WebKit
Available for: Apple Watch Series 4 and later
Impact: Processing maliciously crafted web content may lead to arbitrary code execution
Description: A memory corruption issue was addressed with improved state management.
WebKit Bugzilla: 247420
CVE-2022-46699: Samuel Groß of Google V8 Security
WebKit Bugzilla: 244622
CVE-2022-42863: an anonymous researcher
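The libxml2 entry above (CVE-2022-40303) is an integer overflow: parsing a multi-gigabyte document with the XML_PARSE_HUGE parser option can wrap internal size counters past 2 GB, producing a negative array offset and typically a crash. A rough sketch of the arithmetic, simulating C signed 32-bit wraparound in Python (libxml2 itself is C; the names below are illustrative, not from its source):

```python
# Simulate the overflow class behind CVE-2022-40303: a signed 32-bit
# byte counter wraps negative once more than INT_MAX bytes are consumed.
INT32_MAX = 2**31 - 1

def add_i32(a: int, b: int) -> int:
    """Add two integers with C-style signed 32-bit wraparound."""
    total = (a + b) & 0xFFFFFFFF
    return total - 2**32 if total > INT32_MAX else total

consumed = INT32_MAX - 15            # counter just below the 2 GiB limit
consumed = add_i32(consumed, 4096)   # parse one more 4 KiB chunk
print(consumed)                      # negative: an index ~2 GiB before the buffer
```

An index that wraps negative like this explains the reported behavior of "an attempt to access an array at a negative 2GB offset", usually ending in a segmentation fault rather than controlled code execution.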
Additional recognition
Kernel
We would like to acknowledge Zweig of Kunlun Lab for their assistance.

Safari Extensions
We would like to acknowledge Oliver Dunk and Christian R. of 1Password for their assistance.

WebKit
We would like to acknowledge an anonymous researcher and scarlet for their assistance.
Instructions on how to update your Apple Watch software are available at https://support.apple.com/kb/HT204641. To check the version on your Apple Watch, open the Apple Watch app on your iPhone and select "My Watch > General > About". Alternatively, on your watch, select "My Watch > General > About". All information is also posted on the Apple Security Updates web site: https://support.apple.com/en-us/HT201222.
This message is signed with Apple's Product Security PGP key, and details are available at: https://www.apple.com/support/security/pgp/
Bugs fixed (https://bugzilla.redhat.com/):

2156729 - CVE-2021-4238 goutils: RandomAlphaNumeric and CryptoRandomAlphaNumeric are not as random as they should be
2163037 - CVE-2022-3064 go-yaml: Improve heuristics preventing CPU/memory abuse by parsing malicious or large YAML documents
2167819 - CVE-2023-23947 ArgoCD: Users with any cluster secret update access may update out-of-bounds cluster secrets
Show details on source website{ "@context": { "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#", "affected_products": { "@id": "https://www.variotdbs.pl/ref/affected_products" }, "configurations": { "@id": "https://www.variotdbs.pl/ref/configurations" }, "credits": { "@id": "https://www.variotdbs.pl/ref/credits" }, "cvss": { "@id": "https://www.variotdbs.pl/ref/cvss/" }, "description": { "@id": "https://www.variotdbs.pl/ref/description/" }, "exploit_availability": { "@id": "https://www.variotdbs.pl/ref/exploit_availability/" }, "external_ids": { "@id": "https://www.variotdbs.pl/ref/external_ids/" }, "iot": { "@id": "https://www.variotdbs.pl/ref/iot/" }, "iot_taxonomy": { "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/" }, "patch": { "@id": "https://www.variotdbs.pl/ref/patch/" }, "problemtype_data": { "@id": "https://www.variotdbs.pl/ref/problemtype_data/" }, "references": { "@id": "https://www.variotdbs.pl/ref/references/" }, "sources": { "@id": "https://www.variotdbs.pl/ref/sources/" }, "sources_release_date": { "@id": "https://www.variotdbs.pl/ref/sources_release_date/" }, "sources_update_date": { "@id": "https://www.variotdbs.pl/ref/sources_update_date/" }, "threat_type": { "@id": "https://www.variotdbs.pl/ref/threat_type/" }, "title": { "@id": "https://www.variotdbs.pl/ref/title/" }, "type": { "@id": "https://www.variotdbs.pl/ref/type/" } }, "@id": "https://www.variotdbs.pl/vuln/VAR-202210-0997", "affected_products": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/affected_products#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "model": "clustered data ontap", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "manageability sdk", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "h700s", "scope": "eq", "trust": 1.0, "vendor": 
"netapp", "version": null }, { "model": "libxml2", "scope": "lt", "trust": 1.0, "vendor": "xmlsoft", "version": "2.10.3" }, { "model": "clustered data ontap antivirus connector", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "iphone os", "scope": "lt", "trust": 1.0, "vendor": "apple", "version": "15.7.2" }, { "model": "watchos", "scope": "lt", "trust": 1.0, "vendor": "apple", "version": "9.2" }, { "model": "macos", "scope": "lt", "trust": 1.0, "vendor": "apple", "version": "12.6.2" }, { "model": "active iq unified manager", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "h300s", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "tvos", "scope": "lt", "trust": 1.0, "vendor": "apple", "version": "16.2" }, { "model": "ontap select deploy administration utility", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "h410s", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "ipados", "scope": "lt", "trust": 1.0, "vendor": "apple", "version": "15.7.2" }, { "model": "h500s", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "macos", "scope": "lt", "trust": 1.0, "vendor": "apple", "version": "11.7.2" }, { "model": "snapmanager", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "h410c", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "macos", "scope": "gte", "trust": 1.0, "vendor": "apple", "version": "11.0" }, { "model": "macos", "scope": "gte", "trust": 1.0, "vendor": "apple", "version": "12.0" }, { "model": "active iq unified manager", "scope": null, "trust": 0.8, "vendor": "netapp", "version": null }, { "model": "snapmanager", "scope": null, "trust": 0.8, "vendor": "netapp", "version": null }, { "model": "ipados", "scope": null, "trust": 0.8, "vendor": "\u30a2\u30c3\u30d7\u30eb", "version": null }, { "model": "macos", "scope": null, 
"trust": 0.8, "vendor": "\u30a2\u30c3\u30d7\u30eb", "version": null }, { "model": "ios", "scope": null, "trust": 0.8, "vendor": "\u30a2\u30c3\u30d7\u30eb", "version": null }, { "model": "h410c", "scope": null, "trust": 0.8, "vendor": "netapp", "version": null }, { "model": "h410s", "scope": null, "trust": 0.8, "vendor": "netapp", "version": null }, { "model": "watchos", "scope": "eq", "trust": 0.8, "vendor": "\u30a2\u30c3\u30d7\u30eb", "version": "9.2" }, { "model": "ontap select deploy administration utility", "scope": null, "trust": 0.8, "vendor": "netapp", "version": null }, { "model": "h500s", "scope": null, "trust": 0.8, "vendor": "netapp", "version": null }, { "model": "manageability sdk", "scope": null, "trust": 0.8, "vendor": "netapp", "version": null }, { "model": "h300s", "scope": null, "trust": 0.8, "vendor": "netapp", "version": null }, { "model": "h700s", "scope": null, "trust": 0.8, "vendor": "netapp", "version": null }, { "model": "clustered data ontap antivirus connector", "scope": null, "trust": 0.8, "vendor": "netapp", "version": null }, { "model": "ontap", "scope": null, "trust": 0.8, "vendor": "netapp", "version": null }, { "model": "libxml2", "scope": null, "trust": 0.8, "vendor": "xmlsoft", "version": null }, { "model": "tvos", "scope": null, "trust": 0.8, "vendor": "\u30a2\u30c3\u30d7\u30eb", "version": null } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2022-023015" }, { "db": "NVD", "id": "CVE-2022-40303" } ] }, "configurations": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/configurations#", "children": { "@container": "@list" }, "cpe_match": { "@container": "@list" }, "data": { "@container": "@list" }, "nodes": { "@container": "@list" } }, "data": [ { "CVE_data_version": "4.0", "nodes": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:xmlsoft:libxml2:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "2.10.3", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": 
"cpe:2.3:a:netapp:ontap_select_deploy_administration_utility:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:clustered_data_ontap:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:clustered_data_ontap_antivirus_connector:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:active_iq_unified_manager:-:*:*:*:*:vsphere:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:snapmanager:-:*:*:*:*:hyper-v:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:netapp_manageability_sdk:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:apple:macos:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "11.7.2", "versionStartIncluding": "11.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:apple:watchos:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "9.2", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:apple:tvos:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "16.2", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:apple:ipados:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "15.7.2", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:apple:iphone_os:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "15.7.2", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:apple:macos:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "12.6.2", "versionStartIncluding": "12.0", "vulnerable": true } ], "operator": "OR" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h300s_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:h300s:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], 
"cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h500s_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:h500s:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h700s_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:h700s:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h410s_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:h410s:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h410c_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:h410c:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" } ] } ], "sources": [ { "db": "NVD", "id": "CVE-2022-40303" } ] }, "credits": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/credits#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Red Hat", "sources": [ { "db": "PACKETSTORM", "id": "170956" }, { "db": "PACKETSTORM", "id": "170955" }, { "db": "PACKETSTORM", "id": "171310" }, { "db": "PACKETSTORM", "id": "170899" }, { "db": "PACKETSTORM", "id": "171144" }, { "db": "PACKETSTORM", "id": "171040" } ], "trust": 0.6 }, "cve": "CVE-2022-40303", "cvss": 
{ "@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2" }, "cvssV3": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, "severity": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": "https://www.variotdbs.pl/ref/cvss/severity" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "cvssV2": [], "cvssV3": [ { "attackComplexity": "LOW", "attackVector": "NETWORK", "author": "NVD", "availabilityImpact": "HIGH", "baseScore": 7.5, "baseSeverity": "HIGH", "confidentialityImpact": "NONE", "exploitabilityScore": 3.9, "impactScore": 3.6, "integrityImpact": "NONE", "privilegesRequired": "NONE", "scope": "UNCHANGED", "trust": 1.0, "userInteraction": "NONE", "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H", "version": "3.1" }, { "attackComplexity": "Low", "attackVector": "Network", "author": "NVD", "availabilityImpact": "High", "baseScore": 7.5, "baseSeverity": "High", "confidentialityImpact": "None", "exploitabilityScore": null, "id": "CVE-2022-40303", "impactScore": null, "integrityImpact": "None", "privilegesRequired": "None", "scope": "Unchanged", "trust": 0.8, "userInteraction": "None", "vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H", "version": "3.0" } ], "severity": [ { "author": "NVD", "id": "CVE-2022-40303", "trust": 1.8, "value": "HIGH" } ] } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2022-023015" }, { "db": "NVD", "id": "CVE-2022-40303" } ] }, "description": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "An issue was 
discovered in libxml2 before 2.10.3. When parsing a multi-gigabyte XML document with the XML_PARSE_HUGE parser option enabled, several integer counters can overflow. This results in an attempt to access an array at a negative 2GB offset, typically leading to a segmentation fault. xmlsoft.org of libxml2 Products from other vendors contain integer overflow vulnerabilities.Service operation interruption (DoS) It may be in a state. libxml2 is an open source library for parsing XML documents. It is written in C language and can be called by many languages, such as C language, C++, XSH. Currently there is no information about this vulnerability, please keep an eye on CNNVD or vendor announcements. \n\nCVE-2022-40304\n\n Ned Williamson and Nathan Wachholz discovered a vulnerability when\n handling detection of entity reference cycles, which may result in\n corrupted dictionary entries. This flaw may lead to logic errors,\n including memory errors like double free flaws. \n\nFor the stable distribution (bullseye), these problems have been fixed in\nversion 2.9.10+dfsg-6.7+deb11u3. \n\nWe recommend that you upgrade your libxml2 packages. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\nGentoo Linux Security Advisory GLSA 202210-39\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n https://security.gentoo.org/\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\n Severity: High\n Title: libxml2: Multiple Vulnerabilities\n Date: October 31, 2022\n Bugs: #877149\n ID: 202210-39\n\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\nSynopsis\n========\n\nMultiple vulnerabilities have been found in libxml2, the worst of which\ncould result in arbitrary code execution. \n\nBackground\n==========\n\nlibxml2 is the XML C parser and toolkit developed for the GNOME project. 
\n\nAffected packages\n=================\n\n -------------------------------------------------------------------\n Package / Vulnerable / Unaffected\n -------------------------------------------------------------------\n 1 dev-libs/libxml2 \u003c 2.10.3 \u003e= 2.10.3\n\nDescription\n===========\n\nMultiple vulnerabilities have been discovered in libxml2. Please review\nthe CVE identifiers referenced below for details. \n\nImpact\n======\n\nPlease review the referenced CVE identifiers for details. \n\nWorkaround\n==========\n\nThere is no known workaround at this time. \n\nResolution\n==========\n\nAll libxml2 users should upgrade to the latest version:\n\n # emerge --sync\n # emerge --ask --oneshot --verbose \"\u003e=dev-libs/libxml2-2.10.3\"\n\nReferences\n==========\n\n[ 1 ] CVE-2022-40303\n https://nvd.nist.gov/vuln/detail/CVE-2022-40303\n[ 2 ] CVE-2022-40304\n https://nvd.nist.gov/vuln/detail/CVE-2022-40304\n\nAvailability\n============\n\nThis GLSA and any updates to it are available for viewing at\nthe Gentoo Security Website:\n\n https://security.gentoo.org/glsa/202210-39\n\nConcerns?\n=========\n\nSecurity is a primary focus of Gentoo Linux and ensuring the\nconfidentiality and security of our users\u0027 machines is of utmost\nimportance to us. \n\nLicense\n=======\n\nCopyright 2022 Gentoo Foundation, Inc; referenced text\nbelongs to its owner(s). \n\nThe contents of this document are licensed under the\nCreative Commons - Attribution / Share Alike license. \n\nhttps://creativecommons.org/licenses/by-sa/2.5\n. Description:\n\nVersion 1.27.0 of the OpenShift Serverless Operator is supported on Red Hat\nOpenShift Container Platform versions 4.8, 4.9, 4.10, 4.11 and 4.12. \n\nThis release includes security and bug fixes, and enhancements. Bugs fixed (https://bugzilla.redhat.com/):\n\n2156263 - CVE-2022-46175 json5: Prototype Pollution in JSON5 via Parse Method\n2156324 - CVE-2021-35065 glob-parent: Regular Expression Denial of Service\n\n5. 
JIRA issues fixed (https://issues.jboss.org/):\n\nLOG-3397 - [Developer Console] \"parse error\" when testing with normal user\nLOG-3441 - [Administrator Console] Seeing \"parse error\" while using Severity filter for cluster view user\nLOG-3463 - [release-5.6] ElasticsearchError error=\"400 - Rejected by Elasticsearch\" when adding some labels in application namespaces\nLOG-3477 - [Logging 5.6.0]CLF raises \u0027invalid: unrecognized outputs: [default]\u0027 after adding `default` to outputRefs. \nLOG-3494 - [release-5.6] After querying logs in loki, compactor pod raises many TLS handshake error if retention policy is enabled. \nLOG-3496 - [release-5.6] LokiStack status is still \u0027Pending\u0027 when all loki components are running\nLOG-3510 - [release-5.6] TLS errors on Loki controller pod due to bad certificate\n\n6. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n=====================================================================\n Red Hat Security Advisory\n\nSynopsis: Moderate: OpenShift API for Data Protection (OADP) 1.1.2 security and bug fix update\nAdvisory ID: RHSA-2023:1174-01\nProduct: OpenShift API for Data Protection\nAdvisory URL: https://access.redhat.com/errata/RHSA-2023:1174\nIssue date: 2023-03-09\nCVE Names: CVE-2021-46848 CVE-2022-1122 CVE-2022-1304 \n CVE-2022-2056 CVE-2022-2057 CVE-2022-2058 \n CVE-2022-2519 CVE-2022-2520 CVE-2022-2521 \n CVE-2022-2867 CVE-2022-2868 CVE-2022-2869 \n CVE-2022-2879 CVE-2022-2880 CVE-2022-2953 \n CVE-2022-4415 CVE-2022-4883 CVE-2022-22624 \n CVE-2022-22628 CVE-2022-22629 CVE-2022-22662 \n CVE-2022-25308 CVE-2022-25309 CVE-2022-25310 \n CVE-2022-26700 CVE-2022-26709 CVE-2022-26710 \n CVE-2022-26716 CVE-2022-26717 CVE-2022-26719 \n CVE-2022-27404 CVE-2022-27405 CVE-2022-27406 \n CVE-2022-30293 CVE-2022-35737 CVE-2022-40303 \n CVE-2022-40304 CVE-2022-41715 CVE-2022-41717 \n CVE-2022-42010 CVE-2022-42011 CVE-2022-42012 \n CVE-2022-42898 CVE-2022-43680 CVE-2022-44617 \n CVE-2022-46285 CVE-2022-47629 
CVE-2022-48303 \n=====================================================================\n\n1. Summary:\n\nOpenShift API for Data Protection (OADP) 1.1.2 is now available. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Description:\n\nOpenShift API for Data Protection (OADP) enables you to back up and restore\napplication resources, persistent volume data, and internal container\nimages to external backup storage. OADP enables both file system-based and\nsnapshot-based backups for persistent volumes. \n\nSecurity Fix(es) from Bugzilla:\n\n* golang: archive/tar: unbounded memory consumption when reading headers\n(CVE-2022-2879)\n\n* golang: net/http/httputil: ReverseProxy should not forward unparseable\nquery parameters (CVE-2022-2880)\n\n* golang: regexp/syntax: limit memory used by parsing regexps\n(CVE-2022-41715)\n\n* golang: net/http: An attacker can cause excessive memory growth in a Go\nserver accepting HTTP/2 requests (CVE-2022-41717)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, and other related information, refer to the CVE page(s) listed in\nthe References section. \n\n3. Solution:\n\nBefore applying this update, make sure all previously released errata\nrelevant to your system have been applied. \n\nFor details on how to apply this update, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n4. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n2132867 - CVE-2022-2879 golang: archive/tar: unbounded memory consumption when reading headers\n2132868 - CVE-2022-2880 golang: net/http/httputil: ReverseProxy should not forward unparseable query parameters\n2132872 - CVE-2022-41715 golang: regexp/syntax: limit memory used by parsing regexps\n2161274 - CVE-2022-41717 golang: net/http: An attacker can cause excessive memory growth in a Go server accepting HTTP/2 requests\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nOADP-1056 - DPA fails validation if multiple BSLs have the same provider\nOADP-1150 - Handle docker env config changes in the oadp-operator\nOADP-1217 - update velero + restic to 1.9.5\nOADP-1256 - Backup stays in progress status after restic pod is restarted due to OOM killed\nOADP-1289 - Restore partially fails with error \"Secrets \\\"deployer-token-rrjqx\\\" not found\"\nOADP-290 - Remove creation/usage of velero-privileged SCC\n\n6. References:\n\nhttps://access.redhat.com/security/cve/CVE-2021-46848\nhttps://access.redhat.com/security/cve/CVE-2022-1122\nhttps://access.redhat.com/security/cve/CVE-2022-1304\nhttps://access.redhat.com/security/cve/CVE-2022-2056\nhttps://access.redhat.com/security/cve/CVE-2022-2057\nhttps://access.redhat.com/security/cve/CVE-2022-2058\nhttps://access.redhat.com/security/cve/CVE-2022-2519\nhttps://access.redhat.com/security/cve/CVE-2022-2520\nhttps://access.redhat.com/security/cve/CVE-2022-2521\nhttps://access.redhat.com/security/cve/CVE-2022-2867\nhttps://access.redhat.com/security/cve/CVE-2022-2868\nhttps://access.redhat.com/security/cve/CVE-2022-2869\nhttps://access.redhat.com/security/cve/CVE-2022-2879\nhttps://access.redhat.com/security/cve/CVE-2022-2880\nhttps://access.redhat.com/security/cve/CVE-2022-2953\nhttps://access.redhat.com/security/cve/CVE-2022-4415\nhttps://access.redhat.com/security/cve/CVE-2022-4883\nhttps://access.redhat.com/security/cve/CVE-2022-22624\nhttps://access.redhat.com/security/cve/CVE
-2022-22628\nhttps://access.redhat.com/security/cve/CVE-2022-22629\nhttps://access.redhat.com/security/cve/CVE-2022-22662\nhttps://access.redhat.com/security/cve/CVE-2022-25308\nhttps://access.redhat.com/security/cve/CVE-2022-25309\nhttps://access.redhat.com/security/cve/CVE-2022-25310\nhttps://access.redhat.com/security/cve/CVE-2022-26700\nhttps://access.redhat.com/security/cve/CVE-2022-26709\nhttps://access.redhat.com/security/cve/CVE-2022-26710\nhttps://access.redhat.com/security/cve/CVE-2022-26716\nhttps://access.redhat.com/security/cve/CVE-2022-26717\nhttps://access.redhat.com/security/cve/CVE-2022-26719\nhttps://access.redhat.com/security/cve/CVE-2022-27404\nhttps://access.redhat.com/security/cve/CVE-2022-27405\nhttps://access.redhat.com/security/cve/CVE-2022-27406\nhttps://access.redhat.com/security/cve/CVE-2022-30293\nhttps://access.redhat.com/security/cve/CVE-2022-35737\nhttps://access.redhat.com/security/cve/CVE-2022-40303\nhttps://access.redhat.com/security/cve/CVE-2022-40304\nhttps://access.redhat.com/security/cve/CVE-2022-41715\nhttps://access.redhat.com/security/cve/CVE-2022-41717\nhttps://access.redhat.com/security/cve/CVE-2022-42010\nhttps://access.redhat.com/security/cve/CVE-2022-42011\nhttps://access.redhat.com/security/cve/CVE-2022-42012\nhttps://access.redhat.com/security/cve/CVE-2022-42898\nhttps://access.redhat.com/security/cve/CVE-2022-43680\nhttps://access.redhat.com/security/cve/CVE-2022-44617\nhttps://access.redhat.com/security/cve/CVE-2022-46285\nhttps://access.redhat.com/security/cve/CVE-2022-47629\nhttps://access.redhat.com/security/cve/CVE-2022-48303\nhttps://access.redhat.com/security/updates/classification/#moderate\n\n7. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2023 Red Hat, Inc. 
Description:\n\nRed Hat Advanced Cluster Management for Kubernetes 2.7.0 images\n\nRed Hat Advanced Cluster Management for Kubernetes provides the\ncapabilities to address common challenges that administrators and site\nreliability engineers face as they work across a range of public and\nprivate cloud environments. Clusters and applications are all visible and\nmanaged from a single console\u2014with security policy built in. \n\nThis advisory contains the container images for Red Hat Advanced Cluster\nManagement for Kubernetes, which fix several bugs. See the following\nRelease Notes documentation, which will be updated shortly for this\nrelease, for additional details about this release:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.7/html/release_notes/\n\nSecurity updates:\n\n* CVE-2022-41912 crewjam/saml: Authentication bypass when processing SAML\nresponses containing multiple Assertion elements\n* CVE-2023-22467 luxon: Inefficient regular expression complexity in\nluxon.js\n* CVE-2022-3517 nodejs-minimatch: ReDoS via the braceExpand function\n* CVE-2022-30629 golang: crypto/tls: session tickets lack random\nticket_age_add\n\nBug addressed:\n\n* ACM 2.7 images (BZ# 2116459)\n\n3. Solution:\n\nFor Red Hat Advanced Cluster Management for Kubernetes, see the following\ndocumentation, which will be updated shortly for this release, for\nimportant\ninstructions on installing this release:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.7/html-single/install/index#installing\n\n4. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n2092793 - CVE-2022-30629 golang: crypto/tls: session tickets lack random ticket_age_add\n2116459 - RHACM 2.7.0 images\n2134609 - CVE-2022-3517 nodejs-minimatch: ReDoS via the braceExpand function\n2149181 - CVE-2022-41912 crewjam/saml: Authentication bypass when processing SAML responses containing multiple Assertion elements\n2159959 - CVE-2023-22467 luxon: Inefficient regular expression complexity in luxon.js\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nMTA-103 - MTA 6.0.1 Installation failed with CrashLoop Error for UI Pod\nMTA-106 - Implement ability for windup addon image pull policy to be configurable\nMTA-122 - MTA is upgrading automatically ignoring \u0027Manual\u0027 setting\nMTA-123 - MTA Becomes unusable when running bulk binary analysis\nMTA-127 - After upgrading MTA operator from 6.0.0 to 6.0.1 and running analysis , task pods starts failing \nMTA-131 - Analysis stops working after MTA upgrade from 6.0.0 to 6.0.1\nMTA-36 - Can\u0027t disable a proxy if it has an invalid configuration\nMTA-44 - Make RWX volumes optional. 
\nMTA-49 - Uploaded a local binary when return back to the page the UI should show green bar and correct %\nMTA-59 - Getting error 401 if deleting many credentials quickly\nMTA-65 - Set windup addon image pull policy to be controlled by the global image_pull_policy parameter\nMTA-72 - CVE-2022-46175 mta-ui-container: json5: Prototype Pollution in JSON5 via Parse Method [mta-6]\nMTA-73 - CVE-2022-37601 mta-ui-container: loader-utils: prototype pollution in function parseQuery in parseQuery.js [mta-6]\nMTA-74 - CVE-2020-36567 mta-windup-addon-container: gin: Unsanitized input in the default logger in github.com/gin-gonic/gin [mta-6]\nMTA-76 - CVE-2022-37603 mta-ui-container: loader-utils:Regular expression denial of service [mta-6]\nMTA-77 - CVE-2020-36567 mta-hub-container: gin: Unsanitized input in the default logger in github.com/gin-gonic/gin [mta-6]\nMTA-80 - CVE-2021-35065 mta-ui-container: glob-parent: Regular Expression Denial of Service [mta-6]\nMTA-82 - CVE-2022-42920 org.jboss.windup-windup-cli-parent: Apache-Commons-BCEL: arbitrary bytecode produced via out-of-bounds writing [mta-6.0]\nMTA-85 - CVE-2022-24999 mta-ui-container: express: \"qs\" prototype poisoning causes the hang of the node process [mta-6]\nMTA-88 - CVE-2020-36567 mta-admin-addon-container: gin: Unsanitized input in the default logger in github.com/gin-gonic/gin [mta-6]\nMTA-92 - CVE-2022-42920 org.jboss.windup.plugin-windup-maven-plugin-parent: Apache-Commons-BCEL: arbitrary bytecode produced via out-of-bounds writing [mta-6.0]\nMTA-96 - [UI] Maven -\u003e \"Local artifact repository\" textbox can be checked and has no tooltip\n\n6. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\nAPPLE-SA-2022-12-13-8 watchOS 9.2\n\nwatchOS 9.2 addresses the following issues. \nInformation about the security content is also available at\nhttps://support.apple.com/HT213536. 
\n\nAccounts\nAvailable for: Apple Watch Series 4 and later\nImpact: A user may be able to view sensitive user information\nDescription: This issue was addressed with improved data protection. \nCVE-2022-42843: Mickey Jin (@patch1t)\n\nAppleAVD\nAvailable for: Apple Watch Series 4 and later\nImpact: Parsing a maliciously crafted video file may lead to kernel\ncode execution\nDescription: An out-of-bounds write issue was addressed with improved\ninput validation. \nCVE-2022-46694: Andrey Labunets and Nikita Tarakanov\n\nAppleMobileFileIntegrity\nAvailable for: Apple Watch Series 4 and later\nImpact: An app may be able to bypass Privacy preferences\nDescription: This issue was addressed by enabling hardened runtime. \nCVE-2022-42865: Wojciech Regu\u0142a (@_r3ggi) of SecuRing\n\nCoreServices\nAvailable for: Apple Watch Series 4 and later\nImpact: An app may be able to bypass Privacy preferences\nDescription: Multiple issues were addressed by removing the\nvulnerable code. \nCVE-2022-42859: Mickey Jin (@patch1t), Csaba Fitzl (@theevilbit) of\nOffensive Security\n\nImageIO\nAvailable for: Apple Watch Series 4 and later\nImpact: Processing a maliciously crafted file may lead to arbitrary\ncode execution\nDescription: An out-of-bounds write issue was addressed with improved\ninput validation. \nCVE-2022-46693: Mickey Jin (@patch1t)\n\nIOHIDFamily\nAvailable for: Apple Watch Series 4 and later\nImpact: An app may be able to execute arbitrary code with kernel\nprivileges\nDescription: A race condition was addressed with improved state\nhandling. \nCVE-2022-42864: Tommy Muir (@Muirey03)\n\nIOMobileFrameBuffer\nAvailable for: Apple Watch Series 4 and later\nImpact: An app may be able to execute arbitrary code with kernel\nprivileges\nDescription: An out-of-bounds write issue was addressed with improved\ninput validation. 
\nCVE-2022-46690: John Aakerblom (@jaakerblom)\n\niTunes Store\nAvailable for: Apple Watch Series 4 and later\nImpact: A remote user may be able to cause unexpected app termination\nor arbitrary code execution\nDescription: An issue existed in the parsing of URLs. This issue was\naddressed with improved input validation. \nCVE-2022-42837: an anonymous researcher\n\nKernel\nAvailable for: Apple Watch Series 4 and later\nImpact: An app may be able to execute arbitrary code with kernel\nprivileges\nDescription: A race condition was addressed with additional\nvalidation. \nCVE-2022-46689: Ian Beer of Google Project Zero\n\nKernel\nAvailable for: Apple Watch Series 4 and later\nImpact: A remote user may be able to cause kernel code execution\nDescription: The issue was addressed with improved memory handling. \nCVE-2022-42842: pattern-f (@pattern_F_) of Ant Security Light-Year\nLab\n\nKernel\nAvailable for: Apple Watch Series 4 and later\nImpact: An app with root privileges may be able to execute arbitrary\ncode with kernel privileges\nDescription: The issue was addressed with improved memory handling. \nCVE-2022-42845: Adam Doup\u00e9 of ASU SEFCOM\n\nlibxml2\nAvailable for: Apple Watch Series 4 and later\nImpact: A remote user may be able to cause unexpected app termination\nor arbitrary code execution\nDescription: An integer overflow was addressed through improved input\nvalidation. \nCVE-2022-40303: Maddie Stone of Google Project Zero\n\nlibxml2\nAvailable for: Apple Watch Series 4 and later\nImpact: A remote user may be able to cause unexpected app termination\nor arbitrary code execution\nDescription: This issue was addressed with improved checks. \nCVE-2022-40304: Ned Williamson and Nathan Wachholz of Google Project\nZero\n\nSafari\nAvailable for: Apple Watch Series 4 and later\nImpact: Visiting a website that frames malicious content may lead to\nUI spoofing\nDescription: A spoofing issue existed in the handling of URLs. 
This\nissue was addressed with improved input validation. \nCVE-2022-46695: KirtiKumar Anandrao Ramchandani\n\nSoftware Update\nAvailable for: Apple Watch Series 4 and later\nImpact: A user may be able to elevate privileges\nDescription: An access issue existed with privileged API calls. This\nissue was addressed with additional restrictions. \nCVE-2022-42849: Mickey Jin (@patch1t)\n\nWeather\nAvailable for: Apple Watch Series 4 and later\nImpact: An app may be able to read sensitive location information\nDescription: The issue was addressed with improved handling of\ncaches. \nCVE-2022-42866: an anonymous researcher\n\nWebKit\nAvailable for: Apple Watch Series 4 and later\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: A use after free issue was addressed with improved\nmemory management. \nWebKit Bugzilla: 245521\nCVE-2022-42867: Maddie Stone of Google Project Zero\n\nWebKit\nAvailable for: Apple Watch Series 4 and later\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: A memory consumption issue was addressed with improved\nmemory handling. \nWebKit Bugzilla: 245466\nCVE-2022-46691: an anonymous researcher\n\nWebKit\nAvailable for: Apple Watch Series 4 and later\nImpact: Processing maliciously crafted web content may bypass Same\nOrigin Policy\nDescription: A logic issue was addressed with improved state\nmanagement. \nWebKit Bugzilla: 246783\nCVE-2022-46692: KirtiKumar Anandrao Ramchandani\n\nWebKit\nAvailable for: Apple Watch Series 4 and later\nImpact: Processing maliciously crafted web content may result in the\ndisclosure of process memory\nDescription: The issue was addressed with improved memory handling. 
\nCVE-2022-42852: hazbinhotel working with Trend Micro Zero Day\nInitiative\n\nWebKit\nAvailable for: Apple Watch Series 4 and later\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: A memory corruption issue was addressed with improved\ninput validation. \nWebKit Bugzilla: 246942\nCVE-2022-46696: Samuel Gro\u00df of Google V8 Security\nWebKit Bugzilla: 247562\nCVE-2022-46700: Samuel Gro\u00df of Google V8 Security\n\nWebKit\nAvailable for: Apple Watch Series 4 and later\nImpact: Processing maliciously crafted web content may disclose\nsensitive user information\nDescription: A logic issue was addressed with improved checks. \nCVE-2022-46698: Dohyun Lee (@l33d0hyun) of SSD Secure Disclosure Labs\n\u0026 DNSLab, Korea Univ. \n\nWebKit\nAvailable for: Apple Watch Series 4 and later\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: A memory corruption issue was addressed with improved\nstate management. \nWebKit Bugzilla: 247420\nCVE-2022-46699: Samuel Gro\u00df of Google V8 Security\nWebKit Bugzilla: 244622\nCVE-2022-42863: an anonymous researcher\n\nAdditional recognition\n\nKernel\nWe would like to acknowledge Zweig of Kunlun Lab for their\nassistance. \n\nSafari Extensions\nWe would like to acknowledge Oliver Dunk and Christian R. of\n1Password for their assistance. \n\nWebKit\nWe would like to acknowledge an anonymous researcher and scarlet for\ntheir assistance. \n\nInstructions on how to update your Apple Watch software are available\nat https://support.apple.com/kb/HT204641 To check the version on\nyour Apple Watch, open the Apple Watch app on your iPhone and select\n\"My Watch \u003e General \u003e About\". Alternatively, on your watch, select\n\"My Watch \u003e General \u003e About\". \nAll information is also posted on the Apple Security Updates\nweb site: https://support.apple.com/en-us/HT201222. 
\n\nThis message is signed with Apple\u0027s Product Security PGP key,\nand details are available at:\nhttps://www.apple.com/support/security/pgp/\n-----BEGIN PGP SIGNATURE-----\n\niQIzBAEBCAAdFiEEBP+4DupqR5Sgt1DB4RjMIDkeNxkFAmOZFX4ACgkQ4RjMIDke\nNxlyKA//eeU/txeqNxHM7JQE6xFrlla1tinQYMjbLhMgzdTbKpPjX8aHVqFfLB/Q\n5nH+NqrGs4HQwNQJ6fSiBIId0th71mgX7W3Noa1apzFh7Okl6IehczkAFB9OH7ve\nvnwiEECGU0hUNmbIi0s9HuuBo6eSNPFsJt0Jqn8ovV+F9bc+ftl/IRv6q2vg3rl3\nDNag62BCmCN4uXmqoJ4CKg7cNbddvma0bDbB1yYujxdmFwm4JGN6aittXE3WtPK2\nGH2/UxdZll8FR7Zegh1ziUcTaLR4dwHlXRFgc6WC8hqx6T8imNh1heAPwzhT+Iag\npiObDoMs7UYFKF/eQ8LUcl4hX8IOdLFO5I+BcvCzOcKqHutPqbE8QRU9yqjcQlsJ\nsOV7GT9W9J+QhibpIJbLVkkQp5djPZ8mLP0OKiRN1quEDWMrquPdM+r9ftJwEIki\nPLL/ur9c7geXCJCLzglMSMkNcoGZk77qzfJuPdoE0lD6zjdvBHalF5j8S0a1+9gi\nex3zU1I+ixqg7CvLNfkSjLcO9KOoPEFHnqEFrrO17QWWyraugrPgV0dMYArGRBpA\nFofYP6bXLv8eSUNuyOoQxF6kS4ChYgLUabl2NYqop9LoRWAtDAclTiabuvDJPfqA\nW09wxdhbpp2saxt8LlQjffzOmHJST6oHhHZiFiFswRM0q0nue6I=\n=DltD\n-----END PGP SIGNATURE-----\n\n\n. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n2156729 - CVE-2021-4238 goutils: RandomAlphaNumeric and CryptoRandomAlphaNumeric are not as random as they should be\n2163037 - CVE-2022-3064 go-yaml: Improve heuristics preventing CPU/memory abuse by parsing malicious or large YAML documents\n2167819 - CVE-2023-23947 ArgoCD: Users with any cluster secret update access may update out-of-bounds cluster secrets\n\n5", "sources": [ { "db": "NVD", "id": "CVE-2022-40303" }, { "db": "JVNDB", "id": "JVNDB-2022-023015" }, { "db": "VULHUB", "id": "VHN-429429" }, { "db": "PACKETSTORM", "id": "169732" }, { "db": "PACKETSTORM", "id": "169620" }, { "db": "PACKETSTORM", "id": "170956" }, { "db": "PACKETSTORM", "id": "170955" }, { "db": "PACKETSTORM", "id": "171310" }, { "db": "PACKETSTORM", "id": "170899" }, { "db": "PACKETSTORM", "id": "171144" }, { "db": "PACKETSTORM", "id": "170318" }, { "db": "PACKETSTORM", "id": "171040" } ], "trust": 2.52 }, "exploit_availability": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/exploit_availability#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "reference": "https://www.scap.org.cn/vuln/vhn-429429", "trust": 0.1, "type": "unknown" } ], "sources": [ { "db": "VULHUB", "id": "VHN-429429" } ] }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "NVD", "id": "CVE-2022-40303", "trust": 3.6 }, { "db": "JVN", "id": "JVNVU93250330", "trust": 0.8 }, { "db": "JVN", "id": "JVNVU99836374", "trust": 0.8 }, { "db": "ICS CERT", "id": "ICSA-24-102-08", "trust": 0.8 }, { "db": "ICS CERT", "id": "ICSA-24-165-04", "trust": 0.8 }, { "db": "ICS CERT", "id": "ICSA-24-165-10", "trust": 0.8 }, { "db": "ICS CERT", "id": "ICSA-24-165-06", "trust": 0.8 }, { 
"db": "JVNDB", "id": "JVNDB-2022-023015", "trust": 0.8 }, { "db": "PACKETSTORM", "id": "170318", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "169620", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "170899", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "170955", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "169732", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "171040", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "170317", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "170316", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "170753", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "169857", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "171016", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "169825", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "170555", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "171173", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "171043", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "170752", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "170096", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "170312", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "169858", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "170097", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "171042", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "171017", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "170754", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "170315", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "171260", "trust": 0.1 }, { "db": "CNNVD", "id": "CNNVD-202210-1031", "trust": 0.1 }, { "db": "VULHUB", "id": "VHN-429429", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "170956", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "171310", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "171144", "trust": 0.1 } ], "sources": [ { "db": "VULHUB", "id": "VHN-429429" }, { "db": "JVNDB", "id": "JVNDB-2022-023015" }, { "db": "PACKETSTORM", "id": "169732" }, { "db": "PACKETSTORM", "id": "169620" }, { "db": "PACKETSTORM", "id": "170956" }, { "db": "PACKETSTORM", "id": "170955" }, { "db": "PACKETSTORM", 
"id": "171310" }, { "db": "PACKETSTORM", "id": "170899" }, { "db": "PACKETSTORM", "id": "171144" }, { "db": "PACKETSTORM", "id": "170318" }, { "db": "PACKETSTORM", "id": "171040" }, { "db": "NVD", "id": "CVE-2022-40303" } ] }, "id": "VAR-202210-0997", "iot": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": true, "sources": [ { "db": "VULHUB", "id": "VHN-429429" } ], "trust": 0.01 }, "last_update_date": "2024-07-23T20:33:29.996000Z", "patch": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/patch#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "title": "HT213535", "trust": 0.8, "url": "https://security.netapp.com/advisory/ntap-20221209-0003/" } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2022-023015" } ] }, "problemtype_data": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "problemtype": "CWE-190", "trust": 1.1 }, { "problemtype": "Integer overflow or wraparound (CWE-190) [NVD evaluation ]", "trust": 0.8 } ], "sources": [ { "db": "VULHUB", "id": "VHN-429429" }, { "db": "JVNDB", "id": "JVNDB-2022-023015" }, { "db": "NVD", "id": "CVE-2022-40303" } ] }, "references": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/references#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 1.9, "url": "http://seclists.org/fulldisclosure/2022/dec/21" }, { "trust": 1.9, "url": "http://seclists.org/fulldisclosure/2022/dec/24" }, { "trust": 1.9, "url": "http://seclists.org/fulldisclosure/2022/dec/25" }, { "trust": 1.9, "url": "http://seclists.org/fulldisclosure/2022/dec/26" }, { 
"trust": 1.9, "url": "http://seclists.org/fulldisclosure/2022/dec/27" }, { "trust": 1.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-40303" }, { "trust": 1.1, "url": "https://security.netapp.com/advisory/ntap-20221209-0003/" }, { "trust": 1.1, "url": "https://support.apple.com/kb/ht213531" }, { "trust": 1.1, "url": "https://support.apple.com/kb/ht213533" }, { "trust": 1.1, "url": "https://support.apple.com/kb/ht213534" }, { "trust": 1.1, "url": "https://support.apple.com/kb/ht213535" }, { "trust": 1.1, "url": "https://support.apple.com/kb/ht213536" }, { "trust": 1.1, "url": "https://gitlab.gnome.org/gnome/libxml2/-/commit/c846986356fc149915a74972bf198abc266bc2c0" }, { "trust": 1.1, "url": "https://gitlab.gnome.org/gnome/libxml2/-/tags/v2.10.3" }, { "trust": 0.8, "url": "https://jvn.jp/vu/jvnvu99836374/index.html" }, { "trust": 0.8, "url": "https://jvn.jp/vu/jvnvu93250330/index.html" }, { "trust": 0.8, "url": "https://www.cisa.gov/news-events/ics-advisories/icsa-24-102-08" }, { "trust": 0.8, "url": "https://www.cisa.gov/news-events/ics-advisories/icsa-24-165-04" }, { "trust": 0.8, "url": "https://www.cisa.gov/news-events/ics-advisories/icsa-24-165-06" }, { "trust": 0.8, "url": "https://www.cisa.gov/news-events/ics-advisories/icsa-24-165-10" }, { "trust": 0.6, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-40304" }, { "trust": 0.6, "url": "https://bugzilla.redhat.com/):" }, { "trust": 0.6, "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2022-40304" }, { "trust": 0.6, "url": "https://access.redhat.com/security/team/contact/" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2022-40303" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2022-42011" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2022-42012" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2021-46848" }, { "trust": 0.5, "url": 
"https://access.redhat.com/security/cve/cve-2022-35737" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2022-43680" }, { "trust": 0.5, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-46848" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2022-42010" }, { "trust": 0.5, "url": "https://access.redhat.com/articles/11258" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2022-42898" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-1304" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-22662" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-26700" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-26717" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-26719" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-26709" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-26716" }, { "trust": 0.3, "url": "https://access.redhat.com/security/updates/classification/#moderate" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-22629" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-22628" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22628" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22624" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1304" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-22624" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-26710" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22662" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-30293" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22629" }, { "trust": 0.3, "url": "https://issues.jboss.org/):" }, { "trust": 0.3, "url": 
"https://access.redhat.com/security/cve/cve-2022-47629" }, { "trust": 0.3, "url": "https://access.redhat.com/security/updates/classification/#important" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2023-21835" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-2879" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2023-21843" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-2880" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-41715" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42012" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-35065" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-4883" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-46175" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-35065" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42010" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-44617" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-46285" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-43680" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-35737" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42011" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-25308" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-2953" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-2869" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-27404" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-2058" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-25310" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-25309" }, 
{ "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2057" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2058" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-41717" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-2521" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2519" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2056" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-27405" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-27406" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-2056" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-2868" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-2520" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-2867" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-2519" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-2057" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-23521" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-41903" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-23521" }, { "trust": 0.1, "url": "https://www.debian.org/security/" }, { "trust": 0.1, "url": "https://www.debian.org/security/faq" }, { "trust": 0.1, "url": "https://security-tracker.debian.org/tracker/libxml2" }, { "trust": 0.1, "url": "https://creativecommons.org/licenses/by-sa/2.5" }, { "trust": 0.1, "url": "https://security.gentoo.org/glsa/202210-39" }, { "trust": 0.1, "url": "https://security.gentoo.org/" }, { "trust": 0.1, "url": "https://bugs.gentoo.org." 
}, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26717" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2023:0709" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-27664" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26716" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26719" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html/serverless/index" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2016-3709" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/4.9/html/serverless/index" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26700" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26709" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26710" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/4.10/html/serverless/index" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/4.11/html/serverless/index" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-2509" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2509" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/4.8/html/serverless/index" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2016-3709" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-46175" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-3821" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-46285" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-3821" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2023:0634" }, { "trust": 0.1, "url": 
"https://nvd.nist.gov/vuln/detail/cve-2022-42898" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-44617" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-48303" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-4415" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2023:1174" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2521" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2520" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1122" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1122" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-25308" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.7/html-single/install/index#installing" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2023-22467" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-41912" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-3517" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2023:0630" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.7/html/release_notes/" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-30629" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-30629" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-22467" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-3517" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-41912" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-3775" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-37603" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2022-42920" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-24999" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2023:0934" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-24999" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-36567" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-37601" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-3787" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-2601" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2023-21830" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36567" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42867" }, { "trust": 0.1, "url": "https://www.apple.com/support/security/pgp/" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42849" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42842" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42866" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42845" }, { "trust": 0.1, "url": "https://support.apple.com/en-us/ht201222." }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42865" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42863" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42864" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42843" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42852" }, { "trust": 0.1, "url": "https://support.apple.com/kb/ht204641" }, { "trust": 0.1, "url": "https://support.apple.com/ht213536." 
}, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42837" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42859" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-4238" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-3064" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-23947" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-47629" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-3064" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4238" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-41903" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2023:0802" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2023-23947" } ], "sources": [ { "db": "VULHUB", "id": "VHN-429429" }, { "db": "JVNDB", "id": "JVNDB-2022-023015" }, { "db": "PACKETSTORM", "id": "169732" }, { "db": "PACKETSTORM", "id": "169620" }, { "db": "PACKETSTORM", "id": "170956" }, { "db": "PACKETSTORM", "id": "170955" }, { "db": "PACKETSTORM", "id": "171310" }, { "db": "PACKETSTORM", "id": "170899" }, { "db": "PACKETSTORM", "id": "171144" }, { "db": "PACKETSTORM", "id": "170318" }, { "db": "PACKETSTORM", "id": "171040" }, { "db": "NVD", "id": "CVE-2022-40303" } ] }, "sources": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "VULHUB", "id": "VHN-429429" }, { "db": "JVNDB", "id": "JVNDB-2022-023015" }, { "db": "PACKETSTORM", "id": "169732" }, { "db": "PACKETSTORM", "id": "169620" }, { "db": "PACKETSTORM", "id": "170956" }, { "db": "PACKETSTORM", "id": "170955" }, { "db": "PACKETSTORM", "id": "171310" }, { "db": "PACKETSTORM", "id": "170899" }, { "db": "PACKETSTORM", "id": "171144" }, { "db": "PACKETSTORM", "id": "170318" }, { "db": "PACKETSTORM", "id": "171040" }, { "db": "NVD", "id": "CVE-2022-40303" } ] }, 
"sources_release_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2022-11-23T00:00:00", "db": "VULHUB", "id": "VHN-429429" }, { "date": "2023-11-24T00:00:00", "db": "JVNDB", "id": "JVNDB-2022-023015" }, { "date": "2022-11-07T15:19:42", "db": "PACKETSTORM", "id": "169732" }, { "date": "2022-11-01T13:29:06", "db": "PACKETSTORM", "id": "169620" }, { "date": "2023-02-10T15:49:15", "db": "PACKETSTORM", "id": "170956" }, { "date": "2023-02-10T15:48:32", "db": "PACKETSTORM", "id": "170955" }, { "date": "2023-03-09T15:14:10", "db": "PACKETSTORM", "id": "171310" }, { "date": "2023-02-08T16:02:01", "db": "PACKETSTORM", "id": "170899" }, { "date": "2023-02-28T16:03:55", "db": "PACKETSTORM", "id": "171144" }, { "date": "2022-12-22T02:13:22", "db": "PACKETSTORM", "id": "170318" }, { "date": "2023-02-17T16:01:57", "db": "PACKETSTORM", "id": "171040" }, { "date": "2022-11-23T00:15:11.007000", "db": "NVD", "id": "CVE-2022-40303" } ] }, "sources_update_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2023-01-11T00:00:00", "db": "VULHUB", "id": "VHN-429429" }, { "date": "2024-06-17T07:14:00", "db": "JVNDB", "id": "JVNDB-2022-023015" }, { "date": "2023-11-07T03:52:15.280000", "db": "NVD", "id": "CVE-2022-40303" } ] }, "title": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/title#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "xmlsoft.org\u00a0 of \u00a0libxml2\u00a0 Integer overflow vulnerability in products from other vendors", "sources": [ { "db": "JVNDB", "id": "JVNDB-2022-023015" } ], "trust": 0.8 }, "type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "arbitrary, code execution", 
"sources": [ { "db": "PACKETSTORM", "id": "169620" } ], "trust": 0.1 } }
var-202101-0564
Vulnerability from variot
A flaw exists in binutils in bfd/pef.c. An attacker who is able to submit a crafted PEF file to be parsed by objdump could cause a heap buffer overflow leading to an out-of-bounds read, with an impact on application availability. This flaw affects binutils versions prior to 2.34. binutils contains input validation, heap-based buffer overflow, and out-of-bounds read vulnerabilities, which may result in a denial-of-service (DoS) condition. GNU Binutils (GNU Binary Utilities, or binutils) is a set of programming-language tool programs developed by the GNU community. It is primarily designed to handle object files in various formats and provides linkers, assemblers, and other tools for object files and archives. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - Gentoo Linux Security Advisory GLSA 202107-24
https://security.gentoo.org/
Severity: Normal Title: Binutils: Multiple vulnerabilities Date: July 10, 2021 Bugs: #678806, #761957, #764170 ID: 202107-24
Synopsis
Multiple vulnerabilities have been found in Binutils, the worst of which could result in a Denial of Service condition.
Background
The GNU Binutils are a collection of tools to create, modify and analyse binary files. Many of the files use BFD, the Binary File Descriptor library, to do low-level manipulation.
Affected packages
-------------------------------------------------------------------
Package / Vulnerable / Unaffected
-------------------------------------------------------------------
1 sys-devel/binutils < 2.35.2 >= 2.35.2
Description
Multiple vulnerabilities have been discovered in Binutils. Please review the CVE identifiers referenced below for details.
Impact
Please review the referenced CVE identifiers for details.
Workaround
There is no known workaround at this time.
Resolution
All Binutils users should upgrade to the latest version:
# emerge --sync # emerge --ask --oneshot --verbose ">=sys-devel/binutils-2.35.2"
References
[ 1 ] CVE-2019-9070 https://nvd.nist.gov/vuln/detail/CVE-2019-9070 [ 2 ] CVE-2019-9071 https://nvd.nist.gov/vuln/detail/CVE-2019-9071 [ 3 ] CVE-2019-9072 https://nvd.nist.gov/vuln/detail/CVE-2019-9072 [ 4 ] CVE-2019-9073 https://nvd.nist.gov/vuln/detail/CVE-2019-9073 [ 5 ] CVE-2019-9074 https://nvd.nist.gov/vuln/detail/CVE-2019-9074 [ 6 ] CVE-2019-9075 https://nvd.nist.gov/vuln/detail/CVE-2019-9075 [ 7 ] CVE-2019-9076 https://nvd.nist.gov/vuln/detail/CVE-2019-9076 [ 8 ] CVE-2019-9077 https://nvd.nist.gov/vuln/detail/CVE-2019-9077 [ 9 ] CVE-2020-19599 https://nvd.nist.gov/vuln/detail/CVE-2020-19599 [ 10 ] CVE-2020-35448 https://nvd.nist.gov/vuln/detail/CVE-2020-35448 [ 11 ] CVE-2020-35493 https://nvd.nist.gov/vuln/detail/CVE-2020-35493 [ 12 ] CVE-2020-35494 https://nvd.nist.gov/vuln/detail/CVE-2020-35494 [ 13 ] CVE-2020-35495 https://nvd.nist.gov/vuln/detail/CVE-2020-35495 [ 14 ] CVE-2020-35496 https://nvd.nist.gov/vuln/detail/CVE-2020-35496 [ 15 ] CVE-2020-35507 https://nvd.nist.gov/vuln/detail/CVE-2020-35507
Availability
This GLSA and any updates to it are available for viewing at the Gentoo Security Website:
https://security.gentoo.org/glsa/202107-24
Concerns?
Security is a primary focus of Gentoo Linux and ensuring the confidentiality and security of our users' machines is of utmost importance to us. Any security concerns should be addressed to security@gentoo.org or alternatively, you may file a bug at https://bugs.gentoo.org.
License
Copyright 2021 Gentoo Foundation, Inc; referenced text belongs to its owner(s).
The contents of this document are licensed under the Creative Commons - Attribution / Share Alike license.
https://creativecommons.org/licenses/by-sa/2.5
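The advisory above resolves the flaw by requiring binutils at or above 2.35.2, while the CVE itself affects versions prior to 2.34. As a rough illustration of that remediation check, the sketch below compares a dotted version string against a fixed-release threshold; the function names and the simple dotted-only parsing are assumptions for illustration, not part of any official tooling.

```python
def parse_version(v):
    """Split a dotted version string like '2.35.2' into an integer tuple."""
    return tuple(int(part) for part in v.split("."))


def is_vulnerable(installed, fixed="2.35.2"):
    """Return True when the installed binutils predates the fixed release.

    Tuple comparison handles differing lengths as expected:
    (2, 34) < (2, 35, 2) is True.
    """
    return parse_version(installed) < parse_version(fixed)


print(is_vulnerable("2.34"))    # older than the GLSA's fixed version
print(is_vulnerable("2.35.2"))  # exactly the fixed version
```

A real scanner would also have to cope with distribution-specific suffixes (e.g. Gentoo's `-r4` revisions), which this sketch deliberately ignores.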
Show details on source website{ "@context": { "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#", "affected_products": { "@id": "https://www.variotdbs.pl/ref/affected_products" }, "configurations": { "@id": "https://www.variotdbs.pl/ref/configurations" }, "credits": { "@id": "https://www.variotdbs.pl/ref/credits" }, "cvss": { "@id": "https://www.variotdbs.pl/ref/cvss/" }, "description": { "@id": "https://www.variotdbs.pl/ref/description/" }, "exploit_availability": { "@id": "https://www.variotdbs.pl/ref/exploit_availability/" }, "external_ids": { "@id": "https://www.variotdbs.pl/ref/external_ids/" }, "iot": { "@id": "https://www.variotdbs.pl/ref/iot/" }, "iot_taxonomy": { "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/" }, "patch": { "@id": "https://www.variotdbs.pl/ref/patch/" }, "problemtype_data": { "@id": "https://www.variotdbs.pl/ref/problemtype_data/" }, "references": { "@id": "https://www.variotdbs.pl/ref/references/" }, "sources": { "@id": "https://www.variotdbs.pl/ref/sources/" }, "sources_release_date": { "@id": "https://www.variotdbs.pl/ref/sources_release_date/" }, "sources_update_date": { "@id": "https://www.variotdbs.pl/ref/sources_update_date/" }, "threat_type": { "@id": "https://www.variotdbs.pl/ref/threat_type/" }, "title": { "@id": "https://www.variotdbs.pl/ref/title/" }, "type": { "@id": "https://www.variotdbs.pl/ref/type/" } }, "@id": "https://www.variotdbs.pl/vuln/VAR-202101-0564", "affected_products": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/affected_products#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "model": "fedora", "scope": "eq", "trust": 1.0, "vendor": "fedoraproject", "version": "32" }, { "model": "ontap select deploy administration utility", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "solidfire \\\u0026 hci management 
node", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "brocade fabric operating system", "scope": "eq", "trust": 1.0, "vendor": "broadcom", "version": null }, { "model": "solidfire\\, enterprise sds \\\u0026 hci storage node", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "binutils", "scope": "lt", "trust": 1.0, "vendor": "gnu", "version": "2.34" }, { "model": "hci compute node", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "cloud backup", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "fedora", "scope": null, "trust": 0.8, "vendor": "fedora", "version": null }, { "model": "hci compute node", "scope": null, "trust": 0.8, "vendor": "netapp", "version": null }, { "model": "ontap select deploy administration utility", "scope": null, "trust": 0.8, "vendor": "netapp", "version": null }, { "model": "hci management node", "scope": null, "trust": 0.8, "vendor": "netapp", "version": null }, { "model": "solidfire", "scope": null, "trust": 0.8, "vendor": "netapp", "version": null }, { "model": "binutils", "scope": null, "trust": 0.8, "vendor": "gnu", "version": null } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2020-017191" }, { "db": "NVD", "id": "CVE-2020-35493" } ] }, "configurations": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/configurations#", "children": { "@container": "@list" }, "cpe_match": { "@container": "@list" }, "data": { "@container": "@list" }, "nodes": { "@container": "@list" } }, "data": [ { "CVE_data_version": "4.0", "nodes": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:gnu:binutils:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "2.34", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:32:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": 
"cpe:2.3:a:netapp:cloud_backup:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:ontap_select_deploy_administration_utility:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:solidfire_\\\u0026_hci_management_node:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:solidfire\\,_enterprise_sds_\\\u0026_hci_storage_node:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:broadcom:brocade_fabric_operating_system_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:hci_compute_node_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:hci_compute_node:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" } ] } ], "sources": [ { "db": "NVD", "id": "CVE-2020-35493" } ] }, "credits": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/credits#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Gentoo", "sources": [ { "db": "PACKETSTORM", "id": "163455" } ], "trust": 0.1 }, "cve": "CVE-2020-35493", "cvss": { "@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2" }, "cvssV3": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, "severity": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": "https://www.variotdbs.pl/ref/cvss/severity" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, 
"@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "cvssV2": [ { "acInsufInfo": false, "accessComplexity": "MEDIUM", "accessVector": "NETWORK", "authentication": "NONE", "author": "NVD", "availabilityImpact": "PARTIAL", "baseScore": 4.3, "confidentialityImpact": "NONE", "exploitabilityScore": 8.6, "impactScore": 2.9, "integrityImpact": "NONE", "obtainAllPrivilege": false, "obtainOtherPrivilege": false, "obtainUserPrivilege": false, "severity": "MEDIUM", "trust": 1.0, "userInteractionRequired": true, "vectorString": "AV:N/AC:M/Au:N/C:N/I:N/A:P", "version": "2.0" }, { "acInsufInfo": null, "accessComplexity": "Medium", "accessVector": "Network", "authentication": "None", "author": "NVD", "availabilityImpact": "Partial", "baseScore": 4.3, "confidentialityImpact": "None", "exploitabilityScore": null, "id": "CVE-2020-35493", "impactScore": null, "integrityImpact": "None", "obtainAllPrivilege": null, "obtainOtherPrivilege": null, "obtainUserPrivilege": null, "severity": "Medium", "trust": 0.9, "userInteractionRequired": null, "vectorString": "AV:N/AC:M/Au:N/C:N/I:N/A:P", "version": "2.0" }, { "accessComplexity": "MEDIUM", "accessVector": "NETWORK", "authentication": "NONE", "author": "VULHUB", "availabilityImpact": "PARTIAL", "baseScore": 4.3, "confidentialityImpact": "NONE", "exploitabilityScore": 8.6, "id": "VHN-377689", "impactScore": 2.9, "integrityImpact": "NONE", "severity": "MEDIUM", "trust": 0.1, "vectorString": "AV:N/AC:M/AU:N/C:N/I:N/A:P", "version": "2.0" } ], "cvssV3": [ { "attackComplexity": "LOW", "attackVector": "LOCAL", "author": "NVD", "availabilityImpact": "HIGH", "baseScore": 5.5, "baseSeverity": "MEDIUM", "confidentialityImpact": "NONE", "exploitabilityScore": 1.8, "impactScore": 3.6, "integrityImpact": "NONE", "privilegesRequired": "NONE", "scope": "UNCHANGED", "trust": 1.0, "userInteraction": "REQUIRED", "vectorString": "CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:U/C:N/I:N/A:H", "version": "3.1" }, { "attackComplexity": "Low", "attackVector": 
"Local", "author": "NVD", "availabilityImpact": "High", "baseScore": 5.5, "baseSeverity": "Medium", "confidentialityImpact": "None", "exploitabilityScore": null, "id": "CVE-2020-35493", "impactScore": null, "integrityImpact": "None", "privilegesRequired": "None", "scope": "Unchanged", "trust": 0.8, "userInteraction": "Required", "vectorString": "CVSS:3.0/AV:L/AC:L/PR:N/UI:R/S:U/C:N/I:N/A:H", "version": "3.0" } ], "severity": [ { "author": "NVD", "id": "CVE-2020-35493", "trust": 1.8, "value": "MEDIUM" }, { "author": "CNNVD", "id": "CNNVD-202101-099", "trust": 0.6, "value": "MEDIUM" }, { "author": "VULHUB", "id": "VHN-377689", "trust": 0.1, "value": "MEDIUM" }, { "author": "VULMON", "id": "CVE-2020-35493", "trust": 0.1, "value": "MEDIUM" } ] } ], "sources": [ { "db": "VULHUB", "id": "VHN-377689" }, { "db": "VULMON", "id": "CVE-2020-35493" }, { "db": "JVNDB", "id": "JVNDB-2020-017191" }, { "db": "NVD", "id": "CVE-2020-35493" }, { "db": "CNNVD", "id": "CNNVD-202101-099" } ] }, "description": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "A flaw exists in binutils in bfd/pef.c. An attacker who is able to submit a crafted PEF file to be parsed by objdump could cause a heap buffer overflow -\u003e out-of-bounds read that could lead to an impact to application availability. This flaw affects binutils versions prior to 2.34. binutils There are input validation vulnerabilities, heap-based buffer overflow vulnerabilities, and out-of-bounds read vulnerabilities.Service operation interruption (DoS) It may be in a state. GNU Binutils (GNU Binary Utilities or binutils) is a set of programming language tool programs developed by the GNU community. The program is primarily designed to handle object files in various formats and provides linkers, assemblers, and other tools for object files and archives. 
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\nGentoo Linux Security Advisory GLSA 202107-24\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n https://security.gentoo.org/\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\n Severity: Normal\n Title: Binutils: Multiple vulnerabilities\n Date: July 10, 2021\n Bugs: #678806, #761957, #764170\n ID: 202107-24\n\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\nSynopsis\n========\n\nMultiple vulnerabilities have been found in Binutils, the worst of\nwhich could result in a Denial of Service condition. \n\nBackground\n==========\n\nThe GNU Binutils are a collection of tools to create, modify and\nanalyse binary files. Many of the files use BFD, the Binary File\nDescriptor library, to do low-level manipulation. \n\nAffected packages\n=================\n\n -------------------------------------------------------------------\n Package / Vulnerable / Unaffected\n -------------------------------------------------------------------\n 1 sys-devel/binutils \u003c 2.35.2 \u003e= 2.35.2 \n\nDescription\n===========\n\nMultiple vulnerabilities have been discovered in Binutils. Please\nreview the CVE identifiers referenced below for details. \n\nImpact\n======\n\nPlease review the referenced CVE identifiers for details. \n\nWorkaround\n==========\n\nThere is no known workaround at this time. 
\n\nResolution\n==========\n\nAll Binutils users should upgrade to the latest version:\n\n # emerge --sync\n # emerge --ask --oneshot --verbose \"\u003e=sys-devel/binutils-2.35.2\"\n\nReferences\n==========\n\n[ 1 ] CVE-2019-9070\n https://nvd.nist.gov/vuln/detail/CVE-2019-9070\n[ 2 ] CVE-2019-9071\n https://nvd.nist.gov/vuln/detail/CVE-2019-9071\n[ 3 ] CVE-2019-9072\n https://nvd.nist.gov/vuln/detail/CVE-2019-9072\n[ 4 ] CVE-2019-9073\n https://nvd.nist.gov/vuln/detail/CVE-2019-9073\n[ 5 ] CVE-2019-9074\n https://nvd.nist.gov/vuln/detail/CVE-2019-9074\n[ 6 ] CVE-2019-9075\n https://nvd.nist.gov/vuln/detail/CVE-2019-9075\n[ 7 ] CVE-2019-9076\n https://nvd.nist.gov/vuln/detail/CVE-2019-9076\n[ 8 ] CVE-2019-9077\n https://nvd.nist.gov/vuln/detail/CVE-2019-9077\n[ 9 ] CVE-2020-19599\n https://nvd.nist.gov/vuln/detail/CVE-2020-19599\n[ 10 ] CVE-2020-35448\n https://nvd.nist.gov/vuln/detail/CVE-2020-35448\n[ 11 ] CVE-2020-35493\n https://nvd.nist.gov/vuln/detail/CVE-2020-35493\n[ 12 ] CVE-2020-35494\n https://nvd.nist.gov/vuln/detail/CVE-2020-35494\n[ 13 ] CVE-2020-35495\n https://nvd.nist.gov/vuln/detail/CVE-2020-35495\n[ 14 ] CVE-2020-35496\n https://nvd.nist.gov/vuln/detail/CVE-2020-35496\n[ 15 ] CVE-2020-35507\n https://nvd.nist.gov/vuln/detail/CVE-2020-35507\n\nAvailability\n============\n\nThis GLSA and any updates to it are available for viewing at\nthe Gentoo Security Website:\n\n https://security.gentoo.org/glsa/202107-24\n\nConcerns?\n=========\n\nSecurity is a primary focus of Gentoo Linux and ensuring the\nconfidentiality and security of our users\u0027 machines is of utmost\nimportance to us. Any security concerns should be addressed to\nsecurity@gentoo.org or alternatively, you may file a bug at\nhttps://bugs.gentoo.org. \n\nLicense\n=======\n\nCopyright 2021 Gentoo Foundation, Inc; referenced text\nbelongs to its owner(s). \n\nThe contents of this document are licensed under the\nCreative Commons - Attribution / Share Alike license. 
\n\nhttps://creativecommons.org/licenses/by-sa/2.5\n\n", "sources": [ { "db": "NVD", "id": "CVE-2020-35493" }, { "db": "JVNDB", "id": "JVNDB-2020-017191" }, { "db": "VULHUB", "id": "VHN-377689" }, { "db": "VULMON", "id": "CVE-2020-35493" }, { "db": "PACKETSTORM", "id": "163455" } ], "trust": 1.89 }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "NVD", "id": "CVE-2020-35493", "trust": 3.5 }, { "db": "PACKETSTORM", "id": "163455", "trust": 0.8 }, { "db": "JVNDB", "id": "JVNDB-2020-017191", "trust": 0.8 }, { "db": "CNNVD", "id": "CNNVD-202101-099", "trust": 0.7 }, { "db": "AUSCERT", "id": "ESB-2021.3660", "trust": 0.6 }, { "db": "VULHUB", "id": "VHN-377689", "trust": 0.1 }, { "db": "VULMON", "id": "CVE-2020-35493", "trust": 0.1 } ], "sources": [ { "db": "VULHUB", "id": "VHN-377689" }, { "db": "VULMON", "id": "CVE-2020-35493" }, { "db": "JVNDB", "id": "JVNDB-2020-017191" }, { "db": "PACKETSTORM", "id": "163455" }, { "db": "NVD", "id": "CVE-2020-35493" }, { "db": "CNNVD", "id": "CNNVD-202101-099" } ] }, "id": "VAR-202101-0564", "iot": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": true, "sources": [ { "db": "VULHUB", "id": "VHN-377689" } ], "trust": 0.01 }, "last_update_date": "2023-12-18T11:33:27.715000Z", "patch": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/patch#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "title": "Bug\u00a025307 NetAppNetApp\u00a0Advisory", "trust": 0.8, "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/4kok3qwsvoujwj54hvgifwnlwq5zy4s6/" }, { 
"title": "GNU binutils Security vulnerabilities", "trust": 0.6, "url": "http://www.cnnvd.org.cn/web/xxk/bdxqbyid.tag?id=138354" }, { "title": "", "trust": 0.1, "url": "https://github.com/vincent-deng/veracode-container-security-finding-parser " } ], "sources": [ { "db": "VULMON", "id": "CVE-2020-35493" }, { "db": "JVNDB", "id": "JVNDB-2020-017191" }, { "db": "CNNVD", "id": "CNNVD-202101-099" } ] }, "problemtype_data": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "problemtype": "CWE-20", "trust": 1.1 }, { "problemtype": "Heap-based buffer overflow (CWE-122) [ others ]", "trust": 0.8 }, { "problemtype": " Out-of-bounds read (CWE-125) [ others ]", "trust": 0.8 }, { "problemtype": " Inappropriate input confirmation (CWE-20) [ others ]", "trust": 0.8 }, { "problemtype": "CWE-122", "trust": 0.1 }, { "problemtype": "CWE-125", "trust": 0.1 } ], "sources": [ { "db": "VULHUB", "id": "VHN-377689" }, { "db": "JVNDB", "id": "JVNDB-2020-017191" }, { "db": "NVD", "id": "CVE-2020-35493" } ] }, "references": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/references#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 2.6, "url": "https://bugzilla.redhat.com/show_bug.cgi?id=1911437" }, { "trust": 1.9, "url": "https://security.gentoo.org/glsa/202107-24" }, { "trust": 1.8, "url": "https://security.netapp.com/advisory/ntap-20210212-0007/" }, { "trust": 1.5, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35493" }, { "trust": 1.0, "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/4kok3qwsvoujwj54hvgifwnlwq5zy4s6/" }, { "trust": 0.8, "url": 
"https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/4kok3qwsvoujwj54hvgifwnlwq5zy4s6/" }, { "trust": 0.6, "url": "https://www.ibm.com/blogs/psirt/security-bulletin-multiple-vulnerabilities-in-gnu-binutils-affect-ibm-netezza-analytics/" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.3660" }, { "trust": 0.6, "url": "https://vigilance.fr/vulnerability/binutils-buffer-overflow-via-bfd-pef-parse-function-stub-34252" }, { "trust": 0.6, "url": "https://www.ibm.com/blogs/psirt/security-bulletin-multiple-vulnerabilities-in-gnu-binutils-affect-ibm-netezza-analytics-for-nps/" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/163455/gentoo-linux-security-advisory-202107-24.html" }, { "trust": 0.6, "url": "https://www.ibm.com/blogs/psirt/security-bulletin-multiple-vulnerabilities-in-gnu-binutils-affect-ibm-netezza-performance-server/" }, { "trust": 0.1, "url": "https://cwe.mitre.org/data/definitions/20.html" }, { "trust": 0.1, "url": "https://cwe.mitre.org/data/definitions/125.html" }, { "trust": 0.1, "url": "https://cwe.mitre.org/data/definitions/122.html" }, { "trust": 0.1, "url": "https://github.com/live-hack-cve/cve-2020-35493" }, { "trust": 0.1, "url": "https://nvd.nist.gov" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35495" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-19599" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-9071" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-9077" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-9073" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-9072" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35448" }, { "trust": 0.1, "url": "https://security.gentoo.org/" }, { "trust": 0.1, "url": "https://creativecommons.org/licenses/by-sa/2.5" }, { "trust": 0.1, "url": 
"https://nvd.nist.gov/vuln/detail/cve-2019-9074" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35507" }, { "trust": 0.1, "url": "https://bugs.gentoo.org." }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-9070" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35496" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-9076" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-9075" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35494" } ], "sources": [ { "db": "VULHUB", "id": "VHN-377689" }, { "db": "VULMON", "id": "CVE-2020-35493" }, { "db": "JVNDB", "id": "JVNDB-2020-017191" }, { "db": "PACKETSTORM", "id": "163455" }, { "db": "NVD", "id": "CVE-2020-35493" }, { "db": "CNNVD", "id": "CNNVD-202101-099" } ] }, "sources": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "VULHUB", "id": "VHN-377689" }, { "db": "VULMON", "id": "CVE-2020-35493" }, { "db": "JVNDB", "id": "JVNDB-2020-017191" }, { "db": "PACKETSTORM", "id": "163455" }, { "db": "NVD", "id": "CVE-2020-35493" }, { "db": "CNNVD", "id": "CNNVD-202101-099" } ] }, "sources_release_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2021-01-04T00:00:00", "db": "VULHUB", "id": "VHN-377689" }, { "date": "2021-01-04T00:00:00", "db": "VULMON", "id": "CVE-2020-35493" }, { "date": "2022-06-29T00:00:00", "db": "JVNDB", "id": "JVNDB-2020-017191" }, { "date": "2021-07-11T12:01:11", "db": "PACKETSTORM", "id": "163455" }, { "date": "2021-01-04T15:15:12.777000", "db": "NVD", "id": "CVE-2020-35493" }, { "date": "2021-01-04T00:00:00", "db": "CNNVD", "id": "CNNVD-202101-099" } ] }, "sources_update_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { "date": 
"2022-09-02T00:00:00", "db": "VULHUB", "id": "VHN-377689" }, { "date": "2022-09-02T00:00:00", "db": "VULMON", "id": "CVE-2020-35493" }, { "date": "2022-06-29T05:11:00", "db": "JVNDB", "id": "JVNDB-2020-017191" }, { "date": "2023-11-07T03:21:55.440000", "db": "NVD", "id": "CVE-2020-35493" }, { "date": "2022-09-05T00:00:00", "db": "CNNVD", "id": "CNNVD-202101-099" } ] }, "threat_type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/threat_type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "local", "sources": [ { "db": "CNNVD", "id": "CNNVD-202101-099" } ], "trust": 0.6 }, "title": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/title#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "binutils\u00a0 Input verification vulnerability in", "sources": [ { "db": "JVNDB", "id": "JVNDB-2020-017191" } ], "trust": 0.8 }, "type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "input validation error", "sources": [ { "db": "CNNVD", "id": "CNNVD-202101-099" } ], "trust": 0.6 } }
var-202101-1926
Vulnerability from variot
Sudo before 1.9.5p2 contains an off-by-one error that can result in a heap-based buffer overflow, which allows privilege escalation to root via "sudoedit -s" and a command-line argument that ends with a single backslash character.

Summary:
Red Hat Ansible Automation Platform Resource Operator 1.2 (technical preview) images that fix several security issues.

Description:

Red Hat Ansible Automation Platform Resource Operator container images with security fixes.

Ansible Automation Platform manages Ansible Platform jobs and workflows that can interface with any infrastructure on a Red Hat OpenShift Container Platform cluster, or on a traditional infrastructure that is running off-cluster.

Solution:

Before applying this update, make sure all previously released errata relevant to your system have been applied.

Bugs fixed (https://bugzilla.redhat.com/):

1914774 - CVE-2021-20178 ansible: user data leak in snmp_facts module
1915808 - CVE-2021-20180 ansible module: bitbucket_pipeline_variable exposes secured values
1916813 - CVE-2021-20191 ansible: multiple modules expose secured values
1925002 - CVE-2021-20228 ansible: basic.py no_log with fallback option
1939349 - CVE-2021-3447 ansible: multiple modules expose secured values
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
====================================================================
Red Hat Security Advisory
Synopsis:          Important: sudo security update
Advisory ID:       RHSA-2021:0221-01
Product:           Red Hat Enterprise Linux
Advisory URL:      https://access.redhat.com/errata/RHSA-2021:0221
Issue date:        2021-01-26
CVE Names:         CVE-2021-3156
====================================================================
1. Summary:
An update for sudo is now available for Red Hat Enterprise Linux 7.
Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
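The 7.8 base score that NVD assigns to CVE-2021-3156 (vector CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H, recorded in the cvssV3 data later in this entry) can be reproduced from the CVSS v3.1 specification. A minimal sketch that covers only the metric values appearing in that vector:

```python
# Metric weights from the CVSS v3.1 specification (scope unchanged).
WEIGHTS = {"AV": {"L": 0.55}, "AC": {"L": 0.77}, "PR": {"L": 0.62},
           "UI": {"N": 0.85}, "CIA": {"H": 0.56, "L": 0.22, "N": 0.0}}

def roundup(x):
    """Spec-defined rounding: smallest value with one decimal >= x."""
    i = round(x * 100000)
    return i / 100000 if i % 10000 == 0 else (i // 10000 + 1) / 10

def base_score(av, ac, pr, ui, c, i, a):
    """CVSS v3.1 base score for an unchanged-scope vector."""
    iss = 1 - (1 - WEIGHTS["CIA"][c]) * (1 - WEIGHTS["CIA"][i]) \
        * (1 - WEIGHTS["CIA"][a])
    impact = 6.42 * iss                      # scope unchanged
    exploitability = 8.22 * WEIGHTS["AV"][av] * WEIGHTS["AC"][ac] \
        * WEIGHTS["PR"][pr] * WEIGHTS["UI"][ui]
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

# CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H
print(base_score("L", "L", "L", "N", "H", "H", "H"))  # 7.8
```

The intermediate values (impact ~5.873, exploitability ~1.835) also match the impactScore 5.9 and exploitabilityScore 1.8 recorded in this entry's cvssV3 data.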
2. Relevant releases/architectures:
Red Hat Enterprise Linux Client (v. 7) - x86_64
Red Hat Enterprise Linux Client Optional (v. 7) - x86_64
Red Hat Enterprise Linux ComputeNode (v. 7) - x86_64
Red Hat Enterprise Linux ComputeNode Optional (v. 7) - x86_64
Red Hat Enterprise Linux Server (v. 7) - ppc64, ppc64le, s390x, x86_64
Red Hat Enterprise Linux Server Optional (v. 7) - ppc64, ppc64le, s390x, x86_64
Red Hat Enterprise Linux Workstation (v. 7) - x86_64
Red Hat Enterprise Linux Workstation Optional (v. 7) - x86_64
3. Description:
The sudo packages contain the sudo utility which allows system administrators to provide certain users with the permission to execute privileged commands, which are used for system management purposes, without having to log in as root.
Security Fix(es):
* sudo: Heap buffer overflow in argument parsing (CVE-2021-3156)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
4. Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/11258
5. Bugs fixed (https://bugzilla.redhat.com/):
1917684 - CVE-2021-3156 sudo: Heap buffer overflow in argument parsing
6. Package List:
Red Hat Enterprise Linux Client (v. 7):

Source:
sudo-1.8.23-10.el7_9.1.src.rpm

x86_64:
sudo-1.8.23-10.el7_9.1.x86_64.rpm
sudo-debuginfo-1.8.23-10.el7_9.1.x86_64.rpm

Red Hat Enterprise Linux Client Optional (v. 7):

x86_64:
sudo-debuginfo-1.8.23-10.el7_9.1.i686.rpm
sudo-debuginfo-1.8.23-10.el7_9.1.x86_64.rpm
sudo-devel-1.8.23-10.el7_9.1.i686.rpm
sudo-devel-1.8.23-10.el7_9.1.x86_64.rpm

Red Hat Enterprise Linux ComputeNode (v. 7):

Source:
sudo-1.8.23-10.el7_9.1.src.rpm

x86_64:
sudo-1.8.23-10.el7_9.1.x86_64.rpm
sudo-debuginfo-1.8.23-10.el7_9.1.x86_64.rpm

Red Hat Enterprise Linux ComputeNode Optional (v. 7):

x86_64:
sudo-debuginfo-1.8.23-10.el7_9.1.i686.rpm
sudo-debuginfo-1.8.23-10.el7_9.1.x86_64.rpm
sudo-devel-1.8.23-10.el7_9.1.i686.rpm
sudo-devel-1.8.23-10.el7_9.1.x86_64.rpm

Red Hat Enterprise Linux Server (v. 7):

Source:
sudo-1.8.23-10.el7_9.1.src.rpm

ppc64:
sudo-1.8.23-10.el7_9.1.ppc64.rpm
sudo-debuginfo-1.8.23-10.el7_9.1.ppc64.rpm

ppc64le:
sudo-1.8.23-10.el7_9.1.ppc64le.rpm
sudo-debuginfo-1.8.23-10.el7_9.1.ppc64le.rpm

s390x:
sudo-1.8.23-10.el7_9.1.s390x.rpm
sudo-debuginfo-1.8.23-10.el7_9.1.s390x.rpm

x86_64:
sudo-1.8.23-10.el7_9.1.x86_64.rpm
sudo-debuginfo-1.8.23-10.el7_9.1.x86_64.rpm

Red Hat Enterprise Linux Server Optional (v. 7):

ppc64:
sudo-debuginfo-1.8.23-10.el7_9.1.ppc.rpm
sudo-debuginfo-1.8.23-10.el7_9.1.ppc64.rpm
sudo-devel-1.8.23-10.el7_9.1.ppc.rpm
sudo-devel-1.8.23-10.el7_9.1.ppc64.rpm

ppc64le:
sudo-debuginfo-1.8.23-10.el7_9.1.ppc64le.rpm
sudo-devel-1.8.23-10.el7_9.1.ppc64le.rpm

s390x:
sudo-debuginfo-1.8.23-10.el7_9.1.s390.rpm
sudo-debuginfo-1.8.23-10.el7_9.1.s390x.rpm
sudo-devel-1.8.23-10.el7_9.1.s390.rpm
sudo-devel-1.8.23-10.el7_9.1.s390x.rpm

x86_64:
sudo-debuginfo-1.8.23-10.el7_9.1.i686.rpm
sudo-debuginfo-1.8.23-10.el7_9.1.x86_64.rpm
sudo-devel-1.8.23-10.el7_9.1.i686.rpm
sudo-devel-1.8.23-10.el7_9.1.x86_64.rpm

Red Hat Enterprise Linux Workstation (v. 7):

Source:
sudo-1.8.23-10.el7_9.1.src.rpm

x86_64:
sudo-1.8.23-10.el7_9.1.x86_64.rpm
sudo-debuginfo-1.8.23-10.el7_9.1.x86_64.rpm

Red Hat Enterprise Linux Workstation Optional (v. 7):

x86_64:
sudo-debuginfo-1.8.23-10.el7_9.1.i686.rpm
sudo-debuginfo-1.8.23-10.el7_9.1.x86_64.rpm
sudo-devel-1.8.23-10.el7_9.1.i686.rpm
sudo-devel-1.8.23-10.el7_9.1.x86_64.rpm
These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
7. References:
https://access.redhat.com/security/cve/CVE-2021-3156
https://access.redhat.com/security/updates/classification/#important
https://access.redhat.com/security/vulnerabilities/RHSB-2021-002
8. Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2021 Red Hat, Inc.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQIVAwUBYBB9QtzjgjWX9erEAQjMkQ/+PUDUX16Tnzqt7l1CsDAkHsT89EyY1keR
5XAlnrEv0nfw+/Feb2zhjlAlGbZSE1pTHOB4WarZzz2edZW5PRDw2SnljPToGoF2
6e4rlxRMJzFzc1WiOl5VgIq2LsOrqE1x3smwx7UGloMNmld/wgNKzFyddlR3ya0/
k78GAgUD2K/riILpeSG9M3jkK6IX/ecAOV8cK4GnmVAyrc/I0ud+wp+AFaQdKOUd
DJ08C4ktxCEDZnCMV7X0fheoVB08T2VUPqM3AT0mP8Q07RWElFNAYYzS0/0ABGdd
G/bRXDOiP0Qp92gMjWi4zu8JJk1Yyt8vnXII30gr2dd4f/8O0X6N+fntkhpc86N0
mdXrPNBDXC6YJqahqtTH3ZMNWj37kSX5O0QIxRMMySIuPEhLdkF0A4CBGcP1qpaN
BQf/nNAvYlkz70QTL91JkUL98X0Ih+O6IAPxT//C90VXwXTb2+XmBBYjA24/gHJn
kpv9ZzJfeCSCVoa019u3r/8pkMIfiN69GpO2FQTJCP4MbIJPHeANp2lYEA+KHPqE
XJvy0qh3YEs741KxKwzbaMgOTrYsoMvKhVeJZm0t5bpU5Y5TTF9fCVan8uJ8ke6d
buQej1iyBUvPq+gMQvJhwiP1Q2rvgxPmHP+L3Awo9tTqm6b7WsqdRq5K+B025v+d
NdZXKIPEQVY=
=7/vM
-----END PGP SIGNATURE-----
--
RHSA-announce mailing list
RHSA-announce@redhat.com
https://www.redhat.com/mailman/listinfo/rhsa-announce

These packages include redhat-release-virtualization-host. RHVH features a Cockpit user interface for monitoring the host's resources and performing administrative tasks.
Bug Fix(es):
* When performing an upgrade of the Red Hat Virtualization Host using the
command `yum update`, the yum repository for RHV 4.3 EUS is unreachable

As a workaround, run the following command:
`# yum update --releasever=7Server` (BZ#1899378)
4. Bugs fixed (https://bugzilla.redhat.com/):
1889686 - CVE-2020-25684 dnsmasq: loose address/port check in reply_query() makes forging replies easier for an off-path attacker
1889688 - CVE-2020-25685 dnsmasq: loose query name check in reply_query() makes forging replies easier for an off-path attacker
1890125 - CVE-2020-25686 dnsmasq: multiple queries forwarded for the same name makes forging replies easier for an off-path attacker
1899378 - rhel-7-server-rhvh-4.3-eus-rpms repo is unavailable
1916111 - Rebase RHV-H 4.3 EUS on RHEL 7.9.z #3
1917684 - CVE-2021-3156 sudo: Heap buffer overflow in argument parsing
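The heap overflow referenced throughout these advisories (CVE-2021-3156, "Baron Samedit") lives in sudo's set_cmnd() argument-concatenation loop: when a command-line argument ends in a lone backslash, the unescaping copy steps onto the string's NUL terminator, copies it, and keeps reading whatever follows on the heap, writing past a buffer sized only for the original arguments. The following Python sketch is a deliberate simplification of that C loop, not sudo's actual code; `adjacent_memory` is a stand-in for neighboring heap data:

```python
def buggy_unescape(argv, adjacent_memory):
    """Simplified model of sudo's set_cmnd() copy loop (CVE-2021-3156).

    The destination buffer is sized from the arguments, but when an
    argument ends in a lone backslash the escape check lands on the
    string terminator, copies it, and keeps reading past it.
    """
    alloc = sum(len(a) for a in argv) + len(argv)  # bytes "allocated"
    # Lay the arguments out back to back, NUL-terminated, like the heap.
    heap = "".join(a + "\0" for a in argv) + adjacent_memory + "\0\0"
    out, i = [], 0
    for _ in argv:
        while heap[i] != "\0":
            if heap[i] == "\\" and not heap[i + 1].isspace():
                i += 1            # skip the backslash ...
            out.append(heap[i])   # ... and copy the next byte, even if
            i += 1                # it was the NUL terminator (the bug)
        i += 1
        out.append(" ")
    return "".join(out), alloc

copied, limit = buggy_unescape(["sudoedit\\"], "SECRET")
print(len(copied) > limit, "SECRET" in copied)  # True True
```

With a well-formed argument list the copy stays within `alloc`; the trailing backslash is what lets the loop escape the terminator, which is why the fix in sudo 1.9.5p2 rejects that flag/argument combination.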
==========================================================================
Ubuntu Security Notice USN-4705-2
January 27, 2021

sudo vulnerability
==========================================================================

A security issue affects these releases of Ubuntu and its derivatives:
- Ubuntu 14.04 ESM
- Ubuntu 12.04 ESM
Summary:
Several security issues were fixed in Sudo. This update provides the corresponding update for Ubuntu 12.04 ESM and Ubuntu 14.04 ESM.
Original advisory details:
It was discovered that Sudo incorrectly handled memory when parsing command lines. A local attacker could possibly use this issue to obtain unintended access to the administrator account. (CVE-2021-3156)
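The widely circulated test for this issue (from the public Qualys advisory) is to run `sudoedit -s /` as a non-root user: a vulnerable build reports a file error beginning with "sudoedit:", while a patched build rejects the flag combination with a "usage:" message. A hedged Python wrapper around that check (output prefixes may vary across builds, so treat the result as a heuristic):

```python
import shutil
import subprocess

def check_sudoedit():
    """Heuristic check for CVE-2021-3156 via 'sudoedit -s /'."""
    if shutil.which("sudoedit") is None:
        return "sudoedit not installed"
    try:
        proc = subprocess.run(
            ["sudoedit", "-s", "/"],
            stdin=subprocess.DEVNULL, capture_output=True,
            text=True, timeout=5)
    except subprocess.TimeoutExpired:
        return "inconclusive (command timed out)"
    # Vulnerable builds print a file error; patched builds print usage.
    first = ((proc.stderr or proc.stdout).splitlines() or [""])[0]
    if first.startswith("sudoedit:"):
        return "likely vulnerable"
    if first.startswith("usage:"):
        return "likely patched"
    return "inconclusive: " + first

print(check_sudoedit())
```

This only observes argument-parsing behavior; it does not attempt exploitation, so it is safe to run on production hosts.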
Update instructions:
The problem can be corrected by updating your system to the following package versions:
Ubuntu 14.04 ESM: sudo 1.8.9p5-1ubuntu1.5+esm6
Ubuntu 12.04 ESM: sudo 1.8.3p1-1ubuntu3.10
In general, a standard system update will make all the necessary changes.
Show details on source website{ "@context": { "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#", "affected_products": { "@id": "https://www.variotdbs.pl/ref/affected_products" }, "configurations": { "@id": "https://www.variotdbs.pl/ref/configurations" }, "credits": { "@id": "https://www.variotdbs.pl/ref/credits" }, "cvss": { "@id": "https://www.variotdbs.pl/ref/cvss/" }, "description": { "@id": "https://www.variotdbs.pl/ref/description/" }, "exploit_availability": { "@id": "https://www.variotdbs.pl/ref/exploit_availability/" }, "external_ids": { "@id": "https://www.variotdbs.pl/ref/external_ids/" }, "iot": { "@id": "https://www.variotdbs.pl/ref/iot/" }, "iot_taxonomy": { "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/" }, "patch": { "@id": "https://www.variotdbs.pl/ref/patch/" }, "problemtype_data": { "@id": "https://www.variotdbs.pl/ref/problemtype_data/" }, "references": { "@id": "https://www.variotdbs.pl/ref/references/" }, "sources": { "@id": "https://www.variotdbs.pl/ref/sources/" }, "sources_release_date": { "@id": "https://www.variotdbs.pl/ref/sources_release_date/" }, "sources_update_date": { "@id": "https://www.variotdbs.pl/ref/sources_update_date/" }, "threat_type": { "@id": "https://www.variotdbs.pl/ref/threat_type/" }, "title": { "@id": "https://www.variotdbs.pl/ref/title/" }, "type": { "@id": "https://www.variotdbs.pl/ref/type/" } }, "@id": "https://www.variotdbs.pl/vuln/VAR-202101-1926", "affected_products": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/affected_products#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "model": "linux", "scope": "eq", "trust": 1.0, "vendor": "debian", "version": "10.0" }, { "model": "tekelec platform distribution", "scope": "lte", "trust": 1.0, "vendor": "oracle", "version": "7.7.1" }, { "model": "active iq unified manager", "scope": "eq", "trust": 
1.0, "vendor": "netapp", "version": null }, { "model": "ontap tools", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": "9" }, { "model": "oncommand unified manager core package", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "micros workstation 6", "scope": "gte", "trust": 1.0, "vendor": "oracle", "version": "610" }, { "model": "micros workstation 6", "scope": "lte", "trust": 1.0, "vendor": "oracle", "version": "655" }, { "model": "communications performance intelligence center", "scope": "lte", "trust": 1.0, "vendor": "oracle", "version": "10.4.0.3.1" }, { "model": "sudo", "scope": "eq", "trust": 1.0, "vendor": "sudo", "version": "1.9.5" }, { "model": "micros compact workstation 3", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "310" }, { "model": "fedora", "scope": "eq", "trust": 1.0, "vendor": "fedoraproject", "version": "33" }, { "model": "hci management node", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "web gateway", "scope": "eq", "trust": 1.0, "vendor": "mcafee", "version": "8.2.17" }, { "model": "tekelec platform distribution", "scope": "gte", "trust": 1.0, "vendor": "oracle", "version": "7.4.0" }, { "model": "ontap select deploy administration utility", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "privilege management for mac", "scope": "lt", "trust": 1.0, "vendor": "beyondtrust", "version": "21.1.1" }, { "model": "communications performance intelligence center", "scope": "gte", "trust": 1.0, "vendor": "oracle", "version": "10.4.0.1.0" }, { "model": "micros workstation 5a", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "5a" }, { "model": "sudo", "scope": "gte", "trust": 1.0, "vendor": "sudo", "version": "1.9.0" }, { "model": "skynas", "scope": "eq", "trust": 1.0, "vendor": "synology", "version": null }, { "model": "privilege management for unix\\/linux", "scope": "lt", "trust": 1.0, "vendor": "beyondtrust", 
"version": "10.3.2-10" }, { "model": "diskstation manager", "scope": "eq", "trust": 1.0, "vendor": "synology", "version": "6.2" }, { "model": "diskstation manager unified controller", "scope": "eq", "trust": 1.0, "vendor": "synology", "version": "3.0" }, { "model": "communications performance intelligence center", "scope": "lte", "trust": 1.0, "vendor": "oracle", "version": "10.3.0.2.1" }, { "model": "micros es400", "scope": "gte", "trust": 1.0, "vendor": "oracle", "version": "400" }, { "model": "sudo", "scope": "lt", "trust": 1.0, "vendor": "sudo", "version": "1.9.5" }, { "model": "linux", "scope": "eq", "trust": 1.0, "vendor": "debian", "version": "9.0" }, { "model": "communications performance intelligence center", "scope": "gte", "trust": 1.0, "vendor": "oracle", "version": "10.3.0.0.0" }, { "model": "web gateway", "scope": "eq", "trust": 1.0, "vendor": "mcafee", "version": "9.2.8" }, { "model": "cloud backup", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "vs960hd", "scope": "eq", "trust": 1.0, "vendor": "synology", "version": null }, { "model": "micros es400", "scope": "lte", "trust": 1.0, "vendor": "oracle", "version": "410" }, { "model": "micros kitchen display system", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "210" }, { "model": "solidfire", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "sudo", "scope": "gte", "trust": 1.0, "vendor": "sudo", "version": "1.8.2" }, { "model": "sudo", "scope": "lt", "trust": 1.0, "vendor": "sudo", "version": "1.8.32" }, { "model": "web gateway", "scope": "eq", "trust": 1.0, "vendor": "mcafee", "version": "10.0.4" }, { "model": "fedora", "scope": "eq", "trust": 1.0, "vendor": "fedoraproject", "version": "32" } ], "sources": [ { "db": "NVD", "id": "CVE-2021-3156" } ] }, "configurations": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/configurations#", "children": { "@container": "@list" }, "cpe_match": { "@container": "@list" }, 
"data": { "@container": "@list" }, "nodes": { "@container": "@list" } }, "data": [ { "CVE_data_version": "4.0", "nodes": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:sudo_project:sudo:1.9.5:patch1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:sudo_project:sudo:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "1.9.5", "versionStartIncluding": "1.9.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:sudo_project:sudo:1.9.5:-:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:sudo_project:sudo:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "1.8.32", "versionStartIncluding": "1.8.2", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:32:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:33:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:debian:debian_linux:9.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:debian:debian_linux:10.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:netapp:solidfire:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:hci_management_node:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:oncommand_unified_manager_core_package:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:mcafee:web_gateway:8.2.17:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:mcafee:web_gateway:9.2.8:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:mcafee:web_gateway:10.0.4:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, 
{ "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:synology:diskstation_manager:6.2:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:synology:diskstation_manager_unified_controller:3.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:synology:skynas_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:synology:skynas:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:synology:vs960hd_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:synology:vs960hd:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:beyondtrust:privilege_management_for_mac:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "21.1.1", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:beyondtrust:privilege_management_for_unix\\/linux:*:*:*:*:basic:*:*:*", "cpe_name": [], "versionEndExcluding": "10.3.2-10", "vulnerable": true } ], "operator": "OR" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:oracle:micros_compact_workstation_3_firmware:310:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:oracle:micros_compact_workstation_3:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:oracle:micros_es400_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "410", 
"versionStartIncluding": "400", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:oracle:micros_es400:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:oracle:micros_kitchen_display_system_firmware:210:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:oracle:micros_kitchen_display_system:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:oracle:micros_workstation_5a_firmware:5a:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:oracle:micros_workstation_5a:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:oracle:micros_workstation_6_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "655", "versionStartIncluding": "610", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:oracle:micros_workstation_6:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:oracle:tekelec_platform_distribution:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "7.7.1", "versionStartIncluding": "7.4.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:communications_performance_intelligence_center:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "10.4.0.3.1", "versionStartIncluding": "10.4.0.1.0", "vulnerable": true }, { "cpe23Uri": 
"cpe:2.3:a:oracle:communications_performance_intelligence_center:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "10.3.0.2.1", "versionStartIncluding": "10.3.0.0.0", "vulnerable": true } ], "operator": "OR" } ] } ], "sources": [ { "db": "NVD", "id": "CVE-2021-3156" } ] }, "credits": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/credits#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Red Hat", "sources": [ { "db": "PACKETSTORM", "id": "161139" }, { "db": "PACKETSTORM", "id": "162142" }, { "db": "PACKETSTORM", "id": "161143" }, { "db": "PACKETSTORM", "id": "161272" }, { "db": "PACKETSTORM", "id": "161138" } ], "trust": 0.5 }, "cve": "CVE-2021-3156", "cvss": { "@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2" }, "cvssV3": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, "severity": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": "https://www.variotdbs.pl/ref/cvss/severity" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "cvssV2": [ { "acInsufInfo": false, "accessComplexity": "LOW", "accessVector": "LOCAL", "authentication": "NONE", "author": "NVD", "availabilityImpact": "COMPLETE", "baseScore": 7.2, "confidentialityImpact": "COMPLETE", "exploitabilityScore": 3.9, "impactScore": 10.0, "integrityImpact": "COMPLETE", "obtainAllPrivilege": false, "obtainOtherPrivilege": false, "obtainUserPrivilege": false, "severity": "HIGH", "trust": 1.0, "userInteractionRequired": false, "vectorString": "AV:L/AC:L/Au:N/C:C/I:C/A:C", "version": "2.0" }, { "accessComplexity": "LOW", "accessVector": "LOCAL", 
"authentication": "NONE", "author": "VULHUB", "availabilityImpact": "COMPLETE", "baseScore": 7.2, "confidentialityImpact": "COMPLETE", "exploitabilityScore": 3.9, "id": "VHN-383931", "impactScore": 10.0, "integrityImpact": "COMPLETE", "severity": "HIGH", "trust": 0.1, "vectorString": "AV:L/AC:L/AU:N/C:C/I:C/A:C", "version": "2.0" } ], "cvssV3": [ { "attackComplexity": "LOW", "attackVector": "LOCAL", "author": "NVD", "availabilityImpact": "HIGH", "baseScore": 7.8, "baseSeverity": "HIGH", "confidentialityImpact": "HIGH", "exploitabilityScore": 1.8, "impactScore": 5.9, "integrityImpact": "HIGH", "privilegesRequired": "LOW", "scope": "UNCHANGED", "trust": 1.0, "userInteraction": "NONE", "vectorString": "CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H", "version": "3.1" } ], "severity": [ { "author": "NVD", "id": "CVE-2021-3156", "trust": 1.0, "value": "HIGH" }, { "author": "VULHUB", "id": "VHN-383931", "trust": 0.1, "value": "HIGH" } ] } ], "sources": [ { "db": "VULHUB", "id": "VHN-383931" }, { "db": "NVD", "id": "CVE-2021-3156" } ] }, "description": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Sudo before 1.9.5p2 contains an off-by-one error that can result in a heap-based buffer overflow, which allows privilege escalation to root via \"sudoedit -s\" and a command-line argument that ends with a single backslash character. Summary:\n\nRed Hat Ansible Automation Platform Resource Operator 1.2 (technical\npreview) images that fix several security issues. Description:\n\nRed Hat Ansible Automation Platform Resource Operator container images\nwith security fixes. \n\nAnsible Automation Platform manages Ansible Platform jobs and workflows\nthat can interface with any infrastructure on a Red Hat OpenShift Container\nPlatform cluster, or on a traditional infrastructure that is running\noff-cluster. 
Solution:\n\nBefore applying this update, make sure all previously released errata\nrelevant to your system have been applied. Bugs fixed (https://bugzilla.redhat.com/):\n\n1914774 - CVE-2021-20178 ansible: user data leak in snmp_facts module\n1915808 - CVE-2021-20180 ansible module: bitbucket_pipeline_variable exposes secured values\n1916813 - CVE-2021-20191 ansible: multiple modules expose secured values\n1925002 - CVE-2021-20228 ansible: basic.py no_log with fallback option\n1939349 - CVE-2021-3447 ansible: multiple modules expose secured values\n\n5. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n==================================================================== \nRed Hat Security Advisory\n\nSynopsis: Important: sudo security update\nAdvisory ID: RHSA-2021:0221-01\nProduct: Red Hat Enterprise Linux\nAdvisory URL: https://access.redhat.com/errata/RHSA-2021:0221\nIssue date: 2021-01-26\nCVE Names: CVE-2021-3156\n====================================================================\n1. Summary:\n\nAn update for sudo is now available for Red Hat Enterprise Linux 7. \n\nRed Hat Product Security has rated this update as having a security impact\nof Important. A Common Vulnerability Scoring System (CVSS) base score,\nwhich gives a detailed severity rating, is available for each vulnerability\nfrom the CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRed Hat Enterprise Linux Client (v. 7) - x86_64\nRed Hat Enterprise Linux Client Optional (v. 7) - x86_64\nRed Hat Enterprise Linux ComputeNode (v. 7) - x86_64\nRed Hat Enterprise Linux ComputeNode Optional (v. 7) - x86_64\nRed Hat Enterprise Linux Server (v. 7) - ppc64, ppc64le, s390x, x86_64\nRed Hat Enterprise Linux Server Optional (v. 7) - ppc64, ppc64le, s390x, x86_64\nRed Hat Enterprise Linux Workstation (v. 7) - x86_64\nRed Hat Enterprise Linux Workstation Optional (v. 7) - x86_64\n\n3. 
Description:\n\nThe sudo packages contain the sudo utility which allows system\nadministrators to provide certain users with the permission to execute\nprivileged commands, which are used for system management purposes, without\nhaving to log in as root. \n\nSecurity Fix(es):\n\n* sudo: Heap buffer overflow in argument parsing (CVE-2021-3156)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\n4. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n1917684 - CVE-2021-3156 sudo: Heap buffer overflow in argument parsing\n\n6. Package List:\n\nRed Hat Enterprise Linux Client (v. 7):\n\nSource:\nsudo-1.8.23-10.el7_9.1.src.rpm\n\nx86_64:\nsudo-1.8.23-10.el7_9.1.x86_64.rpm\nsudo-debuginfo-1.8.23-10.el7_9.1.x86_64.rpm\n\nRed Hat Enterprise Linux Client Optional (v. 7):\n\nx86_64:\nsudo-debuginfo-1.8.23-10.el7_9.1.i686.rpm\nsudo-debuginfo-1.8.23-10.el7_9.1.x86_64.rpm\nsudo-devel-1.8.23-10.el7_9.1.i686.rpm\nsudo-devel-1.8.23-10.el7_9.1.x86_64.rpm\n\nRed Hat Enterprise Linux ComputeNode (v. 7):\n\nSource:\nsudo-1.8.23-10.el7_9.1.src.rpm\n\nx86_64:\nsudo-1.8.23-10.el7_9.1.x86_64.rpm\nsudo-debuginfo-1.8.23-10.el7_9.1.x86_64.rpm\n\nRed Hat Enterprise Linux ComputeNode Optional (v. 7):\n\nx86_64:\nsudo-debuginfo-1.8.23-10.el7_9.1.i686.rpm\nsudo-debuginfo-1.8.23-10.el7_9.1.x86_64.rpm\nsudo-devel-1.8.23-10.el7_9.1.i686.rpm\nsudo-devel-1.8.23-10.el7_9.1.x86_64.rpm\n\nRed Hat Enterprise Linux Server (v. 
7):\n\nSource:\nsudo-1.8.23-10.el7_9.1.src.rpm\n\nppc64:\nsudo-1.8.23-10.el7_9.1.ppc64.rpm\nsudo-debuginfo-1.8.23-10.el7_9.1.ppc64.rpm\n\nppc64le:\nsudo-1.8.23-10.el7_9.1.ppc64le.rpm\nsudo-debuginfo-1.8.23-10.el7_9.1.ppc64le.rpm\n\ns390x:\nsudo-1.8.23-10.el7_9.1.s390x.rpm\nsudo-debuginfo-1.8.23-10.el7_9.1.s390x.rpm\n\nx86_64:\nsudo-1.8.23-10.el7_9.1.x86_64.rpm\nsudo-debuginfo-1.8.23-10.el7_9.1.x86_64.rpm\n\nRed Hat Enterprise Linux Server Optional (v. 7):\n\nppc64:\nsudo-debuginfo-1.8.23-10.el7_9.1.ppc.rpm\nsudo-debuginfo-1.8.23-10.el7_9.1.ppc64.rpm\nsudo-devel-1.8.23-10.el7_9.1.ppc.rpm\nsudo-devel-1.8.23-10.el7_9.1.ppc64.rpm\n\nppc64le:\nsudo-debuginfo-1.8.23-10.el7_9.1.ppc64le.rpm\nsudo-devel-1.8.23-10.el7_9.1.ppc64le.rpm\n\ns390x:\nsudo-debuginfo-1.8.23-10.el7_9.1.s390.rpm\nsudo-debuginfo-1.8.23-10.el7_9.1.s390x.rpm\nsudo-devel-1.8.23-10.el7_9.1.s390.rpm\nsudo-devel-1.8.23-10.el7_9.1.s390x.rpm\n\nx86_64:\nsudo-debuginfo-1.8.23-10.el7_9.1.i686.rpm\nsudo-debuginfo-1.8.23-10.el7_9.1.x86_64.rpm\nsudo-devel-1.8.23-10.el7_9.1.i686.rpm\nsudo-devel-1.8.23-10.el7_9.1.x86_64.rpm\n\nRed Hat Enterprise Linux Workstation (v. 7):\n\nSource:\nsudo-1.8.23-10.el7_9.1.src.rpm\n\nx86_64:\nsudo-1.8.23-10.el7_9.1.x86_64.rpm\nsudo-debuginfo-1.8.23-10.el7_9.1.x86_64.rpm\n\nRed Hat Enterprise Linux Workstation Optional (v. 7):\n\nx86_64:\nsudo-debuginfo-1.8.23-10.el7_9.1.i686.rpm\nsudo-debuginfo-1.8.23-10.el7_9.1.x86_64.rpm\nsudo-devel-1.8.23-10.el7_9.1.i686.rpm\nsudo-devel-1.8.23-10.el7_9.1.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. References:\n\nhttps://access.redhat.com/security/cve/CVE-2021-3156\nhttps://access.redhat.com/security/updates/classification/#important\nhttps://access.redhat.com/security/vulnerabilities/RHSB-2021-002\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. 
More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2021 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYBB9QtzjgjWX9erEAQjMkQ/+PUDUX16Tnzqt7l1CsDAkHsT89EyY1keR\n5XAlnrEv0nfw+/Feb2zhjlAlGbZSE1pTHOB4WarZzz2edZW5PRDw2SnljPToGoF2\n6e4rlxRMJzFzc1WiOl5VgIq2LsOrqE1x3smwx7UGloMNmld/wgNKzFyddlR3ya0/\nk78GAgUD2K/riILpeSG9M3jkK6IX/ecAOV8cK4GnmVAyrc/I0ud+wp+AFaQdKOUd\nDJ08C4ktxCEDZnCMV7X0fheoVB08T2VUPqM3AT0mP8Q07RWElFNAYYzS0/0ABGdd\nG/bRXDOiP0Qp92gMjWi4zu8JJk1Yyt8vnXII30gr2dd4f/8O0X6N+fntkhpc86N0\nmdXrPNBDXC6YJqahqtTH3ZMNWj37kSX5O0QIxRMMySIuPEhLdkF0A4CBGcP1qpaN\nBQf/nNAvYlkz70QTL91JkUL98X0Ih+O6IAPxT//C90VXwXTb2+XmBBYjA24/gHJn\nkpv9ZzJfeCSCVoa019u3r/8pkMIfiN69GpO2FQTJCP4MbIJPHeANp2lYEA+KHPqE\nXJvy0qh3YEs741KxKwzbaMgOTrYsoMvKhVeJZm0t5bpU5Y5TTF9fCVan8uJ8ke6d\nbuQej1iyBUvPq+gMQvJhwiP1Q2rvgxPmHP+L3Awo9tTqm6b7WsqdRq5K+B025v+d\nNdZXKIPEQVY=7/vM\n-----END PGP SIGNATURE-----\n\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://www.redhat.com/mailman/listinfo/rhsa-announce\n. These packages include redhat-release-virtualization-host. \nRHVH features a Cockpit user interface for monitoring the host\u0027s resources\nand performing administrative tasks. \n\nBug Fix(es):\n\n* When performing an upgrade of the Red Hat Virtualization Host using the\ncommand `yum update`, the yum repository for RHV 4.3 EUS is unreachable\n\nAs a workaround, run the following command:\n`# yum update --releasever=7Server` (BZ#1899378)\n\n4. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1889686 - CVE-2020-25684 dnsmasq: loose address/port check in reply_query() makes forging replies easier for an off-path attacker\n1889688 - CVE-2020-25685 dnsmasq: loose query name check in reply_query() makes forging replies easier for an off-path attacker\n1890125 - CVE-2020-25686 dnsmasq: multiple queries forwarded for the same name makes forging replies easier for an off-path attacker\n1899378 - rhel-7-server-rhvh-4.3-eus-rpms repo is unavailable\n1916111 - Rebase RHV-H 4.3 EUS on RHEL 7.9.z #3\n1917684 - CVE-2021-3156 sudo: Heap buffer overflow in argument parsing\n\n6. ==========================================================================\nUbuntu Security Notice USN-4705-2\nJanuary 27, 2021\n\nsudo vulnerability\n==========================================================================\n\nA security issue affects these releases of Ubuntu and its derivatives:\n\n- Ubuntu 14.04 ESM\n- Ubuntu 12.04 ESM\n\nSummary:\n\nSeveral security issues were fixed in Sudo. This update provides\nthe corresponding update for Ubuntu 12.04 ESM and Ubuntu 14.04 ESM. \n\nOriginal advisory details:\n\n It was discovered that Sudo incorrectly handled memory when parsing command\n lines. A local attacker could possibly use this issue to obtain unintended\n access to the administrator account. (CVE-2021-3156)\n\nUpdate instructions:\n\nThe problem can be corrected by updating your system to the following\npackage versions:\n\nUbuntu 14.04 ESM:\n sudo 1.8.9p5-1ubuntu1.5+esm6\n\nUbuntu 12.04 ESM:\n sudo 1.8.3p1-1ubuntu3.10\n\nIn general, a standard system update will make all the necessary changes. 
8) - aarch64, ppc64le, s390x, x86_64\n\n3", "sources": [ { "db": "NVD", "id": "CVE-2021-3156" }, { "db": "VULHUB", "id": "VHN-383931" }, { "db": "PACKETSTORM", "id": "161139" }, { "db": "PACKETSTORM", "id": "162142" }, { "db": "PACKETSTORM", "id": "161143" }, { "db": "PACKETSTORM", "id": "161272" }, { "db": "PACKETSTORM", "id": "161163" }, { "db": "PACKETSTORM", "id": "161138" } ], "trust": 1.53 }, "exploit_availability": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/exploit_availability#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "reference": "https://www.scap.org.cn/vuln/vhn-383931", "trust": 0.1, "type": "unknown" } ], "sources": [ { "db": "VULHUB", "id": "VHN-383931" } ] }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "NVD", "id": "CVE-2021-3156", "trust": 1.7 }, { "db": "PACKETSTORM", "id": "161230", "trust": 1.1 }, { "db": "PACKETSTORM", "id": "161160", "trust": 1.1 }, { "db": "PACKETSTORM", "id": "161270", "trust": 1.1 }, { "db": "PACKETSTORM", "id": "161293", "trust": 1.1 }, { "db": "MCAFEE", "id": "SB10348", "trust": 1.1 }, { "db": "OPENWALL", "id": "OSS-SECURITY/2021/01/27/2", "trust": 1.1 }, { "db": "OPENWALL", "id": "OSS-SECURITY/2021/01/26/3", "trust": 1.1 }, { "db": "OPENWALL", "id": "OSS-SECURITY/2021/02/15/1", "trust": 1.1 }, { "db": "OPENWALL", "id": "OSS-SECURITY/2021/01/27/1", "trust": 1.1 }, { "db": "OPENWALL", "id": "OSS-SECURITY/2021/09/14/2", "trust": 1.1 }, { "db": "CERT/CC", "id": "VU#794544", "trust": 1.1 }, { "db": "OPENWALL", "id": "OSS-SECURITY/2024/01/30/8", "trust": 1.0 }, { "db": "OPENWALL", "id": "OSS-SECURITY/2024/01/30/6", "trust": 1.0 }, { "db": "PACKETSTORM", "id": "176932", "trust": 1.0 }, { "db": 
"PACKETSTORM", "id": "161163", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "161143", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "161138", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "161272", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "161139", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "161141", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "161152", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "161144", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "161140", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "161142", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "161398", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "161136", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "161135", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "161281", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "161137", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "161145", "trust": 0.1 }, { "db": "SEEBUG", "id": "SSVID-99117", "trust": 0.1 }, { "db": "VULHUB", "id": "VHN-383931", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "162142", "trust": 0.1 } ], "sources": [ { "db": "VULHUB", "id": "VHN-383931" }, { "db": "PACKETSTORM", "id": "161139" }, { "db": "PACKETSTORM", "id": "162142" }, { "db": "PACKETSTORM", "id": "161143" }, { "db": "PACKETSTORM", "id": "161272" }, { "db": "PACKETSTORM", "id": "161163" }, { "db": "PACKETSTORM", "id": "161138" }, { "db": "NVD", "id": "CVE-2021-3156" } ] }, "id": "VAR-202101-1926", "iot": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": true, "sources": [ { "db": "VULHUB", "id": "VHN-383931" } ], "trust": 0.01 }, "last_update_date": "2024-07-23T20:13:02.874000Z", "problemtype_data": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "problemtype": "CWE-193", "trust": 1.1 } ], "sources": [ { "db": 
"VULHUB", "id": "VHN-383931" }, { "db": "NVD", "id": "CVE-2021-3156" } ] }, "references": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/references#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 2.2, "url": "http://www.openwall.com/lists/oss-security/2021/01/26/3" }, { "trust": 1.1, "url": "https://www.kb.cert.org/vuls/id/794544" }, { "trust": 1.1, "url": "https://tools.cisco.com/security/center/content/ciscosecurityadvisory/cisco-sa-sudo-privesc-jan2021-qnyqfcm" }, { "trust": 1.1, "url": "https://security.netapp.com/advisory/ntap-20210128-0001/" }, { "trust": 1.1, "url": "https://security.netapp.com/advisory/ntap-20210128-0002/" }, { "trust": 1.1, "url": "https://support.apple.com/kb/ht212177" }, { "trust": 1.1, "url": "https://www.sudo.ws/stable.html#1.9.5p2" }, { "trust": 1.1, "url": "https://www.synology.com/security/advisory/synology_sa_21_02" }, { "trust": 1.1, "url": "https://www.debian.org/security/2021/dsa-4839" }, { "trust": 1.1, "url": "http://seclists.org/fulldisclosure/2021/jan/79" }, { "trust": 1.1, "url": "http://seclists.org/fulldisclosure/2021/feb/42" }, { "trust": 1.1, "url": "https://security.gentoo.org/glsa/202101-33" }, { "trust": 1.1, "url": "http://packetstormsecurity.com/files/161160/sudo-heap-based-buffer-overflow.html" }, { "trust": 1.1, "url": "http://packetstormsecurity.com/files/161230/sudo-buffer-overflow-privilege-escalation.html" }, { "trust": 1.1, "url": "http://packetstormsecurity.com/files/161270/sudo-1.9.5p1-buffer-overflow-privilege-escalation.html" }, { "trust": 1.1, "url": "http://packetstormsecurity.com/files/161293/sudo-1.8.31p2-1.9.5p1-buffer-overflow.html" }, { "trust": 1.1, "url": "https://www.beyondtrust.com/blog/entry/security-advisory-privilege-management-for-unix-linux-pmul-basic-and-privilege-management-for-mac-pmm-affected-by-sudo-vulnerability" }, { "trust": 1.1, "url": 
"https://www.oracle.com//security-alerts/cpujul2021.html" }, { "trust": 1.1, "url": "https://www.oracle.com/security-alerts/cpuapr2022.html" }, { "trust": 1.1, "url": "https://www.oracle.com/security-alerts/cpuoct2021.html" }, { "trust": 1.1, "url": "https://lists.debian.org/debian-lts-announce/2021/01/msg00022.html" }, { "trust": 1.1, "url": "http://www.openwall.com/lists/oss-security/2021/01/27/1" }, { "trust": 1.1, "url": "http://www.openwall.com/lists/oss-security/2021/01/27/2" }, { "trust": 1.1, "url": "http://www.openwall.com/lists/oss-security/2021/02/15/1" }, { "trust": 1.1, "url": "http://www.openwall.com/lists/oss-security/2021/09/14/2" }, { "trust": 1.0, "url": "http://packetstormsecurity.com/files/176932/glibc-syslog-heap-based-buffer-overflow.html" }, { "trust": 1.0, "url": "http://seclists.org/fulldisclosure/2024/feb/3" }, { "trust": 1.0, "url": "http://www.openwall.com/lists/oss-security/2024/01/30/6" }, { "trust": 1.0, "url": "http://www.openwall.com/lists/oss-security/2024/01/30/8" }, { "trust": 1.0, "url": "https://kc.mcafee.com/corporate/index?page=content\u0026id=sb10348" }, { "trust": 1.0, "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/cala5ftxiqbrryua2zqnjxb6oqmaxeii/" }, { "trust": 1.0, "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/lhxk6ico5aylgfk2tax5mzkuxtukwojy/" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2021-3156" }, { "trust": 0.5, "url": "https://bugzilla.redhat.com/):" }, { "trust": 0.5, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3156" }, { "trust": 0.5, "url": "https://access.redhat.com/security/team/contact/" }, { "trust": 0.4, "url": "https://www.redhat.com/mailman/listinfo/rhsa-announce" }, { "trust": 0.4, "url": "https://access.redhat.com/security/vulnerabilities/rhsb-2021-002" }, { "trust": 0.4, "url": "https://access.redhat.com/security/team/key/" }, { "trust": 0.4, "url": 
"https://access.redhat.com/articles/11258" }, { "trust": 0.4, "url": "https://access.redhat.com/security/updates/classification/#important" }, { "trust": 0.1, "url": "https://kc.mcafee.com/corporate/index?page=content\u0026amp;id=sb10348" }, { "trust": 0.1, "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/lhxk6ico5aylgfk2tax5mzkuxtukwojy/" }, { "trust": 0.1, "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/cala5ftxiqbrryua2zqnjxb6oqmaxeii/" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:0225" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17006" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-20907" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:1079" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-5188" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-12749" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-8625" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2017-12652" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-12401" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-12402" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-1971" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-14866" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-15999" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-20228" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-7595" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-20843" }, { "trust": 0.1, "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-17006" }, { "trust": 0.1, 
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-17546" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-11719" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20388" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-12401" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-14973" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-17546" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-17023" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17023" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-12243" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-12749" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-6829" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-14866" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-8177" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-12403" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2017-12652" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-12400" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-20388" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3447" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19956" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-11756" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-11756" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-12243" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-12400" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-5313" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-20191" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2019-11727" }, { "trust": 0.1, "url": "https://access.redhat.com/security/updates/classification/#moderate" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-1971" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-11719" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-5094" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-20180" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-11727" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-12403" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-5188" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-15903" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15999" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-5094" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-15903" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-14973" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-19956" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-5313" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-17498" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-14422" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17498" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20907" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-20178" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14422" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-20843" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-12402" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:0221" }, { "trust": 0.1, "url": 
"https://access.redhat.com/articles/2974891" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-25686" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-25685" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-25684" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-25685" }, { "trust": 0.1, "url": "https://access.redhat.com/security/vulnerabilities/rhsb-2021-001" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-25686" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-25684" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:0395" }, { "trust": 0.1, "url": "https://usn.ubuntu.com/4705-2" }, { "trust": 0.1, "url": "https://usn.ubuntu.com/4705-1" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:0218" } ], "sources": [ { "db": "VULHUB", "id": "VHN-383931" }, { "db": "PACKETSTORM", "id": "161139" }, { "db": "PACKETSTORM", "id": "162142" }, { "db": "PACKETSTORM", "id": "161143" }, { "db": "PACKETSTORM", "id": "161272" }, { "db": "PACKETSTORM", "id": "161163" }, { "db": "PACKETSTORM", "id": "161138" }, { "db": "NVD", "id": "CVE-2021-3156" } ] }, "sources": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "VULHUB", "id": "VHN-383931" }, { "db": "PACKETSTORM", "id": "161139" }, { "db": "PACKETSTORM", "id": "162142" }, { "db": "PACKETSTORM", "id": "161143" }, { "db": "PACKETSTORM", "id": "161272" }, { "db": "PACKETSTORM", "id": "161163" }, { "db": "PACKETSTORM", "id": "161138" }, { "db": "NVD", "id": "CVE-2021-3156" } ] }, "sources_release_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2021-01-26T00:00:00", "db": "VULHUB", "id": "VHN-383931" }, { "date": "2021-01-27T14:06:12", "db": "PACKETSTORM", "id": "161139" }, { "date": 
"2021-04-09T15:06:13", "db": "PACKETSTORM", "id": "162142" }, { "date": "2021-01-27T14:06:46", "db": "PACKETSTORM", "id": "161143" }, { "date": "2021-02-03T16:22:29", "db": "PACKETSTORM", "id": "161272" }, { "date": "2021-01-28T13:59:34", "db": "PACKETSTORM", "id": "161163" }, { "date": "2021-01-27T14:06:02", "db": "PACKETSTORM", "id": "161138" }, { "date": "2021-01-26T21:15:12.987000", "db": "NVD", "id": "CVE-2021-3156" } ] }, "sources_update_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2022-09-03T00:00:00", "db": "VULHUB", "id": "VHN-383931" }, { "date": "2024-07-09T18:27:53.967000", "db": "NVD", "id": "CVE-2021-3156" } ] }, "threat_type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/threat_type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "local", "sources": [ { "db": "PACKETSTORM", "id": "161163" } ], "trust": 0.1 }, "title": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/title#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Red Hat Security Advisory 2021-0225-01", "sources": [ { "db": "PACKETSTORM", "id": "161139" } ], "trust": 0.1 }, "type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "overflow, root", "sources": [ { "db": "PACKETSTORM", "id": "161139" }, { "db": "PACKETSTORM", "id": "161143" }, { "db": "PACKETSTORM", "id": "161138" } ], "trust": 0.3 } }
var-202103-0479
Vulnerability from variot
There is an open race window when writing output in the following utilities in GNU binutils version 2.35 and earlier: ar, objcopy, strip, ranlib. When these utilities are run as a privileged user (presumably as part of a script updating binaries across different users), an unprivileged user can trick these utilities into getting ownership of arbitrary files through a symlink. GNU binutils contains a link interpretation vulnerability. Information may be obtained and information may be tampered with. GNU Binutils (GNU Binary Utilities or binutils) is a collection of programming tools developed by the GNU community. The tools are primarily designed to handle object files in various formats and provide linkers, assemblers, and other utilities for object files and archives. An access control error vulnerability exists in GNU binutils that allows smart_rename() to bypass access restrictions, allowing an attacker to read or change data. Bugs fixed (https://bugzilla.redhat.com/):
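The race described above is a classic TOCTOU: the utility writes its output under a predictable name and then applies ownership or permissions by path, leaving a window in which another user can swap in a symlink so the metadata change lands on an arbitrary file. The sketch below illustrates the general class of fix (write to an unpredictable temp name, apply metadata to the open file descriptor, then atomically rename); it is a minimal illustration in Python, not binutils' actual smart_rename() code.

```python
import os
import tempfile

def write_output(path: str, data: bytes, mode: int = 0o644) -> None:
    """Replace `path` without the symlink race (illustrative sketch only)."""
    directory = os.path.dirname(os.path.abspath(path))
    # Create the output under an unpredictable temporary name in the
    # destination directory, so an attacker cannot pre-place a symlink there.
    fd, tmp = tempfile.mkstemp(dir=directory)
    try:
        os.write(fd, data)
        # Apply metadata to the open descriptor, not to a path name,
        # so a symlink swapped in by another user is never followed.
        os.fchmod(fd, mode)
    finally:
        os.close(fd)
    # rename() replaces the directory entry `path` itself; if an attacker
    # has turned `path` into a symlink, the symlink is overwritten, not
    # the file it points to.
    os.rename(tmp, path)
```

The key design point is that every privileged operation happens either on a file descriptor the process itself created or via an atomic rename that never dereferences the destination name.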
2030932 - CVE-2021-44228 log4j-core: Remote code execution in Log4j 2.x when logs contain an attacker-controlled string value
- -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
====================================================================
Red Hat Security Advisory
Synopsis: Moderate: binutils security update
Advisory ID: RHSA-2021:4364-01
Product: Red Hat Enterprise Linux
Advisory URL: https://access.redhat.com/errata/RHSA-2021:4364
Issue date: 2021-11-09
CVE Names: CVE-2020-35448 CVE-2021-3487 CVE-2021-20197 CVE-2021-20284
====================================================================
1. Summary:
An update for binutils is now available for Red Hat Enterprise Linux 8.
Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Relevant releases/architectures:
Red Hat Enterprise Linux AppStream (v. 8) - aarch64, ppc64le, s390x, x86_64 Red Hat Enterprise Linux BaseOS (v. 8) - aarch64, ppc64le, s390x, x86_64
- Description:
The binutils packages provide a collection of binary utilities for the manipulation of object code in various object file formats. It includes the ar, as, gprof, ld, nm, objcopy, objdump, ranlib, readelf, size, strings, strip, and addr2line utilities.
Security Fix(es):
- binutils: Excessive debug section size can cause excessive memory consumption in bfd's dwarf2.c read_section() (CVE-2021-3487)
- binutils: Race window allows users to own arbitrary files (CVE-2021-20197)
- binutils: Heap-based buffer overflow in bfd_getl_signed_32() in libbfd.c because sh_entsize is not validated in _bfd_elf_slurp_secondary_reloc_section() in elf.c (CVE-2020-35448)
- binutils: Heap-based buffer overflow in _bfd_elf_slurp_secondary_reloc_section in elf.c (CVE-2021-20284)
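The two heap overflows above stem from trusting size fields (such as sh_entsize) read straight out of the input file when computing entry counts and buffer sizes. The fix class is to validate the header fields before deriving anything from them. A hypothetical sketch of that kind of check (names are illustrative, not bfd's actual API):

```python
def secondary_reloc_count(sh_size: int, sh_entsize: int,
                          expected_entsize: int) -> int:
    """Bounds-check section header fields from an untrusted file before
    deriving an entry count (illustrative sketch, not bfd code)."""
    # A corrupt or hostile header can carry a zero or bogus entry size;
    # reject it before it can drive a buffer-size computation.
    if sh_entsize != expected_entsize:
        raise ValueError("unexpected sh_entsize")
    if sh_size % sh_entsize != 0:
        raise ValueError("section size not a multiple of entry size")
    return sh_size // sh_entsize
```

Rejecting the header outright is preferable to clamping, since a malformed section is evidence of corruption or attack rather than a value worth repairing.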
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
Additional Changes:
For detailed information on changes in this release, see the Red Hat Enterprise Linux 8.5 Release Notes linked from the References section.
- Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/11258
- Bugs fixed (https://bugzilla.redhat.com/):
1913743 - CVE-2021-20197 binutils: Race window allows users to own arbitrary files
1924068 - binutils debuginfo misses code for bfd functions
1930988 - Backport breaks building with LTO
1935785 - Linker garbage collection removes weak alias references (possibly "regression" of bz1804325)
1937784 - CVE-2021-20284 binutils: Heap-based buffer overflow in _bfd_elf_slurp_secondary_reloc_section in elf.c
1946518 - binutils-2.30-98 are causing go binaries to crash due to segmentation fault on aarch64
1946977 - pthread_join segfaults in stack unwinding
1947111 - CVE-2021-3487 binutils: Excessive debug section size can cause excessive memory consumption in bfd's dwarf2.c read_section()
1950478 - CVE-2020-35448 binutils: Heap-based buffer overflow in bfd_getl_signed_32() in libbfd.c because sh_entsize is not validated in _bfd_elf_slurp_secondary_reloc_section() in elf.c
1969775 - /usr/bin/ld: Dwarf Error: Offset (2487097600) greater than or equal to .debug_str size (571933).
- Package List:
Red Hat Enterprise Linux AppStream (v. 8):
aarch64: binutils-debuginfo-2.30-108.el8.aarch64.rpm binutils-debugsource-2.30-108.el8.aarch64.rpm binutils-devel-2.30-108.el8.aarch64.rpm
ppc64le: binutils-debuginfo-2.30-108.el8.ppc64le.rpm binutils-debugsource-2.30-108.el8.ppc64le.rpm binutils-devel-2.30-108.el8.ppc64le.rpm
s390x: binutils-debuginfo-2.30-108.el8.s390x.rpm binutils-debugsource-2.30-108.el8.s390x.rpm binutils-devel-2.30-108.el8.s390x.rpm
x86_64: binutils-debuginfo-2.30-108.el8.i686.rpm binutils-debuginfo-2.30-108.el8.x86_64.rpm binutils-debugsource-2.30-108.el8.i686.rpm binutils-debugsource-2.30-108.el8.x86_64.rpm binutils-devel-2.30-108.el8.i686.rpm binutils-devel-2.30-108.el8.x86_64.rpm
Red Hat Enterprise Linux BaseOS (v. 8):
Source: binutils-2.30-108.el8.src.rpm
aarch64: binutils-2.30-108.el8.aarch64.rpm binutils-debuginfo-2.30-108.el8.aarch64.rpm binutils-debugsource-2.30-108.el8.aarch64.rpm
ppc64le: binutils-2.30-108.el8.ppc64le.rpm binutils-debuginfo-2.30-108.el8.ppc64le.rpm binutils-debugsource-2.30-108.el8.ppc64le.rpm
s390x: binutils-2.30-108.el8.s390x.rpm binutils-debuginfo-2.30-108.el8.s390x.rpm binutils-debugsource-2.30-108.el8.s390x.rpm
x86_64: binutils-2.30-108.el8.x86_64.rpm binutils-debuginfo-2.30-108.el8.x86_64.rpm binutils-debugsource-2.30-108.el8.x86_64.rpm
These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
- References:
https://access.redhat.com/security/cve/CVE-2020-35448
https://access.redhat.com/security/cve/CVE-2021-3487
https://access.redhat.com/security/cve/CVE-2021-20197
https://access.redhat.com/security/cve/CVE-2021-20284
https://access.redhat.com/security/updates/classification/#moderate
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.5_release_notes/
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2021 Red Hat, Inc. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1
iQIVAwUBYYrcWdzjgjWX9erEAQgOuA//ddTY+J3xDL8Z+2Gi+qcbItkoW0B8nKrt hqWmx6c/KlhAtLnAbIh18N+1uPMAXGNZcKHtCJfFSIAP3B71jDBqA+CRqlhiapmg ze4qYNpUwBg0e2c/6w0V5GYhIXpdsyiKXTpjmnaxnzW61tiCCWFBZoWpzJjSId1X yR7vHjDaXT1CZl0fHS/5Y9NfK/7jjgkJv7U7wcUxEsy6bMQIzM0nMLZauVmIrsC0 vu1bhQifEJH1mnoykfnlRVSEe+qGMrEtnOCnos8GTGChmVt4bgogpb5oE4JFm+bs ufjpRwSC1X5XRv9aqTX/ixIFLCeFpZkYhFLUlZqYHNKRcRlcqz5MLFA6KYdTj9zt 2ygqd5o26ml7gVHyA+BGE/pzd5m9YTzNvrWbC/ZV6loHM1nHUIBW/Y+hneSWTCkH x1LCmTnYxyPz0ZjySbCy03SJPrRewe/xPlxJlCmqLfVh+hEvCHsSw9hnYC3+pvMB xIl5HNf34dc/lJsPXo65owsDNcTlKF7gfVG3eKjcNnu1Uh9LzCYG8PKMtougZgV3 mAviF8MhgWVLXJTo6BXtF605ivViFoyis0bFJCV6uihV+nfAesWVN3rnIeDMh2sV EA9zQyxzy2nQsDMJ4eLV5ckrl7YzGsJt+B9jwLXbGkpjQm+bCrds41k9gLjQEiHE Vm3qGf43D60+Ds -----END PGP SIGNATURE-----
-- RHSA-announce mailing list RHSA-announce@redhat.com https://listman.redhat.com/mailman/listinfo/rhsa-announce . Solution:
For OpenShift Container Platform 4.9 see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this errata update:
https://docs.openshift.com/container-platform/4.9/release_notes/ocp-4-9-release-notes.html
For Red Hat OpenShift Logging 5.3, see the following instructions to apply this update:
https://docs.openshift.com/container-platform/4.7/logging/cluster-logging-upgrading.html
- Bugs fixed (https://bugzilla.redhat.com/):
1963232 - CVE-2021-33194 golang: x/net/html: infinite loop in ParseFragment
- JIRA issues fixed (https://issues.jboss.org/):
LOG-1168 - Disable hostname verification in syslog TLS settings
LOG-1235 - Using HTTPS without a secret does not translate into the correct 'scheme' value in Fluentd
LOG-1375 - ssl_ca_cert should be optional
LOG-1378 - CLO should support sasl_plaintext(Password over http)
LOG-1392 - In fluentd config, flush_interval can't be set with flush_mode=immediate
LOG-1494 - Syslog output is serializing json incorrectly
LOG-1555 - Fluentd logs emit transaction failed: error_class=NoMethodError while forwarding to external syslog server
LOG-1575 - Rejected by Elasticsearch and unexpected json-parsing
LOG-1735 - Regression introducing flush_at_shutdown
LOG-1774 - The collector logs should be excluded in fluent.conf
LOG-1776 - fluentd total_limit_size sets value beyond available space
LOG-1822 - OpenShift Alerting Rules Style-Guide Compliance
LOG-1859 - CLO Should not error and exit early on missing ca-bundle when cluster wide proxy is not enabled
LOG-1862 - Unsupported kafka parameters when enabled Kafka SASL
LOG-1903 - Fix the Display of ClusterLogging type in OLM
LOG-1911 - CLF API changes to Opt-in to multiline error detection
LOG-1918 - Alert FluentdNodeDown always firing
LOG-1939 - Opt-in multiline detection breaks cloudwatch forwarding
- Gentoo Linux Security Advisory GLSA 202208-30
https://security.gentoo.org/
Severity: Normal Title: GNU Binutils: Multiple Vulnerabilities Date: August 14, 2022 Bugs: #778545, #792342, #829304 ID: 202208-30
Synopsis
Multiple vulnerabilities have been discovered in Binutils, the worst of which could result in denial of service.
Background
The GNU Binutils are a collection of tools to create, modify and analyse binary files. Many of the files use BFD, the Binary File Descriptor library, to do low-level manipulation.
Affected packages
-------------------------------------------------------------------
Package / Vulnerable / Unaffected
-------------------------------------------------------------------
1 sys-devel/binutils < 2.38 >= 2.38
2 sys-libs/binutils-libs < 2.38 >= 2.38
Description
Multiple vulnerabilities have been discovered in GNU Binutils. Please review the CVE identifiers referenced below for details.
Impact
Please review the referenced CVE identifiers for details.
Workaround
There is no known workaround at this time.
Resolution
All Binutils users should upgrade to the latest version:
# emerge --sync # emerge --ask --oneshot --verbose ">=sys-devel/binutils-2.38"
All Binutils library users should upgrade to the latest version:
# emerge --sync # emerge --ask --oneshot --verbose ">=sys-libs/binutils-libs-2.38"
References
[ 1 ] CVE-2021-3487 https://nvd.nist.gov/vuln/detail/CVE-2021-3487 [ 2 ] CVE-2021-3530 https://nvd.nist.gov/vuln/detail/CVE-2021-3530 [ 3 ] CVE-2021-3549 https://nvd.nist.gov/vuln/detail/CVE-2021-3549 [ 4 ] CVE-2021-20197 https://nvd.nist.gov/vuln/detail/CVE-2021-20197 [ 5 ] CVE-2021-20284 https://nvd.nist.gov/vuln/detail/CVE-2021-20284 [ 6 ] CVE-2021-20294 https://nvd.nist.gov/vuln/detail/CVE-2021-20294 [ 7 ] CVE-2021-45078 https://nvd.nist.gov/vuln/detail/CVE-2021-45078
Availability
This GLSA and any updates to it are available for viewing at the Gentoo Security Website:
https://security.gentoo.org/glsa/202208-30
Concerns?
Security is a primary focus of Gentoo Linux and ensuring the confidentiality and security of our users' machines is of utmost importance to us. Any security concerns should be addressed to security@gentoo.org or alternatively, you may file a bug at https://bugs.gentoo.org.
License
Copyright 2022 Gentoo Foundation, Inc; referenced text belongs to its owner(s).
The contents of this document are licensed under the Creative Commons - Attribution / Share Alike license.
https://creativecommons.org/licenses/by-sa/2.5
Show details on source website{ "@context": { "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#", "affected_products": { "@id": "https://www.variotdbs.pl/ref/affected_products" }, "configurations": { "@id": "https://www.variotdbs.pl/ref/configurations" }, "credits": { "@id": "https://www.variotdbs.pl/ref/credits" }, "cvss": { "@id": "https://www.variotdbs.pl/ref/cvss/" }, "description": { "@id": "https://www.variotdbs.pl/ref/description/" }, "exploit_availability": { "@id": "https://www.variotdbs.pl/ref/exploit_availability/" }, "external_ids": { "@id": "https://www.variotdbs.pl/ref/external_ids/" }, "iot": { "@id": "https://www.variotdbs.pl/ref/iot/" }, "iot_taxonomy": { "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/" }, "patch": { "@id": "https://www.variotdbs.pl/ref/patch/" }, "problemtype_data": { "@id": "https://www.variotdbs.pl/ref/problemtype_data/" }, "references": { "@id": "https://www.variotdbs.pl/ref/references/" }, "sources": { "@id": "https://www.variotdbs.pl/ref/sources/" }, "sources_release_date": { "@id": "https://www.variotdbs.pl/ref/sources_release_date/" }, "sources_update_date": { "@id": "https://www.variotdbs.pl/ref/sources_update_date/" }, "threat_type": { "@id": "https://www.variotdbs.pl/ref/threat_type/" }, "title": { "@id": "https://www.variotdbs.pl/ref/title/" }, "type": { "@id": "https://www.variotdbs.pl/ref/type/" } }, "@id": "https://www.variotdbs.pl/vuln/VAR-202103-0479", "affected_products": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/affected_products#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "model": "enterprise linux", "scope": "eq", "trust": 1.0, "vendor": "redhat", "version": "8.0" }, { "model": "binutils", "scope": "lte", "trust": 1.0, "vendor": "gnu", "version": "2.35" }, { "model": "ontap select deploy administration utility", "scope": "eq", 
"trust": 1.0, "vendor": "netapp", "version": null }, { "model": "solidfire \\\u0026 hci management node", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "brocade fabric operating system", "scope": "eq", "trust": 1.0, "vendor": "broadcom", "version": null }, { "model": "cloud backup", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "binutils", "scope": null, "trust": 0.8, "vendor": "gnu", "version": null }, { "model": "red hat enterprise linux", "scope": null, "trust": 0.8, "vendor": "\u30ec\u30c3\u30c9\u30cf\u30c3\u30c8", "version": null } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2021-004898" }, { "db": "NVD", "id": "CVE-2021-20197" } ] }, "configurations": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/configurations#", "children": { "@container": "@list" }, "cpe_match": { "@container": "@list" }, "data": { "@container": "@list" }, "nodes": { "@container": "@list" } }, "data": [ { "CVE_data_version": "4.0", "nodes": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:gnu:binutils:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "2.35", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux:8.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:netapp:cloud_backup:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:ontap_select_deploy_administration_utility:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:solidfire_\\\u0026_hci_management_node:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:broadcom:brocade_fabric_operating_system_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" } ] } ], "sources": [ { "db": "NVD", "id": "CVE-2021-20197" } ] }, "credits": { "@context": { "@vocab": 
"https://www.variotdbs.pl/ref/credits#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Red Hat", "sources": [ { "db": "PACKETSTORM", "id": "165296" }, { "db": "PACKETSTORM", "id": "164821" }, { "db": "PACKETSTORM", "id": "164967" } ], "trust": 0.3 }, "cve": "CVE-2021-20197", "cvss": { "@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2" }, "cvssV3": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, "severity": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": "https://www.variotdbs.pl/ref/cvss/severity" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "cvssV2": [ { "acInsufInfo": false, "accessComplexity": "MEDIUM", "accessVector": "LOCAL", "authentication": "NONE", "author": "NVD", "availabilityImpact": "NONE", "baseScore": 3.3, "confidentialityImpact": "PARTIAL", "exploitabilityScore": 3.4, "impactScore": 4.9, "integrityImpact": "PARTIAL", "obtainAllPrivilege": false, "obtainOtherPrivilege": false, "obtainUserPrivilege": false, "severity": "LOW", "trust": 1.0, "userInteractionRequired": false, "vectorString": "AV:L/AC:M/Au:N/C:P/I:P/A:N", "version": "2.0" }, { "acInsufInfo": null, "accessComplexity": "Medium", "accessVector": "Local", "authentication": "None", "author": "NVD", "availabilityImpact": "None", "baseScore": 3.3, "confidentialityImpact": "Partial", "exploitabilityScore": null, "id": "CVE-2021-20197", "impactScore": null, "integrityImpact": "Partial", "obtainAllPrivilege": null, "obtainOtherPrivilege": null, "obtainUserPrivilege": null, "severity": "Low", "trust": 0.8, "userInteractionRequired": null, 
"vectorString": "AV:L/AC:M/Au:N/C:P/I:P/A:N", "version": "2.0" }, { "accessComplexity": "MEDIUM", "accessVector": "LOCAL", "authentication": "NONE", "author": "VULHUB", "availabilityImpact": "NONE", "baseScore": 3.3, "confidentialityImpact": "PARTIAL", "exploitabilityScore": 3.4, "id": "VHN-377873", "impactScore": 4.9, "integrityImpact": "PARTIAL", "severity": "LOW", "trust": 0.1, "vectorString": "AV:L/AC:M/AU:N/C:P/I:P/A:N", "version": "2.0" } ], "cvssV3": [ { "attackComplexity": "HIGH", "attackVector": "LOCAL", "author": "NVD", "availabilityImpact": "NONE", "baseScore": 6.3, "baseSeverity": "MEDIUM", "confidentialityImpact": "HIGH", "exploitabilityScore": 1.0, "impactScore": 5.2, "integrityImpact": "HIGH", "privilegesRequired": "LOW", "scope": "UNCHANGED", "trust": 1.0, "userInteraction": "NONE", "vectorString": "CVSS:3.1/AV:L/AC:H/PR:L/UI:N/S:U/C:H/I:H/A:N", "version": "3.1" }, { "attackComplexity": "High", "attackVector": "Local", "author": "NVD", "availabilityImpact": "None", "baseScore": 6.3, "baseSeverity": "Medium", "confidentialityImpact": "High", "exploitabilityScore": null, "id": "CVE-2021-20197", "impactScore": null, "integrityImpact": "High", "privilegesRequired": "Low", "scope": "Unchanged", "trust": 0.8, "userInteraction": "None", "vectorString": "CVSS:3.0/AV:L/AC:H/PR:L/UI:N/S:U/C:H/I:H/A:N", "version": "3.0" } ], "severity": [ { "author": "NVD", "id": "CVE-2021-20197", "trust": 1.8, "value": "MEDIUM" }, { "author": "CNNVD", "id": "CNNVD-202102-649", "trust": 0.6, "value": "MEDIUM" }, { "author": "VULHUB", "id": "VHN-377873", "trust": 0.1, "value": "LOW" } ] } ], "sources": [ { "db": "VULHUB", "id": "VHN-377873" }, { "db": "JVNDB", "id": "JVNDB-2021-004898" }, { "db": "NVD", "id": "CVE-2021-20197" }, { "db": "CNNVD", "id": "CNNVD-202102-649" } ] }, "description": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": 
"There is an open race window when writing output in the following utilities in GNU binutils version 2.35 and earlier:ar, objcopy, strip, ranlib. When these utilities are run as a privileged user (presumably as part of a script updating binaries across different users), an unprivileged user can trick these utilities into getting ownership of arbitrary files through a symlink. GNU binutils There is a link interpretation vulnerability in.Information may be obtained and information may be tampered with. GNU Binutils (GNU Binary Utilities or binutils) is a set of programming language tool programs developed by the GNU community. The program is primarily designed to handle object files in various formats and provides linkers, assemblers, and other tools for object files and archives. An access control error vulnerability exists in GNU binutils that allows smart_rename() to bypass access restrictions, allowing an attacker to read or change data. Bugs fixed (https://bugzilla.redhat.com/):\n\n2030932 - CVE-2021-44228 log4j-core: Remote code execution in Log4j 2.x when logs contain an attacker-controlled string value\n\n5. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n==================================================================== \nRed Hat Security Advisory\n\nSynopsis: Moderate: binutils security update\nAdvisory ID: RHSA-2021:4364-01\nProduct: Red Hat Enterprise Linux\nAdvisory URL: https://access.redhat.com/errata/RHSA-2021:4364\nIssue date: 2021-11-09\nCVE Names: CVE-2020-35448 CVE-2021-3487 CVE-2021-20197\n CVE-2021-20284\n====================================================================\n1. Summary:\n\nAn update for binutils is now available for Red Hat Enterprise Linux 8. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. 
Relevant releases/architectures:\n\nRed Hat Enterprise Linux AppStream (v. 8) - aarch64, ppc64le, s390x, x86_64\nRed Hat Enterprise Linux BaseOS (v. 8) - aarch64, ppc64le, s390x, x86_64\n\n3. Description:\n\nThe binutils packages provide a collection of binary utilities for the\nmanipulation of object code in various object file formats. It includes the\nar, as, gprof, ld, nm, objcopy, objdump, ranlib, readelf, size, strings,\nstrip, and addr2line utilities. \n\nSecurity Fix(es):\n\n* binutils: Excessive debug section size can cause excessive memory\nconsumption in bfd\u0027s dwarf2.c read_section() (CVE-2021-3487)\n\n* binutils: Race window allows users to own arbitrary files\n(CVE-2021-20197)\n\n* binutils: Heap-based buffer overflow in bfd_getl_signed_32() in libbfd.c\nbecause sh_entsize is not validated in\n_bfd_elf_slurp_secondary_reloc_section() in elf.c (CVE-2020-35448)\n\n* binutils: Heap-based buffer overflow in\n_bfd_elf_slurp_secondary_reloc_section in elf.c (CVE-2021-20284)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\nAdditional Changes:\n\nFor detailed information on changes in this release, see the Red Hat\nEnterprise Linux 8.5 Release Notes linked from the References section. \n\n4. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n5. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1913743 - CVE-2021-20197 binutils: Race window allows users to own arbitrary files\n1924068 - binutils debuginfo misses code for bfd functions\n1930988 - Backport breaks building with LTO\n1935785 - Linker garbage collection removes weak alias references (possibly \"regression\" of bz1804325)\n1937784 - CVE-2021-20284 binutils: Heap-based buffer overflow in _bfd_elf_slurp_secondary_reloc_section in elf.c\n1946518 - binutils-2.30-98 are causing go binaries to crash due to segmentation fault on aarch64\n1946977 - pthread_join segfaults in stack unwinding\n1947111 - CVE-2021-3487 binutils: Excessive debug section size can cause excessive memory consumption in bfd\u0027s dwarf2.c read_section()\n1950478 - CVE-2020-35448 binutils: Heap-based buffer overflow in bfd_getl_signed_32() in libbfd.c because sh_entsize is not validated in _bfd_elf_slurp_secondary_reloc_section() in elf.c\n1969775 - /usr/bin/ld: Dwarf Error: Offset (2487097600) greater than or equal to .debug_str size (571933). \n\n6. Package List:\n\nRed Hat Enterprise Linux AppStream (v. 8):\n\naarch64:\nbinutils-debuginfo-2.30-108.el8.aarch64.rpm\nbinutils-debugsource-2.30-108.el8.aarch64.rpm\nbinutils-devel-2.30-108.el8.aarch64.rpm\n\nppc64le:\nbinutils-debuginfo-2.30-108.el8.ppc64le.rpm\nbinutils-debugsource-2.30-108.el8.ppc64le.rpm\nbinutils-devel-2.30-108.el8.ppc64le.rpm\n\ns390x:\nbinutils-debuginfo-2.30-108.el8.s390x.rpm\nbinutils-debugsource-2.30-108.el8.s390x.rpm\nbinutils-devel-2.30-108.el8.s390x.rpm\n\nx86_64:\nbinutils-debuginfo-2.30-108.el8.i686.rpm\nbinutils-debuginfo-2.30-108.el8.x86_64.rpm\nbinutils-debugsource-2.30-108.el8.i686.rpm\nbinutils-debugsource-2.30-108.el8.x86_64.rpm\nbinutils-devel-2.30-108.el8.i686.rpm\nbinutils-devel-2.30-108.el8.x86_64.rpm\n\nRed Hat Enterprise Linux BaseOS (v. 
8):\n\nSource:\nbinutils-2.30-108.el8.src.rpm\n\naarch64:\nbinutils-2.30-108.el8.aarch64.rpm\nbinutils-debuginfo-2.30-108.el8.aarch64.rpm\nbinutils-debugsource-2.30-108.el8.aarch64.rpm\n\nppc64le:\nbinutils-2.30-108.el8.ppc64le.rpm\nbinutils-debuginfo-2.30-108.el8.ppc64le.rpm\nbinutils-debugsource-2.30-108.el8.ppc64le.rpm\n\ns390x:\nbinutils-2.30-108.el8.s390x.rpm\nbinutils-debuginfo-2.30-108.el8.s390x.rpm\nbinutils-debugsource-2.30-108.el8.s390x.rpm\n\nx86_64:\nbinutils-2.30-108.el8.x86_64.rpm\nbinutils-debuginfo-2.30-108.el8.x86_64.rpm\nbinutils-debugsource-2.30-108.el8.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. References:\n\nhttps://access.redhat.com/security/cve/CVE-2020-35448\nhttps://access.redhat.com/security/cve/CVE-2021-3487\nhttps://access.redhat.com/security/cve/CVE-2021-20197\nhttps://access.redhat.com/security/cve/CVE-2021-20284\nhttps://access.redhat.com/security/updates/classification/#moderate\nhttps://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.5_release_notes/\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2021 Red Hat, Inc. 
\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYYrcWdzjgjWX9erEAQgOuA//ddTY+J3xDL8Z+2Gi+qcbItkoW0B8nKrt\nhqWmx6c/KlhAtLnAbIh18N+1uPMAXGNZcKHtCJfFSIAP3B71jDBqA+CRqlhiapmg\nze4qYNpUwBg0e2c/6w0V5GYhIXpdsyiKXTpjmnaxnzW61tiCCWFBZoWpzJjSId1X\nyR7vHjDaXT1CZl0fHS/5Y9NfK/7jjgkJv7U7wcUxEsy6bMQIzM0nMLZauVmIrsC0\nvu1bhQifEJH1mnoykfnlRVSEe+qGMrEtnOCnos8GTGChmVt4bgogpb5oE4JFm+bs\nufjpRwSC1X5XRv9aqTX/ixIFLCeFpZkYhFLUlZqYHNKRcRlcqz5MLFA6KYdTj9zt\n2ygqd5o26ml7gVHyA+BGE/pzd5m9YTzNvrWbC/ZV6loHM1nHUIBW/Y+hneSWTCkH\nx1LCmTnYxyPz0ZjySbCy03SJPrRewe/xPlxJlCmqLfVh+hEvCHsSw9hnYC3+pvMB\nxIl5HNf34dc/lJsPXo65owsDNcTlKF7gfVG3eKjcNnu1Uh9LzCYG8PKMtougZgV3\nmAviF8MhgWVLXJTo6BXtF605ivViFoyis0bFJCV6uihV+nfAesWVN3rnIeDMh2sV\nEA9zQyxzy2nQsDMJ4eLV5ckrl7YzGsJt+B9jwLXbGkpjQm+bCrds41k9gLjQEiHE\nVm3qGf43D60+Ds\n-----END PGP SIGNATURE-----\n\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. Solution:\n\nFor OpenShift Container Platform 4.9 see the following documentation, which\nwill be updated shortly for this release, for important instructions on how\nto upgrade your cluster and fully apply this errata update:\n\nhttps://docs.openshift.com/container-platform/4.9/release_notes/ocp-4-9-release-notes.html\n\nFor Red Hat OpenShift Logging 5.3, see the following instructions to apply\nthis update:\n\nhttps://docs.openshift.com/container-platform/4.7/logging/cluster-logging-upgrading.html\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n1963232 - CVE-2021-33194 golang: x/net/html: infinite loop in ParseFragment\n\n5. 
JIRA issues fixed (https://issues.jboss.org/):\n\nLOG-1168 - Disable hostname verification in syslog TLS settings\nLOG-1235 - Using HTTPS without a secret does not translate into the correct \u0027scheme\u0027 value in Fluentd\nLOG-1375 - ssl_ca_cert should be optional\nLOG-1378 - CLO should support sasl_plaintext(Password over http)\nLOG-1392 - In fluentd config, flush_interval can\u0027t be set with flush_mode=immediate\nLOG-1494 - Syslog output is serializing json incorrectly\nLOG-1555 - Fluentd logs emit transaction failed: error_class=NoMethodError while forwarding to external syslog server\nLOG-1575 - Rejected by Elasticsearch and unexpected json-parsing\nLOG-1735 - Regression introducing flush_at_shutdown \nLOG-1774 - The collector logs should be excluded in fluent.conf\nLOG-1776 - fluentd total_limit_size sets value beyond available space\nLOG-1822 - OpenShift Alerting Rules Style-Guide Compliance\nLOG-1859 - CLO Should not error and exit early on missing ca-bundle when cluster wide proxy is not enabled\nLOG-1862 - Unsupported kafka parameters when enabled Kafka SASL\nLOG-1903 - Fix the Display of ClusterLogging type in OLM\nLOG-1911 - CLF API changes to Opt-in to multiline error detection\nLOG-1918 - Alert `FluentdNodeDown` always firing \nLOG-1939 - Opt-in multiline detection breaks cloudwatch forwarding\n\n6. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\nGentoo Linux Security Advisory GLSA 202208-30\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n https://security.gentoo.org/\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\n Severity: Normal\n Title: GNU Binutils: Multiple Vulnerabilities\n Date: August 14, 2022\n Bugs: #778545, #792342, #829304\n ID: 202208-30\n\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\nSynopsis\n=======\nMultiple vulnerabilities have been discovered in Binutils, the worst of\nwhich could result in denial of service. 
\n\nBackground\n=========\nThe GNU Binutils are a collection of tools to create, modify and analyse\nbinary files. Many of the files use BFD, the Binary File Descriptor\nlibrary, to do low-level manipulation. \n\nAffected packages\n================\n -------------------------------------------------------------------\n Package / Vulnerable / Unaffected\n -------------------------------------------------------------------\n 1 sys-devel/binutils \u003c 2.38 \u003e= 2.38\n 2 sys-libs/binutils-libs \u003c 2.38 \u003e= 2.38\n\nDescription\n==========\nMultiple vulnerabilities have been discovered in GNU Binutils. Please\nreview the CVE identifiers referenced below for details. \n\nImpact\n=====\nPlease review the referenced CVE identifiers for details. \n\nWorkaround\n=========\nThere is no known workaround at this time. \n\nResolution\n=========\nAll Binutils users should upgrade to the latest version:\n\n # emerge --sync\n # emerge --ask --oneshot --verbose \"\u003e=sys-devel/binutils-2.38\"\n\nAll Binutils library users should upgrade to the latest version:\n\n # emerge --sync\n # emerge --ask --oneshot --verbose \"\u003e=sys-libs/binutils-libs-2.38\"\n\nReferences\n=========\n[ 1 ] CVE-2021-3487\n https://nvd.nist.gov/vuln/detail/CVE-2021-3487\n[ 2 ] CVE-2021-3530\n https://nvd.nist.gov/vuln/detail/CVE-2021-3530\n[ 3 ] CVE-2021-3549\n https://nvd.nist.gov/vuln/detail/CVE-2021-3549\n[ 4 ] CVE-2021-20197\n https://nvd.nist.gov/vuln/detail/CVE-2021-20197\n[ 5 ] CVE-2021-20284\n https://nvd.nist.gov/vuln/detail/CVE-2021-20284\n[ 6 ] CVE-2021-20294\n https://nvd.nist.gov/vuln/detail/CVE-2021-20294\n[ 7 ] CVE-2021-45078\n https://nvd.nist.gov/vuln/detail/CVE-2021-45078\n\nAvailability\n===========\nThis GLSA and any updates to it are available for viewing at\nthe Gentoo Security Website:\n\n https://security.gentoo.org/glsa/202208-30\n\nConcerns?\n========\nSecurity is a primary focus of Gentoo Linux and ensuring the\nconfidentiality and security of our users\u0027 
machines is of utmost\nimportance to us. Any security concerns should be addressed to\nsecurity@gentoo.org or alternatively, you may file a bug at\nhttps://bugs.gentoo.org. \n\nLicense\n======\nCopyright 2022 Gentoo Foundation, Inc; referenced text\nbelongs to its owner(s). \n\nThe contents of this document are licensed under the\nCreative Commons - Attribution / Share Alike license. \n\nhttps://creativecommons.org/licenses/by-sa/2.5\n", "sources": [ { "db": "NVD", "id": "CVE-2021-20197" }, { "db": "JVNDB", "id": "JVNDB-2021-004898" }, { "db": "VULHUB", "id": "VHN-377873" }, { "db": "VULMON", "id": "CVE-2021-20197" }, { "db": "PACKETSTORM", "id": "165296" }, { "db": "PACKETSTORM", "id": "164821" }, { "db": "PACKETSTORM", "id": "164967" }, { "db": "PACKETSTORM", "id": "168081" } ], "trust": 2.16 }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "NVD", "id": "CVE-2021-20197", "trust": 3.0 }, { "db": "PACKETSTORM", "id": "164821", "trust": 0.8 }, { "db": "PACKETSTORM", "id": "168081", "trust": 0.8 }, { "db": "JVNDB", "id": "JVNDB-2021-004898", "trust": 0.8 }, { "db": "CNNVD", "id": "CNNVD-202102-649", "trust": 0.7 }, { "db": "AUSCERT", "id": "ESB-2021.3905", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.3783", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.3660", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.4254", "trust": 0.6 }, { "db": "VULHUB", "id": "VHN-377873", "trust": 0.1 }, { "db": "VULMON", "id": "CVE-2021-20197", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "165296", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "164967", "trust": 0.1 } ], "sources": [ { "db": "VULHUB", "id": "VHN-377873" }, { "db": "VULMON", "id": "CVE-2021-20197" }, { "db": "JVNDB", "id": "JVNDB-2021-004898" }, { "db": "PACKETSTORM", "id": "165296" }, { "db": "PACKETSTORM", 
"id": "164821" }, { "db": "PACKETSTORM", "id": "164967" }, { "db": "PACKETSTORM", "id": "168081" }, { "db": "NVD", "id": "CVE-2021-20197" }, { "db": "CNNVD", "id": "CNNVD-202102-649" } ] }, "id": "VAR-202103-0479", "iot": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": true, "sources": [ { "db": "VULHUB", "id": "VHN-377873" } ], "trust": 0.01 }, "last_update_date": "2023-12-18T11:51:36.623000Z", "patch": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/patch#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "title": "Bug\u00a026945 Red hat Red\u00a0Hat\u00a0Bugzilla", "trust": 0.8, "url": "https://sourceware.org/bugzilla/show_bug.cgi?id=26945" }, { "title": "Arch Linux Issues: ", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_issues\u0026qid=cve-2021-20197 log" } ], "sources": [ { "db": "VULMON", "id": "CVE-2021-20197" }, { "db": "JVNDB", "id": "JVNDB-2021-004898" } ] }, "problemtype_data": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "problemtype": "CWE-59", "trust": 1.1 }, { "problemtype": "Link interpretation problem (CWE-59) [ Other ]", "trust": 0.8 }, { "problemtype": "CWE-362", "trust": 0.1 } ], "sources": [ { "db": "VULHUB", "id": "VHN-377873" }, { "db": "JVNDB", "id": "JVNDB-2021-004898" }, { "db": "NVD", "id": "CVE-2021-20197" } ] }, "references": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/references#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 1.8, "url": "https://security.gentoo.org/glsa/202208-30" }, { 
"trust": 1.7, "url": "https://bugzilla.redhat.com/show_bug.cgi?id=1913743" }, { "trust": 1.7, "url": "https://security.netapp.com/advisory/ntap-20210528-0009/" }, { "trust": 1.7, "url": "https://sourceware.org/bugzilla/show_bug.cgi?id=26945" }, { "trust": 1.6, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20197" }, { "trust": 0.9, "url": "https://access.redhat.com/security/cve/cve-2021-20197" }, { "trust": 0.7, "url": "https://access.redhat.com/errata/rhsa-2021:4364" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/164821/red-hat-security-advisory-2021-4364-03.html" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.3783" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.3660" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.4254" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/168081/gentoo-linux-security-advisory-202208-30.html" }, { "trust": 0.6, "url": "https://vigilance.fr/vulnerability/gnu-binutils-read-write-access-via-smart-rename-34500" }, { "trust": 0.6, "url": "https://www.ibm.com/blogs/psirt/security-bulletin-multiple-vulnerabilities-in-gnu-binutils-affect-ibm-netezza-platform-software/" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.3905" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2020-35448" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-20284" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3487" }, { "trust": 0.3, "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce" }, { "trust": 0.3, "url": "https://bugzilla.redhat.com/):" }, { "trust": 0.3, "url": "https://access.redhat.com/security/team/contact/" }, { "trust": 0.3, "url": "https://access.redhat.com/security/updates/classification/#moderate" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-24504" }, { "trust": 0.2, "url": 
"https://access.redhat.com/security/cve/cve-2020-27777" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-20239" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-36158" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-16135" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3200" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3635" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2018-25013" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25012" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-35522" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-5827" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-36386" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-35524" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-20673" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-0427" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25013" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-24586" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3348" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25009" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-27645" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-33574" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-26140" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-13435" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-26146" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-31440" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-5827" }, { "trust": 0.2, 
"url": "https://access.redhat.com/security/cve/cve-2021-3732" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-24370" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-0129" }, { "trust": 0.2, "url": "https://docs.openshift.com/container-platform/4.7/logging/cluster-logging-upgrading.html" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-14145" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-13751" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-10001" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-24502" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2018-25014" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3564" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-0427" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-23133" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-19603" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14145" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2018-25012" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-26144" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-35521" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3679" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-35942" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-17594" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-36312" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-24370" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3572" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-12762" }, { "trust": 0.2, "url": 
"https://access.redhat.com/security/cve/cve-2021-36086" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-29368" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3778" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13750" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13751" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-22898" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-24588" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-29646" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-29155" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-12762" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-16135" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-36084" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-17541" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3489" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3800" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17594" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-36087" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-36331" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-29660" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-31535" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-26139" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-28971" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-23841" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-14615" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-26143" }, 
{ "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3445" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3600" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13435" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19603" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-22925" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-26145" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2018-20673" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-23840" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-33200" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-36330" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-29650" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-33033" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-18218" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-20194" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-26147" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-20232" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-31916" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-20266" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-20838" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-22876" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-20231" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-36332" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-14155" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25010" }, { "trust": 0.2, "url": 
"https://nvd.nist.gov/vuln/detail/cve-2019-20838" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-17541" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-10001" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-24503" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25014" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-36085" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-14615" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-33560" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-24502" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-17595" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3481" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-42574" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14155" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2018-25009" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2018-25010" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-35523" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-31829" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-28153" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3573" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-13750" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-26141" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3426" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-28950" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-18218" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3580" }, { "trust": 0.2, 
"url": "https://access.redhat.com/security/cve/cve-2021-3796" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17595" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-24587" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-24503" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3659" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3487" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20284" }, { "trust": 0.1, "url": "https://security.archlinux.org/cve-2021-20197" }, { "trust": 0.1, "url": "https://access.redhat.com/security/vulnerabilities/rhsb-2021-009" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-43527" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-44228" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3712" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:5137" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-release-notes.html" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.5_release_notes/" }, { "trust": 0.1, "url": "https://access.redhat.com/articles/11258" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35448" }, { "trust": 0.1, "url": "https://access.redhat.com/security/team/key/" }, { "trust": 0.1, "url": "https://issues.jboss.org/):" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.9/release_notes/ocp-4-9-release-notes.html" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33194" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:4627" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-45078" }, { "trust": 0.1, "url": "https://creativecommons.org/licenses/by-sa/2.5" }, { "trust": 0.1, "url": 
"https://security.gentoo.org/" }, { "trust": 0.1, "url": "https://bugs.gentoo.org." }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3530" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3549" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20294" } ], "sources": [ { "db": "VULHUB", "id": "VHN-377873" }, { "db": "VULMON", "id": "CVE-2021-20197" }, { "db": "JVNDB", "id": "JVNDB-2021-004898" }, { "db": "PACKETSTORM", "id": "165296" }, { "db": "PACKETSTORM", "id": "164821" }, { "db": "PACKETSTORM", "id": "164967" }, { "db": "PACKETSTORM", "id": "168081" }, { "db": "NVD", "id": "CVE-2021-20197" }, { "db": "CNNVD", "id": "CNNVD-202102-649" } ] }, "sources": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "VULHUB", "id": "VHN-377873" }, { "db": "VULMON", "id": "CVE-2021-20197" }, { "db": "JVNDB", "id": "JVNDB-2021-004898" }, { "db": "PACKETSTORM", "id": "165296" }, { "db": "PACKETSTORM", "id": "164821" }, { "db": "PACKETSTORM", "id": "164967" }, { "db": "PACKETSTORM", "id": "168081" }, { "db": "NVD", "id": "CVE-2021-20197" }, { "db": "CNNVD", "id": "CNNVD-202102-649" } ] }, "sources_release_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2021-03-26T00:00:00", "db": "VULHUB", "id": "VHN-377873" }, { "date": "2021-03-26T00:00:00", "db": "VULMON", "id": "CVE-2021-20197" }, { "date": "2021-12-02T00:00:00", "db": "JVNDB", "id": "JVNDB-2021-004898" }, { "date": "2021-12-15T15:27:05", "db": "PACKETSTORM", "id": "165296" }, { "date": "2021-11-10T17:01:56", "db": "PACKETSTORM", "id": "164821" }, { "date": "2021-11-15T17:25:56", "db": "PACKETSTORM", "id": "164967" }, { "date": "2022-08-15T16:03:57", "db": "PACKETSTORM", "id": "168081" }, { "date": "2021-03-26T17:15:12.920000", "db": "NVD", "id": "CVE-2021-20197" }, { "date": "2021-02-08T00:00:00", 
"db": "CNNVD", "id": "CNNVD-202102-649" } ] }, "sources_update_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2023-02-12T00:00:00", "db": "VULHUB", "id": "VHN-377873" }, { "date": "2021-04-01T00:00:00", "db": "VULMON", "id": "CVE-2021-20197" }, { "date": "2021-12-02T01:25:00", "db": "JVNDB", "id": "JVNDB-2021-004898" }, { "date": "2023-02-12T22:15:16.877000", "db": "NVD", "id": "CVE-2021-20197" }, { "date": "2023-03-02T00:00:00", "db": "CNNVD", "id": "CNNVD-202102-649" } ] }, "threat_type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/threat_type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "local", "sources": [ { "db": "CNNVD", "id": "CNNVD-202102-649" } ], "trust": 0.6 }, "title": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/title#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "GNU\u00a0binutils\u00a0 Link interpretation vulnerability in", "sources": [ { "db": "JVNDB", "id": "JVNDB-2021-004898" } ], "trust": 0.8 }, "type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "post link", "sources": [ { "db": "CNNVD", "id": "CNNVD-202102-649" } ], "trust": 0.6 } }
var-202105-1461
Vulnerability from variot
A flaw was found in libwebp in versions before 1.0.1. A heap-based buffer overflow in the function WebPDecodeRGBInto is possible due to an invalid check for the buffer size. The highest threat from this vulnerability is to data confidentiality and integrity as well as system availability. libwebp is vulnerable to an out-of-bounds write; information may be obtained, information may be tampered with, and service may be disrupted (DoS). Versions of libwebp prior to 1.0.1 have security vulnerabilities. -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
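As an illustration of the vulnerability class described above (not libwebp's actual source), a decoder writing RGB pixels into a caller-supplied buffer must verify that the buffer can hold `stride * height` bytes before decoding; the flawed version performed this check incorrectly. The function and constant names below are hypothetical:

```python
BYTES_PER_PIXEL = 3  # RGB output

def buffer_is_large_enough(width: int, height: int, stride: int,
                           buffer_size: int) -> bool:
    """Sketch of the size check whose absence enables an out-of-bounds write.

    A correct check validates the stride against the row width and the total
    buffer size against stride * height; skipping either allows the decoder
    to write past the end of the caller's buffer.
    """
    if width <= 0 or height <= 0:
        return False
    if stride < width * BYTES_PER_PIXEL:
        return False  # each row needs at least width * 3 bytes
    required = stride * height
    return buffer_size >= required
```

For example, a 100x100 RGB image with a stride of 300 requires a 30000-byte buffer; a 29999-byte buffer must be rejected rather than written past.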
APPLE-SA-2021-07-21-1 iOS 14.7 and iPadOS 14.7
iOS 14.7 and iPadOS 14.7 addresses the following issues. Information about the security content is also available at https://support.apple.com/HT212601.
iOS 14.7 released July 19, 2021; iPadOS 14.7 released July 21, 2021
ActionKit Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A shortcut may be able to bypass Internet permission requirements Description: An input validation issue was addressed with improved input validation. CVE-2021-30763: Zachary Keffaber (@QuickUpdate5)
Audio Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A local attacker may be able to cause unexpected application termination or arbitrary code execution Description: This issue was addressed with improved checks. CVE-2021-30781: tr3e
AVEVideoEncoder Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: An application may be able to execute arbitrary code with kernel privileges Description: A memory corruption issue was addressed with improved state management. CVE-2021-30748: George Nosenko
CoreAudio Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing a maliciously crafted audio file may lead to arbitrary code execution Description: A memory corruption issue was addressed with improved state management. CVE-2021-30775: JunDong Xie of Ant Security Light-Year Lab
CoreAudio Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Playing a malicious audio file may lead to an unexpected application termination Description: A logic issue was addressed with improved validation. CVE-2021-30776: JunDong Xie of Ant Security Light-Year Lab
CoreGraphics Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Opening a maliciously crafted PDF file may lead to an unexpected application termination or arbitrary code execution Description: A race condition was addressed with improved state handling. CVE-2021-30786: ryuzaki
CoreText Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing a maliciously crafted font file may lead to arbitrary code execution Description: An out-of-bounds read was addressed with improved input validation. CVE-2021-30789: Mickey Jin (@patch1t) of Trend Micro, Sunglin of Knownsec 404 team
Crash Reporter Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A malicious application may be able to gain root privileges Description: A logic issue was addressed with improved validation. CVE-2021-30774: Yizhuo Wang of Group of Software Security In Progress (G.O.S.S.I.P) at Shanghai Jiao Tong University
CVMS Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A malicious application may be able to gain root privileges Description: An out-of-bounds write issue was addressed with improved bounds checking. CVE-2021-30780: Tim Michaud(@TimGMichaud) of Zoom Video Communications
dyld Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A sandboxed process may be able to circumvent sandbox restrictions Description: A logic issue was addressed with improved validation. CVE-2021-30768: Linus Henze (pinauten.de)
Find My Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A malicious application may be able to access Find My data Description: A permissions issue was addressed with improved validation. CVE-2021-30804: Csaba Fitzl (@theevilbit) of Offensive Security
FontParser Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing a maliciously crafted font file may lead to arbitrary code execution Description: An integer overflow was addressed through improved input validation. CVE-2021-30760: Sunglin of Knownsec 404 team
FontParser Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing a maliciously crafted tiff file may lead to a denial-of-service or potentially disclose memory contents Description: This issue was addressed with improved checks. CVE-2021-30788: tr3e working with Trend Micro Zero Day Initiative
FontParser Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing a maliciously crafted font file may lead to arbitrary code execution Description: A stack overflow was addressed with improved input validation. CVE-2021-30759: hjy79425575 working with Trend Micro Zero Day Initiative
Identity Service Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A malicious application may be able to bypass code signing checks Description: An issue in code signature validation was addressed with improved checks. CVE-2021-30773: Linus Henze (pinauten.de)
Image Processing Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: A use after free issue was addressed with improved memory management. CVE-2021-30802: Matthew Denton of Google Chrome Security
ImageIO Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing a maliciously crafted image may lead to arbitrary code execution Description: This issue was addressed with improved checks. CVE-2021-30779: Jzhu, Ye Zhang(@co0py_Cat) of Baidu Security
ImageIO Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing a maliciously crafted image may lead to arbitrary code execution Description: A buffer overflow was addressed with improved bounds checking. CVE-2021-30785: CFF of Topsec Alpha Team, Mickey Jin (@patch1t) of Trend Micro
Kernel Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A malicious attacker with arbitrary read and write capability may be able to bypass Pointer Authentication Description: A logic issue was addressed with improved state management. CVE-2021-30769: Linus Henze (pinauten.de)
Kernel Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: An attacker that has already achieved kernel code execution may be able to bypass kernel memory mitigations Description: A logic issue was addressed with improved validation. CVE-2021-30770: Linus Henze (pinauten.de)
libxml2 Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A remote attacker may be able to cause arbitrary code execution Description: This issue was addressed with improved checks. CVE-2021-3518
Measure Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Multiple issues in libwebp Description: Multiple issues were addressed by updating to version 1.2.0. CVE-2018-25010 CVE-2018-25011 CVE-2018-25014 CVE-2020-36328 CVE-2020-36329 CVE-2020-36330 CVE-2020-36331
Model I/O Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing a maliciously crafted image may lead to a denial of service Description: A logic issue was addressed with improved validation. CVE-2021-30796: Mickey Jin (@patch1t) of Trend Micro
Model I/O Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing a maliciously crafted image may lead to arbitrary code execution Description: An out-of-bounds write was addressed with improved input validation. CVE-2021-30792: Anonymous working with Trend Micro Zero Day Initiative
Model I/O Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing a maliciously crafted file may disclose user information Description: An out-of-bounds read was addressed with improved bounds checking. CVE-2021-30791: Anonymous working with Trend Micro Zero Day Initiative
TCC Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A malicious application may be able to bypass certain Privacy preferences Description: A logic issue was addressed with improved state management. CVE-2021-30798: Mickey Jin (@patch1t) of Trend Micro
WebKit Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: A type confusion issue was addressed with improved state handling. CVE-2021-30758: Christoph Guttandin of Media Codings
WebKit Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: A use after free issue was addressed with improved memory management. CVE-2021-30795: Sergei Glazunov of Google Project Zero
WebKit Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing maliciously crafted web content may lead to code execution Description: This issue was addressed with improved checks. CVE-2021-30797: Ivan Fratric of Google Project Zero
WebKit Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: Multiple memory corruption issues were addressed with improved memory handling. CVE-2021-30799: Sergei Glazunov of Google Project Zero
Wi-Fi Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Joining a malicious Wi-Fi network may result in a denial of service or arbitrary code execution Description: This issue was addressed with improved checks. CVE-2021-30800: vm_call, Nozhdar Abdulkhaleq Shukri
Additional recognition
Assets We would like to acknowledge Cees Elzinga for their assistance.
CoreText We would like to acknowledge Mickey Jin (@patch1t) of Trend Micro for their assistance.
Safari We would like to acknowledge an anonymous researcher for their assistance.
Sandbox We would like to acknowledge Csaba Fitzl (@theevilbit) of Offensive Security for their assistance.
Installation note:
This update is available through iTunes and Software Update on your iOS device, and will not appear in your computer's Software Update application, or in the Apple Downloads site. Make sure you have an Internet connection and have installed the latest version of iTunes from https://www.apple.com/itunes/
iTunes and Software Update on the device will automatically check Apple's update server on its weekly schedule. When an update is detected, it is downloaded and the option to be installed is presented to the user when the iOS device is docked. We recommend applying the update immediately if possible. Selecting Don't Install will present the option the next time you connect your iOS device. The automatic update process may take up to a week depending on the day that iTunes or the device checks for updates. You may manually obtain the update via the Check for Updates button within iTunes, or the Software Update on your device.
To check that the iPhone, iPod touch, or iPad has been updated: * Navigate to Settings * Select General * Select About * The version after applying this update will be "14.7"
Information will also be posted to the Apple Security Updates web site: https://support.apple.com/kb/HT201222
This message is signed with Apple's Product Security PGP key, and details are available at: https://www.apple.com/support/security/pgp/
-----BEGIN PGP SIGNATURE-----
iQIzBAEBCAAdFiEEbURczHs1TP07VIfuZcsbuWJ6jjAFAmD4r8YACgkQZcsbuWJ6 jjB5LBAAkEy25fNpo8rg42bsyJwWsSQQxPN79JFxQ6L8tqdsM+MZk86dUKtsRQ47 mxarMf4uBwiIOtrGSCGHLIxXAzLqPY47NDhO+ls0dVxGMETkoR/287AeLnw2ITh3 DM0H/pco4hRhPh8neYTMjNPMAgkepx+r7IqbaHWapn42nRC4/2VkEtVGltVDLs3L K0UQP0cjy2w9KvRF33H3uKNCaCTJrVkDBLKWC7rPPpomwp3bfmbQHjs0ixV5Y8l5 3MfNmCuhIt34zAjVELvbE/PUXgkmsECbXHNZOct7ZLAbceneVKtSmynDtoEN0ajM JiJ6j+FCtdfB3xHk3cHqB6sQZm7fDxdK3z91MZvSZwwmdhJeHD/TxcItRlHNOYA1 FSi0Q954DpIqz3Fs4DGE7Vwz0g5+o5qup8cnw9oLXBdqZwWANuLsQlHlioPbcDhl r1DmwtghmDYFUeSMnzHu/iuRepEju+BRMS3ybCm5j+I3kyvAV8pyvqNNRLfJn+w+ Wl/lwXTtXbgsNPR7WJCBJffxB0gOGZaIG1blSGCY89t2if0vD95R5sRsrnaxuqWc qmtRdBfbmjxk/G+6t1sd4wFglTNovHiLIHXh17cwdIWMB35yFs7VA35833/rF4Oo jOF1D12o58uAewxAsK+cTixe7I9U5Awkad2Jz19V3qHnRWGqtVg\x8e1h -----END PGP SIGNATURE-----
. Description:
Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.
All OpenShift Container Platform 4.6 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift Console or the CLI oc command. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.6/updating/updating-cluster-between-minor.html#understanding-upgrade-channels_updating-cluster-between-minor
- Solution:
For OpenShift Container Platform 4.6 see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this asynchronous errata update:
https://docs.openshift.com/container-platform/4.6/release_notes/ocp-4-6-release-notes.html
Details on how to access this content are available at https://docs.openshift.com/container-platform/4.6/updating/updating-cluster-cli.html
- Bugs fixed (https://bugzilla.redhat.com/):
1813344 - CVE-2020-7598 nodejs-minimist: prototype pollution allows adding or modifying properties of Object.prototype using a constructor or proto payload 1979134 - Placeholder bug for OCP 4.6.0 extras release
- -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
===================================================================== Red Hat Security Advisory
Synopsis: Important: libwebp security update Advisory ID: RHSA-2021:2260-01 Product: Red Hat Enterprise Linux Advisory URL: https://access.redhat.com/errata/RHSA-2021:2260 Issue date: 2021-06-07 CVE Names: CVE-2018-25011 CVE-2020-36328 CVE-2020-36329 =====================================================================
- Summary:
An update for libwebp is now available for Red Hat Enterprise Linux 7.
Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Relevant releases/architectures:
Red Hat Enterprise Linux Client (v. 7) - x86_64 Red Hat Enterprise Linux Client Optional (v. 7) - x86_64 Red Hat Enterprise Linux ComputeNode (v. 7) - x86_64 Red Hat Enterprise Linux ComputeNode Optional (v. 7) - x86_64 Red Hat Enterprise Linux Server (v. 7) - ppc64, ppc64le, s390x, x86_64 Red Hat Enterprise Linux Server Optional (v. 7) - ppc64, ppc64le, s390x, x86_64 Red Hat Enterprise Linux Workstation (v. 7) - x86_64 Red Hat Enterprise Linux Workstation Optional (v. 7) - x86_64
- Description:
The libwebp packages provide a library and tools for the WebP graphics format. WebP is an image format with a lossy compression of digital photographic images. WebP consists of a codec based on the VP8 format, and a container based on the Resource Interchange File Format (RIFF). Webmasters, web developers and browser developers can use WebP to compress, archive, and distribute digital images more efficiently.
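The RIFF container mentioned above has a simple fixed layout at the start of every WebP file: the ASCII tag 'RIFF', a 4-byte little-endian chunk size, then the form type 'WEBP'. A minimal sketch of validating that header (field offsets follow the RIFF convention; the function name is ours, not part of libwebp's API):

```python
import struct

def is_webp_header(data: bytes) -> bool:
    """Check the 12-byte RIFF/WEBP preamble of a WebP file."""
    if len(data) < 12:
        return False
    riff, size, form = struct.unpack("<4sI4s", data[:12])
    # The declared size counts everything after the first 8 bytes, so for a
    # well-formed file it equals len(data) - 8; here we only check the tags.
    return riff == b"RIFF" and form == b"WEBP"
```

For example, `is_webp_header(b"RIFF\x04\x00\x00\x00WEBP")` returns True, while a WAVE file (also RIFF-based) is rejected by the form-type check.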
- Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/11258
- Package List:
Red Hat Enterprise Linux Client (v. 7):
Source: libwebp-0.3.0-10.el7_9.src.rpm
x86_64: libwebp-0.3.0-10.el7_9.i686.rpm libwebp-0.3.0-10.el7_9.x86_64.rpm libwebp-debuginfo-0.3.0-10.el7_9.i686.rpm libwebp-debuginfo-0.3.0-10.el7_9.x86_64.rpm
Red Hat Enterprise Linux Client Optional (v. 7):
x86_64: libwebp-debuginfo-0.3.0-10.el7_9.i686.rpm libwebp-debuginfo-0.3.0-10.el7_9.x86_64.rpm libwebp-devel-0.3.0-10.el7_9.i686.rpm libwebp-devel-0.3.0-10.el7_9.x86_64.rpm libwebp-java-0.3.0-10.el7_9.x86_64.rpm libwebp-tools-0.3.0-10.el7_9.x86_64.rpm
Red Hat Enterprise Linux ComputeNode (v. 7):
Source: libwebp-0.3.0-10.el7_9.src.rpm
x86_64: libwebp-0.3.0-10.el7_9.i686.rpm libwebp-0.3.0-10.el7_9.x86_64.rpm libwebp-debuginfo-0.3.0-10.el7_9.i686.rpm libwebp-debuginfo-0.3.0-10.el7_9.x86_64.rpm
Red Hat Enterprise Linux ComputeNode Optional (v. 7):
x86_64: libwebp-debuginfo-0.3.0-10.el7_9.i686.rpm libwebp-debuginfo-0.3.0-10.el7_9.x86_64.rpm libwebp-devel-0.3.0-10.el7_9.i686.rpm libwebp-devel-0.3.0-10.el7_9.x86_64.rpm libwebp-java-0.3.0-10.el7_9.x86_64.rpm libwebp-tools-0.3.0-10.el7_9.x86_64.rpm
Red Hat Enterprise Linux Server (v. 7):
Source: libwebp-0.3.0-10.el7_9.src.rpm
ppc64: libwebp-0.3.0-10.el7_9.ppc.rpm libwebp-0.3.0-10.el7_9.ppc64.rpm libwebp-debuginfo-0.3.0-10.el7_9.ppc.rpm libwebp-debuginfo-0.3.0-10.el7_9.ppc64.rpm
ppc64le: libwebp-0.3.0-10.el7_9.ppc64le.rpm libwebp-debuginfo-0.3.0-10.el7_9.ppc64le.rpm
s390x: libwebp-0.3.0-10.el7_9.s390.rpm libwebp-0.3.0-10.el7_9.s390x.rpm libwebp-debuginfo-0.3.0-10.el7_9.s390.rpm libwebp-debuginfo-0.3.0-10.el7_9.s390x.rpm
x86_64: libwebp-0.3.0-10.el7_9.i686.rpm libwebp-0.3.0-10.el7_9.x86_64.rpm libwebp-debuginfo-0.3.0-10.el7_9.i686.rpm libwebp-debuginfo-0.3.0-10.el7_9.x86_64.rpm
Red Hat Enterprise Linux Server Optional (v. 7):
ppc64: libwebp-debuginfo-0.3.0-10.el7_9.ppc.rpm libwebp-debuginfo-0.3.0-10.el7_9.ppc64.rpm libwebp-devel-0.3.0-10.el7_9.ppc.rpm libwebp-devel-0.3.0-10.el7_9.ppc64.rpm libwebp-java-0.3.0-10.el7_9.ppc64.rpm libwebp-tools-0.3.0-10.el7_9.ppc64.rpm
ppc64le: libwebp-debuginfo-0.3.0-10.el7_9.ppc64le.rpm libwebp-devel-0.3.0-10.el7_9.ppc64le.rpm libwebp-java-0.3.0-10.el7_9.ppc64le.rpm libwebp-tools-0.3.0-10.el7_9.ppc64le.rpm
s390x: libwebp-debuginfo-0.3.0-10.el7_9.s390.rpm libwebp-debuginfo-0.3.0-10.el7_9.s390x.rpm libwebp-devel-0.3.0-10.el7_9.s390.rpm libwebp-devel-0.3.0-10.el7_9.s390x.rpm libwebp-java-0.3.0-10.el7_9.s390x.rpm libwebp-tools-0.3.0-10.el7_9.s390x.rpm
x86_64: libwebp-debuginfo-0.3.0-10.el7_9.i686.rpm libwebp-debuginfo-0.3.0-10.el7_9.x86_64.rpm libwebp-devel-0.3.0-10.el7_9.i686.rpm libwebp-devel-0.3.0-10.el7_9.x86_64.rpm libwebp-java-0.3.0-10.el7_9.x86_64.rpm libwebp-tools-0.3.0-10.el7_9.x86_64.rpm
Red Hat Enterprise Linux Workstation (v. 7):
Source: libwebp-0.3.0-10.el7_9.src.rpm
x86_64: libwebp-0.3.0-10.el7_9.i686.rpm libwebp-0.3.0-10.el7_9.x86_64.rpm libwebp-debuginfo-0.3.0-10.el7_9.i686.rpm libwebp-debuginfo-0.3.0-10.el7_9.x86_64.rpm
Red Hat Enterprise Linux Workstation Optional (v. 7):
x86_64: libwebp-debuginfo-0.3.0-10.el7_9.i686.rpm libwebp-debuginfo-0.3.0-10.el7_9.x86_64.rpm libwebp-devel-0.3.0-10.el7_9.i686.rpm libwebp-devel-0.3.0-10.el7_9.x86_64.rpm libwebp-java-0.3.0-10.el7_9.x86_64.rpm libwebp-tools-0.3.0-10.el7_9.x86_64.rpm
These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
- References:
https://access.redhat.com/security/cve/CVE-2018-25011 https://access.redhat.com/security/cve/CVE-2020-36328 https://access.redhat.com/security/cve/CVE-2020-36329 https://access.redhat.com/security/updates/classification/#important
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2021 Red Hat, Inc. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1
iQIVAwUBYL4OxtzjgjWX9erEAQi1Yw//ZajpWKH7bKTBXifw2DXrc61fOReKCwR9 sQ/djSkMMo+hwhFNtqq9zHDmI81tuOzBRgzA0FzA6qeNZGzsJmNX/RrNgnep9um7 X08Dvb6+5VuHWBrrBv26wV5wGq/t2VKgGXSoJi6CDDDRlLn/RiAJzuZqhdhp3Ijn xBHIDIEYoNTYoDvbvZUVhY1kRKJ2sr3UxjcWPqDCNZdu51Z8ssW5up/Uh3NaY8yv iB7PIoIHrtBD0nGQcy5h4qE47wFbe9RdLTOaqGDAGaOrHWWT56eC72YnCYKMxO4K 8X9EXjhEmmH4a4Pl4dND7D1wiiOQe5kSA8IhYdgHVZQyo9WBJTD6g6C5IERwwjat s3Z7vhzA+/cLEo8+Jc5orRGoLArU5rOl4uqh64AEPaON9UB8bMOnqm24y+Ebyi0B S+zZ2kQ1FGeQIMnrjAer3OUcVnf26e6qNWBK+HCjdfmbhgtZxTtXyOKcM4lSFVcm LY8pLMWzZpcSCpYh15YtRRCWr4bJyX1UD8V3l2Zzek9zmFq5ogVX78KBYV3c4oWn ReVMDEpXb3bYoV/EsMk7WOaDBKM1eU2OjVp2e7r2Fnt8GESxSpZ1pKegkxXdPnmX EmPhXKZNnwh4Z4Aw2AYIsQVo9QTyvCnZjfjAy9WfIqbyg8OTGJOeQqQLlKsq6ddb YXjUcIgJv2g= =kWSg -----END PGP SIGNATURE-----
-- RHSA-announce mailing list RHSA-announce@redhat.com https://listman.redhat.com/mailman/listinfo/rhsa-announce . 7) - noarch
- Description:
The Qt Image Formats is an add-on module for the core Qt Gui library that provides support for additional image formats including MNG, TGA, TIFF, WBMP, and WebP. 8) - aarch64, ppc64le, s390x, x86_64
3
Show details on source website{ "@context": { "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#", "affected_products": { "@id": "https://www.variotdbs.pl/ref/affected_products" }, "configurations": { "@id": "https://www.variotdbs.pl/ref/configurations" }, "credits": { "@id": "https://www.variotdbs.pl/ref/credits" }, "cvss": { "@id": "https://www.variotdbs.pl/ref/cvss/" }, "description": { "@id": "https://www.variotdbs.pl/ref/description/" }, "exploit_availability": { "@id": "https://www.variotdbs.pl/ref/exploit_availability/" }, "external_ids": { "@id": "https://www.variotdbs.pl/ref/external_ids/" }, "iot": { "@id": "https://www.variotdbs.pl/ref/iot/" }, "iot_taxonomy": { "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/" }, "patch": { "@id": "https://www.variotdbs.pl/ref/patch/" }, "problemtype_data": { "@id": "https://www.variotdbs.pl/ref/problemtype_data/" }, "references": { "@id": "https://www.variotdbs.pl/ref/references/" }, "sources": { "@id": "https://www.variotdbs.pl/ref/sources/" }, "sources_release_date": { "@id": "https://www.variotdbs.pl/ref/sources_release_date/" }, "sources_update_date": { "@id": "https://www.variotdbs.pl/ref/sources_update_date/" }, "threat_type": { "@id": "https://www.variotdbs.pl/ref/threat_type/" }, "title": { "@id": "https://www.variotdbs.pl/ref/title/" }, "type": { "@id": "https://www.variotdbs.pl/ref/type/" } }, "@id": "https://www.variotdbs.pl/vuln/VAR-202105-1461", "affected_products": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/affected_products#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "model": "enterprise linux", "scope": "eq", "trust": 1.0, "vendor": "redhat", "version": "8.0" }, { "model": "libwebp", "scope": "lt", "trust": 1.0, "vendor": "webmproject", "version": "1.0.1" }, { "model": "ontap select deploy administration utility", "scope": "eq", 
"trust": 1.0, "vendor": "netapp", "version": null }, { "model": "iphone os", "scope": "eq", "trust": 1.0, "vendor": "apple", "version": "14.7" }, { "model": "linux", "scope": "eq", "trust": 1.0, "vendor": "debian", "version": "10.0" }, { "model": "linux", "scope": "eq", "trust": 1.0, "vendor": "debian", "version": "9.0" }, { "model": "enterprise linux", "scope": "eq", "trust": 1.0, "vendor": "redhat", "version": "7.0" }, { "model": "ipados", "scope": "eq", "trust": 1.0, "vendor": "apple", "version": "14.7" }, { "model": "libwebp", "scope": null, "trust": 0.8, "vendor": "the webm", "version": null }, { "model": "gnu/linux", "scope": null, "trust": 0.8, "vendor": "debian", "version": null }, { "model": "ontap select deploy administration utility", "scope": null, "trust": 0.8, "vendor": "netapp", "version": null }, { "model": "red hat enterprise linux", "scope": null, "trust": 0.8, "vendor": "\u30ec\u30c3\u30c9\u30cf\u30c3\u30c8", "version": null }, { "model": "ipados", "scope": null, "trust": 0.8, "vendor": "\u30a2\u30c3\u30d7\u30eb", "version": null }, { "model": "ios", "scope": null, "trust": 0.8, "vendor": "\u30a2\u30c3\u30d7\u30eb", "version": null } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2018-016582" }, { "db": "NVD", "id": "CVE-2020-36328" } ] }, "configurations": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/configurations#", "children": { "@container": "@list" }, "cpe_match": { "@container": "@list" }, "data": { "@container": "@list" }, "nodes": { "@container": "@list" } }, "data": [ { "CVE_data_version": "4.0", "nodes": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:webmproject:libwebp:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "1.0.1", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux:7.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux:8.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": 
true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:netapp:ontap_select_deploy_administration_utility:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:debian:debian_linux:9.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:debian:debian_linux:10.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:apple:ipados:14.7:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:apple:iphone_os:14.7:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" } ] } ], "sources": [ { "db": "NVD", "id": "CVE-2020-36328" } ] }, "credits": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/credits#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Red Hat", "sources": [ { "db": "PACKETSTORM", "id": "163504" }, { "db": "PACKETSTORM", "id": "162998" }, { "db": "PACKETSTORM", "id": "163028" }, { "db": "PACKETSTORM", "id": "163029" }, { "db": "PACKETSTORM", "id": "163058" }, { "db": "PACKETSTORM", "id": "163061" } ], "trust": 0.6 }, "cve": "CVE-2020-36328", "cvss": { "@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2" }, "cvssV3": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, "severity": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": "https://www.variotdbs.pl/ref/cvss/severity" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "cvssV2": [ { "acInsufInfo": 
false, "accessComplexity": "LOW", "accessVector": "NETWORK", "authentication": "NONE", "author": "NVD", "availabilityImpact": "PARTIAL", "baseScore": 7.5, "confidentialityImpact": "PARTIAL", "exploitabilityScore": 10.0, "impactScore": 6.4, "integrityImpact": "PARTIAL", "obtainAllPrivilege": false, "obtainOtherPrivilege": false, "obtainUserPrivilege": false, "severity": "HIGH", "trust": 1.0, "userInteractionRequired": false, "vectorString": "AV:N/AC:L/Au:N/C:P/I:P/A:P", "version": "2.0" }, { "acInsufInfo": null, "accessComplexity": "Low", "accessVector": "Network", "authentication": "None", "author": "NVD", "availabilityImpact": "Partial", "baseScore": 7.5, "confidentialityImpact": "Partial", "exploitabilityScore": null, "id": "CVE-2020-36328", "impactScore": null, "integrityImpact": "Partial", "obtainAllPrivilege": null, "obtainOtherPrivilege": null, "obtainUserPrivilege": null, "severity": "High", "trust": 0.9, "userInteractionRequired": null, "vectorString": "AV:N/AC:L/Au:N/C:P/I:P/A:P", "version": "2.0" }, { "accessComplexity": "LOW", "accessVector": "NETWORK", "authentication": "NONE", "author": "VULHUB", "availabilityImpact": "PARTIAL", "baseScore": 7.5, "confidentialityImpact": "PARTIAL", "exploitabilityScore": 10.0, "id": "VHN-391907", "impactScore": 6.4, "integrityImpact": "PARTIAL", "severity": "HIGH", "trust": 0.1, "vectorString": "AV:N/AC:L/AU:N/C:P/I:P/A:P", "version": "2.0" } ], "cvssV3": [ { "attackComplexity": "LOW", "attackVector": "NETWORK", "author": "NVD", "availabilityImpact": "HIGH", "baseScore": 9.8, "baseSeverity": "CRITICAL", "confidentialityImpact": "HIGH", "exploitabilityScore": 3.9, "impactScore": 5.9, "integrityImpact": "HIGH", "privilegesRequired": "NONE", "scope": "UNCHANGED", "trust": 1.0, "userInteraction": "NONE", "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H", "version": "3.1" }, { "attackComplexity": "Low", "attackVector": "Network", "author": "NVD", "availabilityImpact": "High", "baseScore": 9.8, "baseSeverity": 
"Critical", "confidentialityImpact": "High", "exploitabilityScore": null, "id": "CVE-2020-36328", "impactScore": null, "integrityImpact": "High", "privilegesRequired": "None", "scope": "Unchanged", "trust": 0.8, "userInteraction": "None", "vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H", "version": "3.0" } ], "severity": [ { "author": "NVD", "id": "CVE-2020-36328", "trust": 1.8, "value": "CRITICAL" }, { "author": "CNNVD", "id": "CNNVD-202105-1380", "trust": 0.6, "value": "CRITICAL" }, { "author": "VULHUB", "id": "VHN-391907", "trust": 0.1, "value": "HIGH" }, { "author": "VULMON", "id": "CVE-2020-36328", "trust": 0.1, "value": "HIGH" } ] } ], "sources": [ { "db": "VULHUB", "id": "VHN-391907" }, { "db": "VULMON", "id": "CVE-2020-36328" }, { "db": "JVNDB", "id": "JVNDB-2018-016582" }, { "db": "CNNVD", "id": "CNNVD-202105-1380" }, { "db": "NVD", "id": "CVE-2020-36328" } ] }, "description": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "A flaw was found in libwebp in versions before 1.0.1. A heap-based buffer overflow in function WebPDecodeRGBInto is possible due to an invalid check for buffer size. The highest threat from this vulnerability is to data confidentiality and integrity as well as system availability. libwebp is vulnerable to an out-of-bounds write. Information may be obtained, information may be tampered with, and a denial-of-service (DoS) condition may result. Versions of libwebp prior to 1.0.1 have security vulnerabilities. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\nAPPLE-SA-2021-07-21-1 iOS 14.7 and iPadOS 14.7\n\niOS 14.7 and iPadOS 14.7 addresses the following issues. \nInformation about the security content is also available at\nhttps://support.apple.com/HT212601. 
\n\niOS 14.7 released July 19, 2021; iPadOS 14.7 released July 21, 2021\n\nActionKit\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A shortcut may be able to bypass Internet permission\nrequirements\nDescription: An input validation issue was addressed with improved\ninput validation. \nCVE-2021-30763: Zachary Keffaber (@QuickUpdate5)\n\nAudio\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A local attacker may be able to cause unexpected application\ntermination or arbitrary code execution\nDescription: This issue was addressed with improved checks. \nCVE-2021-30781: tr3e\n\nAVEVideoEncoder\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: An application may be able to execute arbitrary code with\nkernel privileges\nDescription: A memory corruption issue was addressed with improved\nstate management. \nCVE-2021-30748: George Nosenko\n\nCoreAudio\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing a maliciously crafted audio file may lead to\narbitrary code execution\nDescription: A memory corruption issue was addressed with improved\nstate management. 
\nCVE-2021-30775: JunDong Xie of Ant Security Light-Year Lab\n\nCoreAudio\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Playing a malicious audio file may lead to an unexpected\napplication termination\nDescription: A logic issue was addressed with improved validation. \nCVE-2021-30776: JunDong Xie of Ant Security Light-Year Lab\n\nCoreGraphics\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Opening a maliciously crafted PDF file may lead to an\nunexpected application termination or arbitrary code execution\nDescription: A race condition was addressed with improved state\nhandling. \nCVE-2021-30786: ryuzaki\n\nCoreText\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing a maliciously crafted font file may lead to\narbitrary code execution\nDescription: An out-of-bounds read was addressed with improved input\nvalidation. \nCVE-2021-30789: Mickey Jin (@patch1t) of Trend Micro, Sunglin of\nKnownsec 404 team\n\nCrash Reporter\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A malicious application may be able to gain root privileges\nDescription: A logic issue was addressed with improved validation. 
\nCVE-2021-30774: Yizhuo Wang of Group of Software Security In\nProgress (G.O.S.S.I.P) at Shanghai Jiao Tong University\n\nCVMS\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A malicious application may be able to gain root privileges\nDescription: An out-of-bounds write issue was addressed with improved\nbounds checking. \nCVE-2021-30780: Tim Michaud(@TimGMichaud) of Zoom Video\nCommunications\n\ndyld\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A sandboxed process may be able to circumvent sandbox\nrestrictions\nDescription: A logic issue was addressed with improved validation. \nCVE-2021-30768: Linus Henze (pinauten.de)\n\nFind My\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A malicious application may be able to access Find My data\nDescription: A permissions issue was addressed with improved\nvalidation. \nCVE-2021-30804: Csaba Fitzl (@theevilbit) of Offensive Security\n\nFontParser\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing a maliciously crafted font file may lead to\narbitrary code execution\nDescription: An integer overflow was addressed through improved input\nvalidation. 
\nCVE-2021-30760: Sunglin of Knownsec 404 team\n\nFontParser\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing a maliciously crafted tiff file may lead to a\ndenial-of-service or potentially disclose memory contents\nDescription: This issue was addressed with improved checks. \nCVE-2021-30788: tr3e working with Trend Micro Zero Day Initiative\n\nFontParser\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing a maliciously crafted font file may lead to\narbitrary code execution\nDescription: A stack overflow was addressed with improved input\nvalidation. \nCVE-2021-30759: hjy79425575 working with Trend Micro Zero Day\nInitiative\n\nIdentity Service\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A malicious application may be able to bypass code signing\nchecks\nDescription: An issue in code signature validation was addressed with\nimproved checks. \nCVE-2021-30773: Linus Henze (pinauten.de)\n\nImage Processing\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: A use after free issue was addressed with improved\nmemory management. 
\nCVE-2021-30802: Matthew Denton of Google Chrome Security\n\nImageIO\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing a maliciously crafted image may lead to arbitrary\ncode execution\nDescription: This issue was addressed with improved checks. \nCVE-2021-30779: Jzhu, Ye Zhang(@co0py_Cat) of Baidu Security\n\nImageIO\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing a maliciously crafted image may lead to arbitrary\ncode execution\nDescription: A buffer overflow was addressed with improved bounds\nchecking. \nCVE-2021-30785: CFF of Topsec Alpha Team, Mickey Jin (@patch1t) of\nTrend Micro\n\nKernel\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A malicious attacker with arbitrary read and write capability\nmay be able to bypass Pointer Authentication\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2021-30769: Linus Henze (pinauten.de)\n\nKernel\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: An attacker that has already achieved kernel code execution\nmay be able to bypass kernel memory mitigations\nDescription: A logic issue was addressed with improved validation. 
\nCVE-2021-30770: Linus Henze (pinauten.de)\n\nlibxml2\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A remote attacker may be able to cause arbitrary code\nexecution\nDescription: This issue was addressed with improved checks. \nCVE-2021-3518\n\nMeasure\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Multiple issues in libwebp\nDescription: Multiple issues were addressed by updating to version\n1.2.0. \nCVE-2018-25010\nCVE-2018-25011\nCVE-2018-25014\nCVE-2020-36328\nCVE-2020-36329\nCVE-2020-36330\nCVE-2020-36331\n\nModel I/O\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing a maliciously crafted image may lead to a denial\nof service\nDescription: A logic issue was addressed with improved validation. \nCVE-2021-30796: Mickey Jin (@patch1t) of Trend Micro\n\nModel I/O\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing a maliciously crafted image may lead to arbitrary\ncode execution\nDescription: An out-of-bounds write was addressed with improved input\nvalidation. \nCVE-2021-30792: Anonymous working with Trend Micro Zero Day\nInitiative\n\nModel I/O\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing a maliciously crafted file may disclose user\ninformation\nDescription: An out-of-bounds read was addressed with improved bounds\nchecking. 
\nCVE-2021-30791: Anonymous working with Trend Micro Zero Day\nInitiative\n\nTCC\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A malicious application may be able to bypass certain Privacy\npreferences\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2021-30798: Mickey Jin (@patch1t) of Trend Micro\n\nWebKit\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: A type confusion issue was addressed with improved state\nhandling. \nCVE-2021-30758: Christoph Guttandin of Media Codings\n\nWebKit\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: A use after free issue was addressed with improved\nmemory management. \nCVE-2021-30795: Sergei Glazunov of Google Project Zero\n\nWebKit\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing maliciously crafted web content may lead to code\nexecution\nDescription: This issue was addressed with improved checks. 
\nCVE-2021-30797: Ivan Fratric of Google Project Zero\n\nWebKit\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: Multiple memory corruption issues were addressed with\nimproved memory handling. \nCVE-2021-30799: Sergei Glazunov of Google Project Zero\n\nWi-Fi\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Joining a malicious Wi-Fi network may result in a denial of\nservice or arbitrary code execution\nDescription: This issue was addressed with improved checks. \nCVE-2021-30800: vm_call, Nozhdar Abdulkhaleq Shukri\n\nAdditional recognition\n\nAssets\nWe would like to acknowledge Cees Elzinga for their assistance. \n\nCoreText\nWe would like to acknowledge Mickey Jin (@patch1t) of Trend Micro for\ntheir assistance. \n\nSafari\nWe would like to acknowledge an anonymous researcher for their\nassistance. \n\nSandbox\nWe would like to acknowledge Csaba Fitzl (@theevilbit) of Offensive\nSecurity for their assistance. \n\nInstallation note:\n\nThis update is available through iTunes and Software Update on your\niOS device, and will not appear in your computer\u0027s Software Update\napplication, or in the Apple Downloads site. Make sure you have an\nInternet connection and have installed the latest version of iTunes\nfrom https://www.apple.com/itunes/\n\niTunes and Software Update on the device will automatically check\nApple\u0027s update server on its weekly schedule. When an update is\ndetected, it is downloaded and the option to be installed is\npresented to the user when the iOS device is docked. We recommend\napplying the update immediately if possible. 
Selecting Don\u0027t Install\nwill present the option the next time you connect your iOS device. \nThe automatic update process may take up to a week depending on the\nday that iTunes or the device checks for updates. You may manually\nobtain the update via the Check for Updates button within iTunes, or\nthe Software Update on your device. \n\nTo check that the iPhone, iPod touch, or iPad has been updated:\n* Navigate to Settings\n* Select General\n* Select About\n* The version after applying this update will be \"14.7\"\n\nInformation will also be posted to the Apple Security Updates\nweb site: https://support.apple.com/kb/HT201222\n\nThis message is signed with Apple\u0027s Product Security PGP key,\nand details are available at:\nhttps://www.apple.com/support/security/pgp/\n\n-----BEGIN PGP SIGNATURE-----\n\niQIzBAEBCAAdFiEEbURczHs1TP07VIfuZcsbuWJ6jjAFAmD4r8YACgkQZcsbuWJ6\njjB5LBAAkEy25fNpo8rg42bsyJwWsSQQxPN79JFxQ6L8tqdsM+MZk86dUKtsRQ47\nmxarMf4uBwiIOtrGSCGHLIxXAzLqPY47NDhO+ls0dVxGMETkoR/287AeLnw2ITh3\nDM0H/pco4hRhPh8neYTMjNPMAgkepx+r7IqbaHWapn42nRC4/2VkEtVGltVDLs3L\nK0UQP0cjy2w9KvRF33H3uKNCaCTJrVkDBLKWC7rPPpomwp3bfmbQHjs0ixV5Y8l5\n3MfNmCuhIt34zAjVELvbE/PUXgkmsECbXHNZOct7ZLAbceneVKtSmynDtoEN0ajM\nJiJ6j+FCtdfB3xHk3cHqB6sQZm7fDxdK3z91MZvSZwwmdhJeHD/TxcItRlHNOYA1\nFSi0Q954DpIqz3Fs4DGE7Vwz0g5+o5qup8cnw9oLXBdqZwWANuLsQlHlioPbcDhl\nr1DmwtghmDYFUeSMnzHu/iuRepEju+BRMS3ybCm5j+I3kyvAV8pyvqNNRLfJn+w+\nWl/lwXTtXbgsNPR7WJCBJffxB0gOGZaIG1blSGCY89t2if0vD95R5sRsrnaxuqWc\nqmtRdBfbmjxk/G+6t1sd4wFglTNovHiLIHXh17cwdIWMB35yFs7VA35833/rF4Oo\njOF1D12o58uAewxAsK+cTixe7I9U5Awkad2Jz19V3qHnRWGqtVg\\x8e1h\n-----END PGP SIGNATURE-----\n\n\n. Description:\n\nRed Hat OpenShift Container Platform is Red Hat\u0027s cloud computing\nKubernetes application platform solution designed for on-premise or private\ncloud deployments. \n\nAll OpenShift Container Platform 4.6 users are advised to upgrade to these\nupdated packages and images when they are available in the appropriate\nrelease channel. 
To check for available updates, use the OpenShift Console\nor the CLI oc command. Instructions for upgrading a cluster are available\nat\nhttps://docs.openshift.com/container-platform/4.6/updating/updating-cluster\n- -between-minor.html#understanding-upgrade-channels_updating-cluster-between\n- -minor\n\n3. Solution:\n\nFor OpenShift Container Platform 4.6 see the following documentation, which\nwill be updated shortly for this release, for important instructions on how\nto upgrade your cluster and fully apply this asynchronous errata update:\n\nhttps://docs.openshift.com/container-platform/4.6/release_notes/ocp-4-6-rel\nease-notes.html\n\nDetails on how to access this content are available at\nhttps://docs.openshift.com/container-platform/4.6/updating/updating-cluster\n- -cli.html\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n1813344 - CVE-2020-7598 nodejs-minimist: prototype pollution allows adding or modifying properties of Object.prototype using a constructor or __proto__ payload\n1979134 - Placeholder bug for OCP 4.6.0 extras release\n\n5. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n=====================================================================\n Red Hat Security Advisory\n\nSynopsis: Important: libwebp security update\nAdvisory ID: RHSA-2021:2260-01\nProduct: Red Hat Enterprise Linux\nAdvisory URL: https://access.redhat.com/errata/RHSA-2021:2260\nIssue date: 2021-06-07\nCVE Names: CVE-2018-25011 CVE-2020-36328 CVE-2020-36329 \n=====================================================================\n\n1. Summary:\n\nAn update for libwebp is now available for Red Hat Enterprise Linux 7. \n\nRed Hat Product Security has rated this update as having a security impact\nof Important. A Common Vulnerability Scoring System (CVSS) base score,\nwhich gives a detailed severity rating, is available for each vulnerability\nfrom the CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRed Hat Enterprise Linux Client (v. 
7) - x86_64\nRed Hat Enterprise Linux Client Optional (v. 7) - x86_64\nRed Hat Enterprise Linux ComputeNode (v. 7) - x86_64\nRed Hat Enterprise Linux ComputeNode Optional (v. 7) - x86_64\nRed Hat Enterprise Linux Server (v. 7) - ppc64, ppc64le, s390x, x86_64\nRed Hat Enterprise Linux Server Optional (v. 7) - ppc64, ppc64le, s390x, x86_64\nRed Hat Enterprise Linux Workstation (v. 7) - x86_64\nRed Hat Enterprise Linux Workstation Optional (v. 7) - x86_64\n\n3. Description:\n\nThe libwebp packages provide a library and tools for the WebP graphics\nformat. WebP is an image format with a lossy compression of digital\nphotographic images. WebP consists of a codec based on the VP8 format, and\na container based on the Resource Interchange File Format (RIFF). \nWebmasters, web developers and browser developers can use WebP to compress,\narchive, and distribute digital images more efficiently. \n\n4. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n5. Package List:\n\nRed Hat Enterprise Linux Client (v. 7):\n\nSource:\nlibwebp-0.3.0-10.el7_9.src.rpm\n\nx86_64:\nlibwebp-0.3.0-10.el7_9.i686.rpm\nlibwebp-0.3.0-10.el7_9.x86_64.rpm\nlibwebp-debuginfo-0.3.0-10.el7_9.i686.rpm\nlibwebp-debuginfo-0.3.0-10.el7_9.x86_64.rpm\n\nRed Hat Enterprise Linux Client Optional (v. 7):\n\nx86_64:\nlibwebp-debuginfo-0.3.0-10.el7_9.i686.rpm\nlibwebp-debuginfo-0.3.0-10.el7_9.x86_64.rpm\nlibwebp-devel-0.3.0-10.el7_9.i686.rpm\nlibwebp-devel-0.3.0-10.el7_9.x86_64.rpm\nlibwebp-java-0.3.0-10.el7_9.x86_64.rpm\nlibwebp-tools-0.3.0-10.el7_9.x86_64.rpm\n\nRed Hat Enterprise Linux ComputeNode (v. 7):\n\nSource:\nlibwebp-0.3.0-10.el7_9.src.rpm\n\nx86_64:\nlibwebp-0.3.0-10.el7_9.i686.rpm\nlibwebp-0.3.0-10.el7_9.x86_64.rpm\nlibwebp-debuginfo-0.3.0-10.el7_9.i686.rpm\nlibwebp-debuginfo-0.3.0-10.el7_9.x86_64.rpm\n\nRed Hat Enterprise Linux ComputeNode Optional (v. 
7):\n\nx86_64:\nlibwebp-debuginfo-0.3.0-10.el7_9.i686.rpm\nlibwebp-debuginfo-0.3.0-10.el7_9.x86_64.rpm\nlibwebp-devel-0.3.0-10.el7_9.i686.rpm\nlibwebp-devel-0.3.0-10.el7_9.x86_64.rpm\nlibwebp-java-0.3.0-10.el7_9.x86_64.rpm\nlibwebp-tools-0.3.0-10.el7_9.x86_64.rpm\n\nRed Hat Enterprise Linux Server (v. 7):\n\nSource:\nlibwebp-0.3.0-10.el7_9.src.rpm\n\nppc64:\nlibwebp-0.3.0-10.el7_9.ppc.rpm\nlibwebp-0.3.0-10.el7_9.ppc64.rpm\nlibwebp-debuginfo-0.3.0-10.el7_9.ppc.rpm\nlibwebp-debuginfo-0.3.0-10.el7_9.ppc64.rpm\n\nppc64le:\nlibwebp-0.3.0-10.el7_9.ppc64le.rpm\nlibwebp-debuginfo-0.3.0-10.el7_9.ppc64le.rpm\n\ns390x:\nlibwebp-0.3.0-10.el7_9.s390.rpm\nlibwebp-0.3.0-10.el7_9.s390x.rpm\nlibwebp-debuginfo-0.3.0-10.el7_9.s390.rpm\nlibwebp-debuginfo-0.3.0-10.el7_9.s390x.rpm\n\nx86_64:\nlibwebp-0.3.0-10.el7_9.i686.rpm\nlibwebp-0.3.0-10.el7_9.x86_64.rpm\nlibwebp-debuginfo-0.3.0-10.el7_9.i686.rpm\nlibwebp-debuginfo-0.3.0-10.el7_9.x86_64.rpm\n\nRed Hat Enterprise Linux Server Optional (v. 7):\n\nppc64:\nlibwebp-debuginfo-0.3.0-10.el7_9.ppc.rpm\nlibwebp-debuginfo-0.3.0-10.el7_9.ppc64.rpm\nlibwebp-devel-0.3.0-10.el7_9.ppc.rpm\nlibwebp-devel-0.3.0-10.el7_9.ppc64.rpm\nlibwebp-java-0.3.0-10.el7_9.ppc64.rpm\nlibwebp-tools-0.3.0-10.el7_9.ppc64.rpm\n\nppc64le:\nlibwebp-debuginfo-0.3.0-10.el7_9.ppc64le.rpm\nlibwebp-devel-0.3.0-10.el7_9.ppc64le.rpm\nlibwebp-java-0.3.0-10.el7_9.ppc64le.rpm\nlibwebp-tools-0.3.0-10.el7_9.ppc64le.rpm\n\ns390x:\nlibwebp-debuginfo-0.3.0-10.el7_9.s390.rpm\nlibwebp-debuginfo-0.3.0-10.el7_9.s390x.rpm\nlibwebp-devel-0.3.0-10.el7_9.s390.rpm\nlibwebp-devel-0.3.0-10.el7_9.s390x.rpm\nlibwebp-java-0.3.0-10.el7_9.s390x.rpm\nlibwebp-tools-0.3.0-10.el7_9.s390x.rpm\n\nx86_64:\nlibwebp-debuginfo-0.3.0-10.el7_9.i686.rpm\nlibwebp-debuginfo-0.3.0-10.el7_9.x86_64.rpm\nlibwebp-devel-0.3.0-10.el7_9.i686.rpm\nlibwebp-devel-0.3.0-10.el7_9.x86_64.rpm\nlibwebp-java-0.3.0-10.el7_9.x86_64.rpm\nlibwebp-tools-0.3.0-10.el7_9.x86_64.rpm\n\nRed Hat Enterprise Linux Workstation (v. 
7):\n\nSource:\nlibwebp-0.3.0-10.el7_9.src.rpm\n\nx86_64:\nlibwebp-0.3.0-10.el7_9.i686.rpm\nlibwebp-0.3.0-10.el7_9.x86_64.rpm\nlibwebp-debuginfo-0.3.0-10.el7_9.i686.rpm\nlibwebp-debuginfo-0.3.0-10.el7_9.x86_64.rpm\n\nRed Hat Enterprise Linux Workstation Optional (v. 7):\n\nx86_64:\nlibwebp-debuginfo-0.3.0-10.el7_9.i686.rpm\nlibwebp-debuginfo-0.3.0-10.el7_9.x86_64.rpm\nlibwebp-devel-0.3.0-10.el7_9.i686.rpm\nlibwebp-devel-0.3.0-10.el7_9.x86_64.rpm\nlibwebp-java-0.3.0-10.el7_9.x86_64.rpm\nlibwebp-tools-0.3.0-10.el7_9.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. References:\n\nhttps://access.redhat.com/security/cve/CVE-2018-25011\nhttps://access.redhat.com/security/cve/CVE-2020-36328\nhttps://access.redhat.com/security/cve/CVE-2020-36329\nhttps://access.redhat.com/security/updates/classification/#important\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2021 Red Hat, Inc. 
\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYL4OxtzjgjWX9erEAQi1Yw//ZajpWKH7bKTBXifw2DXrc61fOReKCwR9\nsQ/djSkMMo+hwhFNtqq9zHDmI81tuOzBRgzA0FzA6qeNZGzsJmNX/RrNgnep9um7\nX08Dvb6+5VuHWBrrBv26wV5wGq/t2VKgGXSoJi6CDDDRlLn/RiAJzuZqhdhp3Ijn\nxBHIDIEYoNTYoDvbvZUVhY1kRKJ2sr3UxjcWPqDCNZdu51Z8ssW5up/Uh3NaY8yv\niB7PIoIHrtBD0nGQcy5h4qE47wFbe9RdLTOaqGDAGaOrHWWT56eC72YnCYKMxO4K\n8X9EXjhEmmH4a4Pl4dND7D1wiiOQe5kSA8IhYdgHVZQyo9WBJTD6g6C5IERwwjat\ns3Z7vhzA+/cLEo8+Jc5orRGoLArU5rOl4uqh64AEPaON9UB8bMOnqm24y+Ebyi0B\nS+zZ2kQ1FGeQIMnrjAer3OUcVnf26e6qNWBK+HCjdfmbhgtZxTtXyOKcM4lSFVcm\nLY8pLMWzZpcSCpYh15YtRRCWr4bJyX1UD8V3l2Zzek9zmFq5ogVX78KBYV3c4oWn\nReVMDEpXb3bYoV/EsMk7WOaDBKM1eU2OjVp2e7r2Fnt8GESxSpZ1pKegkxXdPnmX\nEmPhXKZNnwh4Z4Aw2AYIsQVo9QTyvCnZjfjAy9WfIqbyg8OTGJOeQqQLlKsq6ddb\nYXjUcIgJv2g=\n=kWSg\n-----END PGP SIGNATURE-----\n\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. 7) - noarch\n\n3. Description:\n\nThe Qt Image Formats in an add-on module for the core Qt Gui library that\nprovides support for additional image formats including MNG, TGA, TIFF,\nWBMP, and WebP. 
8) - aarch64, ppc64le, s390x, x86_64\n\n3", "sources": [ { "db": "NVD", "id": "CVE-2020-36328" }, { "db": "JVNDB", "id": "JVNDB-2018-016582" }, { "db": "VULHUB", "id": "VHN-391907" }, { "db": "VULMON", "id": "CVE-2020-36328" }, { "db": "PACKETSTORM", "id": "163645" }, { "db": "PACKETSTORM", "id": "163504" }, { "db": "PACKETSTORM", "id": "162998" }, { "db": "PACKETSTORM", "id": "163028" }, { "db": "PACKETSTORM", "id": "163029" }, { "db": "PACKETSTORM", "id": "163058" }, { "db": "PACKETSTORM", "id": "163061" } ], "trust": 2.43 }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "NVD", "id": "CVE-2020-36328", "trust": 4.1 }, { "db": "PACKETSTORM", "id": "163058", "trust": 0.8 }, { "db": "PACKETSTORM", "id": "163504", "trust": 0.8 }, { "db": "PACKETSTORM", "id": "163028", "trust": 0.8 }, { "db": "PACKETSTORM", "id": "162998", "trust": 0.8 }, { "db": "JVNDB", "id": "JVNDB-2018-016582", "trust": 0.8 }, { "db": "CNNVD", "id": "CNNVD-202105-1380", "trust": 0.7 }, { "db": "PACKETSTORM", "id": "163645", "trust": 0.7 }, { "db": "CS-HELP", "id": "SB2021090829", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2021072216", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2021061420", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2021060725", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2021060939", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2021071517", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.1965", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.2102", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.1880", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.1959", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.2485.2", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.2388", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.2036", "trust": 0.6 }, { "db": "AUSCERT", "id": 
"ESB-2021.2070", "trust": 0.6 }, { "db": "PACKETSTORM", "id": "163061", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "163029", "trust": 0.2 }, { "db": "VULHUB", "id": "VHN-391907", "trust": 0.1 }, { "db": "VULMON", "id": "CVE-2020-36328", "trust": 0.1 } ], "sources": [ { "db": "VULHUB", "id": "VHN-391907" }, { "db": "VULMON", "id": "CVE-2020-36328" }, { "db": "JVNDB", "id": "JVNDB-2018-016582" }, { "db": "PACKETSTORM", "id": "163645" }, { "db": "PACKETSTORM", "id": "163504" }, { "db": "PACKETSTORM", "id": "162998" }, { "db": "PACKETSTORM", "id": "163028" }, { "db": "PACKETSTORM", "id": "163029" }, { "db": "PACKETSTORM", "id": "163058" }, { "db": "PACKETSTORM", "id": "163061" }, { "db": "CNNVD", "id": "CNNVD-202105-1380" }, { "db": "NVD", "id": "CVE-2020-36328" } ] }, "id": "VAR-202105-1461", "iot": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": true, "sources": [ { "db": "VULHUB", "id": "VHN-391907" } ], "trust": 0.01 }, "last_update_date": "2024-07-23T19:28:54.681000Z", "patch": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/patch#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "title": "Bug\u00a01956829", "trust": 0.8, "url": "https://lists.debian.org/debian-lts-announce/2021/06/msg00005.html" }, { "title": "libwebp Buffer error vulnerability fix", "trust": 0.6, "url": "http://www.cnnvd.org.cn/web/xxk/bdxqbyid.tag?id=151879" }, { "title": "Debian Security Advisories: DSA-4930-1 libwebp -- security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=debian_security_advisories\u0026qid=6dad0021173658916444dfc89f8d2495" } ], "sources": [ { "db": "VULMON", "id": "CVE-2020-36328" }, { "db": "JVNDB", "id": "JVNDB-2018-016582" }, { "db": "CNNVD", "id": "CNNVD-202105-1380" } ] }, "problemtype_data": { 
"@context": { "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "problemtype": "CWE-787", "trust": 1.1 }, { "problemtype": "Out-of-bounds writing (CWE-787) [NVD Evaluation ]", "trust": 0.8 }, { "problemtype": "CWE-119", "trust": 0.1 } ], "sources": [ { "db": "VULHUB", "id": "VHN-391907" }, { "db": "JVNDB", "id": "JVNDB-2018-016582" }, { "db": "NVD", "id": "CVE-2020-36328" } ] }, "references": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/references#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 1.9, "url": "https://www.debian.org/security/2021/dsa-4930" }, { "trust": 1.8, "url": "https://support.apple.com/kb/ht212601" }, { "trust": 1.8, "url": "https://bugzilla.redhat.com/show_bug.cgi?id=1956829" }, { "trust": 1.8, "url": "https://lists.debian.org/debian-lts-announce/2021/06/msg00005.html" }, { "trust": 1.8, "url": "https://lists.debian.org/debian-lts-announce/2021/06/msg00006.html" }, { "trust": 1.7, "url": "https://security.netapp.com/advisory/ntap-20211112-0001/" }, { "trust": 1.7, "url": "http://seclists.org/fulldisclosure/2021/jul/54" }, { "trust": 1.5, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36328" }, { "trust": 0.7, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36329" }, { "trust": 0.7, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25011" }, { "trust": 0.6, "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2020-36329" }, { "trust": 0.6, "url": "https://access.redhat.com/security/team/contact/" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2020-36328" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2018-25011" }, { "trust": 0.6, "url": 
"https://bugzilla.redhat.com/):" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.1959" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/163028/red-hat-security-advisory-2021-2328-01.html" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2021060725" }, { "trust": 0.6, "url": "https://vigilance.fr/vulnerability/libwebp-five-vulnerabilities-35580" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.2485.2" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.1965" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/163504/red-hat-security-advisory-2021-2643-01.html" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2021072216" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/162998/red-hat-security-advisory-2021-2260-01.html" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/163058/red-hat-security-advisory-2021-2365-01.html" }, { "trust": 0.6, "url": "https://support.apple.com/en-us/ht212601" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2021060939" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.1880" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2021061420" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2021071517" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/163645/apple-security-advisory-2021-07-21-1.html" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.2036" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.2102" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.2388" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.2070" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2021090829" }, { "trust": 0.5, "url": "https://access.redhat.com/security/updates/classification/#important" }, { 
"trust": 0.5, "url": "https://access.redhat.com/articles/11258" }, { "trust": 0.5, "url": "https://access.redhat.com/security/team/key/" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25014" }, { "trust": 0.1, "url": "https://cwe.mitre.org/data/definitions/787.html" }, { "trust": 0.1, "url": "https://nvd.nist.gov" }, { "trust": 0.1, "url": "https://support.apple.com/ht212601." }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30768" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30781" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30788" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30773" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30776" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36331" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30780" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30759" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30789" }, { "trust": 0.1, "url": "https://www.apple.com/support/security/pgp/" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30786" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30775" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30748" }, { "trust": 0.1, "url": "https://www.apple.com/itunes/" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30779" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25010" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30758" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30774" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30763" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30760" }, { "trust": 0.1, "url": "https://support.apple.com/kb/ht201222" }, { "trust": 0.1, "url": 
"https://nvd.nist.gov/vuln/detail/cve-2021-30770" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30769" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36330" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30785" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3583" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-7598" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.6/release_notes/ocp-4-6-rel" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3570" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhba-2021:2641" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-7598" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:2643" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3570" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3583" }, { "trust": 0.1, "url": "https://access.redhat.com/security/updates/classification/#moderate" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.6/updating/updating-cluster" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:2260" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:2328" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-25014" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:2354" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:2365" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:2364" } ], "sources": [ { "db": "VULHUB", "id": "VHN-391907" }, { "db": "VULMON", "id": "CVE-2020-36328" }, { "db": "JVNDB", "id": "JVNDB-2018-016582" }, { "db": "PACKETSTORM", "id": "163645" }, { "db": "PACKETSTORM", "id": "163504" }, { "db": "PACKETSTORM", "id": "162998" }, { "db": "PACKETSTORM", "id": "163028" }, { "db": "PACKETSTORM", 
"id": "163029" }, { "db": "PACKETSTORM", "id": "163058" }, { "db": "PACKETSTORM", "id": "163061" }, { "db": "CNNVD", "id": "CNNVD-202105-1380" }, { "db": "NVD", "id": "CVE-2020-36328" } ] }, "sources": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "VULHUB", "id": "VHN-391907" }, { "db": "VULMON", "id": "CVE-2020-36328" }, { "db": "JVNDB", "id": "JVNDB-2018-016582" }, { "db": "PACKETSTORM", "id": "163645" }, { "db": "PACKETSTORM", "id": "163504" }, { "db": "PACKETSTORM", "id": "162998" }, { "db": "PACKETSTORM", "id": "163028" }, { "db": "PACKETSTORM", "id": "163029" }, { "db": "PACKETSTORM", "id": "163058" }, { "db": "PACKETSTORM", "id": "163061" }, { "db": "CNNVD", "id": "CNNVD-202105-1380" }, { "db": "NVD", "id": "CVE-2020-36328" } ] }, "sources_release_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2021-05-21T00:00:00", "db": "VULHUB", "id": "VHN-391907" }, { "date": "2021-05-21T00:00:00", "db": "VULMON", "id": "CVE-2020-36328" }, { "date": "2022-01-27T00:00:00", "db": "JVNDB", "id": "JVNDB-2018-016582" }, { "date": "2021-07-23T15:29:39", "db": "PACKETSTORM", "id": "163645" }, { "date": "2021-07-14T15:29:37", "db": "PACKETSTORM", "id": "163504" }, { "date": "2021-06-07T13:58:06", "db": "PACKETSTORM", "id": "162998" }, { "date": "2021-06-09T13:21:49", "db": "PACKETSTORM", "id": "163028" }, { "date": "2021-06-09T13:22:14", "db": "PACKETSTORM", "id": "163029" }, { "date": "2021-06-10T13:39:19", "db": "PACKETSTORM", "id": "163058" }, { "date": "2021-06-10T13:42:06", "db": "PACKETSTORM", "id": "163061" }, { "date": "2021-05-21T00:00:00", "db": "CNNVD", "id": "CNNVD-202105-1380" }, { "date": "2021-05-21T17:15:08.270000", "db": "NVD", "id": "CVE-2020-36328" } ] }, "sources_update_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } 
}, "data": [ { "date": "2023-01-09T00:00:00", "db": "VULHUB", "id": "VHN-391907" }, { "date": "2021-07-23T00:00:00", "db": "VULMON", "id": "CVE-2020-36328" }, { "date": "2022-01-27T09:07:00", "db": "JVNDB", "id": "JVNDB-2018-016582" }, { "date": "2021-11-15T00:00:00", "db": "CNNVD", "id": "CNNVD-202105-1380" }, { "date": "2023-01-09T16:41:59.350000", "db": "NVD", "id": "CVE-2020-36328" } ] }, "threat_type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/threat_type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "remote", "sources": [ { "db": "CNNVD", "id": "CNNVD-202105-1380" } ], "trust": 0.6 }, "title": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/title#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "libwebp\u00a0 Out-of-bounds Vulnerability in Microsoft", "sources": [ { "db": "JVNDB", "id": "JVNDB-2018-016582" } ], "trust": 0.8 }, "type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "buffer error", "sources": [ { "db": "CNNVD", "id": "CNNVD-202105-1380" } ], "trust": 0.6 } }
var-201903-0388
Vulnerability from variot
An integer overflow flaw, which could lead to an out-of-bounds write, was discovered in libssh2 before 1.8.1 in the way packets are read from the server. A remote attacker who compromises an SSH server may be able to execute code on the client system when a user connects to the server. libssh2 contains an integer overflow vulnerability; information may be obtained and service operation may be interrupted (DoS). libssh2 provides remote command execution and file transfer, and at the same time provides a secure transport channel for remote programs. The vulnerability exists because the '_libssh2_transport_read()' function does not properly check the packet_length value received from the server. -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
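The overflow pattern described above can be sketched as follows. This is a hypothetical illustration, not libssh2's actual code: in C, size arithmetic on an attacker-controlled 32-bit packet_length is computed modulo 2^32, so a huge value can wrap to a small one, producing an undersized allocation followed by an out-of-bounds write. The fix is to range-check the raw value before any arithmetic. The names `unsafe_alloc_size` and `read_packet` and the overhead constant are invented for this sketch.

```python
MAX_SSH_PACKET = 35000  # RFC 4253: implementations must handle packets up to at least 35000 bytes
U32 = 2 ** 32

def unsafe_alloc_size(packet_length: int, overhead: int = 5) -> int:
    """Mimic unchecked 32-bit C arithmetic: the total wraps modulo 2**32."""
    return (packet_length + overhead) % U32

def read_packet(packet_length: int, padding_length: int) -> bool:
    """Return True if the packet passes validation, False if rejected."""
    # Reject out-of-range lengths *before* any size arithmetic, so that
    # packet_length - padding_length - 1 cannot underflow and
    # packet_length + overhead cannot wrap around.
    if packet_length < padding_length + 2 or packet_length > MAX_SSH_PACKET:
        return False
    return True

# 0xFFFFFFFF + 5 wraps to 4 in 32-bit arithmetic: unchecked code would
# allocate 4 bytes and then try to copy ~4 GiB into them.
print(unsafe_alloc_size(0xFFFFFFFF))  # 4
print(read_packet(0xFFFFFFFF, 16))    # False: rejected by the range check
print(read_packet(64, 16))            # True
```

The essential point is that the validation happens on the raw wire value, in a type wide enough that the comparison itself cannot overflow.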
APPLE-SA-2019-9-26-7 Xcode 11.0
Xcode 11.0 addresses the following:
IDE SCM Available for: macOS Mojave 10.14.4 and later Impact: Multiple issues in libssh2 Description: Multiple issues were addressed by updating to version 2.16. CVE-2019-3855: Chris Coulson
ld64 Available for: macOS Mojave 10.14.4 and later Impact: Compiling code without proper input validation could lead to arbitrary code execution with user privilege Description: Multiple issues in ld64 in the Xcode toolchains were addressed by updating to version ld64-507.4. CVE-2019-8721: Pan ZhenPeng of Qihoo 360 Nirvan Team CVE-2019-8722: Pan ZhenPeng of Qihoo 360 Nirvan Team CVE-2019-8723: Pan ZhenPeng of Qihoo 360 Nirvan Team CVE-2019-8724: Pan ZhenPeng of Qihoo 360 Nirvan Team
otool Available for: macOS Mojave 10.14.4 and later Impact: Processing a maliciously crafted file may lead to arbitrary code execution Description: A memory corruption issue was addressed with improved state management. CVE-2019-8738: Pan ZhenPeng (@Peterpan0927) of Qihoo 360 Nirvan Team CVE-2019-8739: Pan ZhenPeng (@Peterpan0927) of Qihoo 360 Nirvan Team
Installation note:
Xcode 11.0 may be obtained from:
https://developer.apple.com/xcode/downloads/
To check that the Xcode has been updated:
- Select Xcode in the menu bar
- Select About Xcode
-
The version after applying this update will be "11.0".
-
Description:
The libssh2 packages provide a library that implements the SSH2 protocol. -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
===================================================================== Red Hat Security Advisory
Synopsis: Important: virt:rhel security update Advisory ID: RHSA-2019:1175-01 Product: Red Hat Enterprise Linux Advisory URL: https://access.redhat.com/errata/RHSA-2019:1175 Issue date: 2019-05-14 CVE Names: CVE-2018-12126 CVE-2018-12127 CVE-2018-12130 CVE-2018-20815 CVE-2019-3855 CVE-2019-3856 CVE-2019-3857 CVE-2019-3863 CVE-2019-11091 =====================================================================
- Summary:
An update for the virt:rhel module is now available for Red Hat Enterprise Linux 8.
Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Relevant releases/architectures:
Red Hat Enterprise Linux AppStream (v. 8) - aarch64, noarch, ppc64le, s390x, x86_64
- Description:
Kernel-based Virtual Machine (KVM) offers a full virtualization solution for Linux on numerous hardware platforms. The virt:rhel module contains packages which provide user-space components used to run virtual machines using KVM. The packages also provide APIs for managing and interacting with the virtualized systems.
Security Fix(es):
-
A flaw was found in the implementation of the "fill buffer", a mechanism used by modern CPUs when a cache miss occurs in the L1 CPU cache. If an attacker can generate a load operation that would create a page fault, execution will continue speculatively with incorrect data from the fill buffer while the data is fetched from higher-level caches. This response time can be measured to infer data in the fill buffer. (CVE-2018-12130)
-
Modern Intel microprocessors implement hardware-level micro-optimizations to improve the performance of writing data back to CPU caches. The write operation is split into STA (STore Address) and STD (STore Data) sub-operations. These sub-operations allow the processor to hand-off address generation logic into these sub-operations for optimized writes. Both of these sub-operations write to a shared distributed processor structure called the 'processor store buffer'. As a result, an unprivileged attacker could use this flaw to read private data resident within the CPU's processor store buffer. (CVE-2018-12126)
-
Microprocessors use a ‘load port’ subcomponent to perform load operations from memory or IO. During a load operation, the load port receives data from the memory or IO subsystem and then provides the data to the CPU registers and operations in the CPU’s pipelines. Stale load operation results are stored in the ‘load port’ table until overwritten by newer operations. Certain load-port operations triggered by an attacker can be used to reveal data about previous stale requests, leaking that data back to the attacker via a timing side channel. (CVE-2018-12127)
-
Uncacheable memory on some microprocessors utilizing speculative execution may allow an authenticated user to potentially enable information disclosure via a side channel with local access. (CVE-2019-11091)
-
Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/11258
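As a concrete illustration (a generic sketch, not taken from the advisory itself), on Red Hat Enterprise Linux 8 an erratum like this one is typically applied with dnf; the advisory ID below is the one from this bulletin:

```shell
# Apply only the packages shipped in this specific erratum
dnf upgrade --advisory=RHSA-2019:1175

# Alternatively, update the whole virt:rhel module stream
dnf module update virt:rhel
```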
- Bugs fixed (https://bugzilla.redhat.com/):
1646781 - CVE-2018-12126 hardware: Microarchitectural Store Buffer Data Sampling (MSBDS) 1646784 - CVE-2018-12130 hardware: Microarchitectural Fill Buffer Data Sampling (MFBDS) 1667782 - CVE-2018-12127 hardware: Micro-architectural Load Port Data Sampling - Information Leak (MLPDS) 1687303 - CVE-2019-3855 libssh2: Integer overflow in transport read resulting in out of bounds write 1687304 - CVE-2019-3856 libssh2: Integer overflow in keyboard interactive handling resulting in out of bounds write 1687305 - CVE-2019-3857 libssh2: Integer overflow in SSH packet processing channel resulting in out of bounds write 1687313 - CVE-2019-3863 libssh2: Integer overflow in user authenticate keyboard interactive allows out-of-bounds writes 1693101 - CVE-2018-20815 QEMU: device_tree: heap buffer overflow while loading device tree blob 1705312 - CVE-2019-11091 hardware: Microarchitectural Data Sampling Uncacheable Memory (MDSUM)
- Package List:
Red Hat Enterprise Linux AppStream (v. 8):
Source: SLOF-20171214-5.gitfa98132.module+el8.0.0+3075+09be6b65.src.rpm hivex-1.3.15-6.module+el8.0.0+3075+09be6b65.src.rpm libguestfs-1.38.4-10.module+el8.0.0+3075+09be6b65.src.rpm libguestfs-winsupport-8.0-2.module+el8.0.0+3075+09be6b65.src.rpm libiscsi-1.18.0-6.module+el8.0.0+3075+09be6b65.src.rpm libssh2-1.8.0-7.module+el8.0.0+3075+09be6b65.1.src.rpm libvirt-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.src.rpm libvirt-dbus-1.2.0-2.module+el8.0.0+3075+09be6b65.src.rpm libvirt-python-4.5.0-1.module+el8.0.0+3075+09be6b65.src.rpm nbdkit-1.4.2-4.module+el8.0.0+3075+09be6b65.src.rpm netcf-0.2.8-10.module+el8.0.0+3075+09be6b65.src.rpm perl-Sys-Virt-4.5.0-4.module+el8.0.0+3075+09be6b65.src.rpm qemu-kvm-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.src.rpm seabios-1.11.1-3.module+el8.0.0+3075+09be6b65.src.rpm sgabios-0.20170427git-2.module+el8.0.0+3075+09be6b65.src.rpm supermin-5.1.19-8.module+el8.0.0+3075+09be6b65.src.rpm
aarch64: hivex-1.3.15-6.module+el8.0.0+3075+09be6b65.aarch64.rpm hivex-debuginfo-1.3.15-6.module+el8.0.0+3075+09be6b65.aarch64.rpm hivex-debugsource-1.3.15-6.module+el8.0.0+3075+09be6b65.aarch64.rpm hivex-devel-1.3.15-6.module+el8.0.0+3075+09be6b65.aarch64.rpm libguestfs-1.38.4-10.module+el8.0.0+3075+09be6b65.aarch64.rpm libguestfs-benchmarking-1.38.4-10.module+el8.0.0+3075+09be6b65.aarch64.rpm libguestfs-benchmarking-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.aarch64.rpm libguestfs-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.aarch64.rpm libguestfs-debugsource-1.38.4-10.module+el8.0.0+3075+09be6b65.aarch64.rpm libguestfs-devel-1.38.4-10.module+el8.0.0+3075+09be6b65.aarch64.rpm libguestfs-gfs2-1.38.4-10.module+el8.0.0+3075+09be6b65.aarch64.rpm libguestfs-gobject-1.38.4-10.module+el8.0.0+3075+09be6b65.aarch64.rpm libguestfs-gobject-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.aarch64.rpm libguestfs-gobject-devel-1.38.4-10.module+el8.0.0+3075+09be6b65.aarch64.rpm libguestfs-java-1.38.4-10.module+el8.0.0+3075+09be6b65.aarch64.rpm libguestfs-java-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.aarch64.rpm libguestfs-java-devel-1.38.4-10.module+el8.0.0+3075+09be6b65.aarch64.rpm libguestfs-rescue-1.38.4-10.module+el8.0.0+3075+09be6b65.aarch64.rpm libguestfs-rsync-1.38.4-10.module+el8.0.0+3075+09be6b65.aarch64.rpm libguestfs-tools-c-1.38.4-10.module+el8.0.0+3075+09be6b65.aarch64.rpm libguestfs-tools-c-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.aarch64.rpm libguestfs-winsupport-8.0-2.module+el8.0.0+3075+09be6b65.aarch64.rpm libguestfs-xfs-1.38.4-10.module+el8.0.0+3075+09be6b65.aarch64.rpm libiscsi-1.18.0-6.module+el8.0.0+3075+09be6b65.aarch64.rpm libiscsi-debuginfo-1.18.0-6.module+el8.0.0+3075+09be6b65.aarch64.rpm libiscsi-debugsource-1.18.0-6.module+el8.0.0+3075+09be6b65.aarch64.rpm libiscsi-devel-1.18.0-6.module+el8.0.0+3075+09be6b65.aarch64.rpm libiscsi-utils-1.18.0-6.module+el8.0.0+3075+09be6b65.aarch64.rpm 
libiscsi-utils-debuginfo-1.18.0-6.module+el8.0.0+3075+09be6b65.aarch64.rpm libssh2-1.8.0-7.module+el8.0.0+3075+09be6b65.1.aarch64.rpm libssh2-debuginfo-1.8.0-7.module+el8.0.0+3075+09be6b65.1.aarch64.rpm libssh2-debugsource-1.8.0-7.module+el8.0.0+3075+09be6b65.1.aarch64.rpm libvirt-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm libvirt-admin-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm libvirt-admin-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm libvirt-bash-completion-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm libvirt-client-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm libvirt-client-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm libvirt-daemon-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm libvirt-daemon-config-network-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm libvirt-daemon-config-nwfilter-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm libvirt-daemon-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm libvirt-daemon-driver-interface-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm libvirt-daemon-driver-interface-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm libvirt-daemon-driver-network-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm libvirt-daemon-driver-network-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm libvirt-daemon-driver-nodedev-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm libvirt-daemon-driver-nodedev-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm libvirt-daemon-driver-nwfilter-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm libvirt-daemon-driver-nwfilter-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm libvirt-daemon-driver-qemu-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm libvirt-daemon-driver-qemu-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm libvirt-daemon-driver-secret-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm 
libvirt-daemon-driver-secret-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm libvirt-daemon-driver-storage-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm libvirt-daemon-driver-storage-core-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm libvirt-daemon-driver-storage-core-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm libvirt-daemon-driver-storage-disk-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm libvirt-daemon-driver-storage-disk-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm libvirt-daemon-driver-storage-gluster-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm libvirt-daemon-driver-storage-gluster-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm libvirt-daemon-driver-storage-iscsi-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm libvirt-daemon-driver-storage-iscsi-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm libvirt-daemon-driver-storage-logical-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm libvirt-daemon-driver-storage-logical-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm libvirt-daemon-driver-storage-mpath-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm libvirt-daemon-driver-storage-mpath-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm libvirt-daemon-driver-storage-rbd-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm libvirt-daemon-driver-storage-rbd-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm libvirt-daemon-driver-storage-scsi-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm libvirt-daemon-driver-storage-scsi-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm libvirt-daemon-kvm-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm libvirt-dbus-1.2.0-2.module+el8.0.0+3075+09be6b65.aarch64.rpm libvirt-dbus-debuginfo-1.2.0-2.module+el8.0.0+3075+09be6b65.aarch64.rpm libvirt-dbus-debugsource-1.2.0-2.module+el8.0.0+3075+09be6b65.aarch64.rpm 
libvirt-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm libvirt-debugsource-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm libvirt-devel-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm libvirt-docs-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm libvirt-libs-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm libvirt-libs-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm libvirt-lock-sanlock-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm libvirt-lock-sanlock-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm libvirt-nss-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm libvirt-nss-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm lua-guestfs-1.38.4-10.module+el8.0.0+3075+09be6b65.aarch64.rpm lua-guestfs-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.aarch64.rpm nbdkit-1.4.2-4.module+el8.0.0+3075+09be6b65.aarch64.rpm nbdkit-basic-plugins-1.4.2-4.module+el8.0.0+3075+09be6b65.aarch64.rpm nbdkit-basic-plugins-debuginfo-1.4.2-4.module+el8.0.0+3075+09be6b65.aarch64.rpm nbdkit-debuginfo-1.4.2-4.module+el8.0.0+3075+09be6b65.aarch64.rpm nbdkit-debugsource-1.4.2-4.module+el8.0.0+3075+09be6b65.aarch64.rpm nbdkit-devel-1.4.2-4.module+el8.0.0+3075+09be6b65.aarch64.rpm nbdkit-example-plugins-1.4.2-4.module+el8.0.0+3075+09be6b65.aarch64.rpm nbdkit-example-plugins-debuginfo-1.4.2-4.module+el8.0.0+3075+09be6b65.aarch64.rpm nbdkit-plugin-gzip-1.4.2-4.module+el8.0.0+3075+09be6b65.aarch64.rpm nbdkit-plugin-gzip-debuginfo-1.4.2-4.module+el8.0.0+3075+09be6b65.aarch64.rpm nbdkit-plugin-python-common-1.4.2-4.module+el8.0.0+3075+09be6b65.aarch64.rpm nbdkit-plugin-python3-1.4.2-4.module+el8.0.0+3075+09be6b65.aarch64.rpm nbdkit-plugin-python3-debuginfo-1.4.2-4.module+el8.0.0+3075+09be6b65.aarch64.rpm nbdkit-plugin-xz-1.4.2-4.module+el8.0.0+3075+09be6b65.aarch64.rpm nbdkit-plugin-xz-debuginfo-1.4.2-4.module+el8.0.0+3075+09be6b65.aarch64.rpm netcf-0.2.8-10.module+el8.0.0+3075+09be6b65.aarch64.rpm 
netcf-debuginfo-0.2.8-10.module+el8.0.0+3075+09be6b65.aarch64.rpm netcf-debugsource-0.2.8-10.module+el8.0.0+3075+09be6b65.aarch64.rpm netcf-devel-0.2.8-10.module+el8.0.0+3075+09be6b65.aarch64.rpm netcf-libs-0.2.8-10.module+el8.0.0+3075+09be6b65.aarch64.rpm netcf-libs-debuginfo-0.2.8-10.module+el8.0.0+3075+09be6b65.aarch64.rpm perl-Sys-Guestfs-1.38.4-10.module+el8.0.0+3075+09be6b65.aarch64.rpm perl-Sys-Guestfs-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.aarch64.rpm perl-Sys-Virt-4.5.0-4.module+el8.0.0+3075+09be6b65.aarch64.rpm perl-Sys-Virt-debuginfo-4.5.0-4.module+el8.0.0+3075+09be6b65.aarch64.rpm perl-Sys-Virt-debugsource-4.5.0-4.module+el8.0.0+3075+09be6b65.aarch64.rpm perl-hivex-1.3.15-6.module+el8.0.0+3075+09be6b65.aarch64.rpm perl-hivex-debuginfo-1.3.15-6.module+el8.0.0+3075+09be6b65.aarch64.rpm python3-hivex-1.3.15-6.module+el8.0.0+3075+09be6b65.aarch64.rpm python3-hivex-debuginfo-1.3.15-6.module+el8.0.0+3075+09be6b65.aarch64.rpm python3-libguestfs-1.38.4-10.module+el8.0.0+3075+09be6b65.aarch64.rpm python3-libguestfs-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.aarch64.rpm python3-libvirt-4.5.0-1.module+el8.0.0+3075+09be6b65.aarch64.rpm python3-libvirt-debuginfo-4.5.0-1.module+el8.0.0+3075+09be6b65.aarch64.rpm qemu-guest-agent-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.aarch64.rpm qemu-guest-agent-debuginfo-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.aarch64.rpm qemu-img-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.aarch64.rpm qemu-img-debuginfo-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.aarch64.rpm qemu-kvm-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.aarch64.rpm qemu-kvm-block-curl-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.aarch64.rpm qemu-kvm-block-curl-debuginfo-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.aarch64.rpm qemu-kvm-block-iscsi-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.aarch64.rpm qemu-kvm-block-iscsi-debuginfo-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.aarch64.rpm qemu-kvm-block-rbd-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.aarch64.rpm 
qemu-kvm-block-rbd-debuginfo-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.aarch64.rpm qemu-kvm-block-ssh-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.aarch64.rpm qemu-kvm-block-ssh-debuginfo-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.aarch64.rpm qemu-kvm-common-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.aarch64.rpm qemu-kvm-common-debuginfo-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.aarch64.rpm qemu-kvm-core-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.aarch64.rpm qemu-kvm-core-debuginfo-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.aarch64.rpm qemu-kvm-debuginfo-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.aarch64.rpm qemu-kvm-debugsource-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.aarch64.rpm ruby-hivex-1.3.15-6.module+el8.0.0+3075+09be6b65.aarch64.rpm ruby-hivex-debuginfo-1.3.15-6.module+el8.0.0+3075+09be6b65.aarch64.rpm ruby-libguestfs-1.38.4-10.module+el8.0.0+3075+09be6b65.aarch64.rpm ruby-libguestfs-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.aarch64.rpm supermin-5.1.19-8.module+el8.0.0+3075+09be6b65.aarch64.rpm supermin-debuginfo-5.1.19-8.module+el8.0.0+3075+09be6b65.aarch64.rpm supermin-debugsource-5.1.19-8.module+el8.0.0+3075+09be6b65.aarch64.rpm supermin-devel-5.1.19-8.module+el8.0.0+3075+09be6b65.aarch64.rpm virt-dib-1.38.4-10.module+el8.0.0+3075+09be6b65.aarch64.rpm virt-dib-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.aarch64.rpm
noarch: SLOF-20171214-5.gitfa98132.module+el8.0.0+3075+09be6b65.noarch.rpm libguestfs-bash-completion-1.38.4-10.module+el8.0.0+3075+09be6b65.noarch.rpm libguestfs-inspect-icons-1.38.4-10.module+el8.0.0+3075+09be6b65.noarch.rpm libguestfs-javadoc-1.38.4-10.module+el8.0.0+3075+09be6b65.noarch.rpm libguestfs-man-pages-ja-1.38.4-10.module+el8.0.0+3075+09be6b65.noarch.rpm libguestfs-man-pages-uk-1.38.4-10.module+el8.0.0+3075+09be6b65.noarch.rpm libguestfs-tools-1.38.4-10.module+el8.0.0+3075+09be6b65.noarch.rpm nbdkit-bash-completion-1.4.2-4.module+el8.0.0+3075+09be6b65.noarch.rpm seabios-bin-1.11.1-3.module+el8.0.0+3075+09be6b65.noarch.rpm seavgabios-bin-1.11.1-3.module+el8.0.0+3075+09be6b65.noarch.rpm sgabios-bin-0.20170427git-2.module+el8.0.0+3075+09be6b65.noarch.rpm
ppc64le: hivex-1.3.15-6.module+el8.0.0+3075+09be6b65.ppc64le.rpm hivex-debuginfo-1.3.15-6.module+el8.0.0+3075+09be6b65.ppc64le.rpm hivex-debugsource-1.3.15-6.module+el8.0.0+3075+09be6b65.ppc64le.rpm hivex-devel-1.3.15-6.module+el8.0.0+3075+09be6b65.ppc64le.rpm libguestfs-1.38.4-10.module+el8.0.0+3075+09be6b65.ppc64le.rpm libguestfs-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.ppc64le.rpm libguestfs-debugsource-1.38.4-10.module+el8.0.0+3075+09be6b65.ppc64le.rpm libguestfs-devel-1.38.4-10.module+el8.0.0+3075+09be6b65.ppc64le.rpm libguestfs-gfs2-1.38.4-10.module+el8.0.0+3075+09be6b65.ppc64le.rpm libguestfs-gobject-1.38.4-10.module+el8.0.0+3075+09be6b65.ppc64le.rpm libguestfs-gobject-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.ppc64le.rpm libguestfs-gobject-devel-1.38.4-10.module+el8.0.0+3075+09be6b65.ppc64le.rpm libguestfs-java-1.38.4-10.module+el8.0.0+3075+09be6b65.ppc64le.rpm libguestfs-java-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.ppc64le.rpm libguestfs-java-devel-1.38.4-10.module+el8.0.0+3075+09be6b65.ppc64le.rpm libguestfs-rescue-1.38.4-10.module+el8.0.0+3075+09be6b65.ppc64le.rpm libguestfs-rsync-1.38.4-10.module+el8.0.0+3075+09be6b65.ppc64le.rpm libguestfs-tools-c-1.38.4-10.module+el8.0.0+3075+09be6b65.ppc64le.rpm libguestfs-tools-c-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.ppc64le.rpm libguestfs-winsupport-8.0-2.module+el8.0.0+3075+09be6b65.ppc64le.rpm libguestfs-xfs-1.38.4-10.module+el8.0.0+3075+09be6b65.ppc64le.rpm libiscsi-1.18.0-6.module+el8.0.0+3075+09be6b65.ppc64le.rpm libiscsi-debuginfo-1.18.0-6.module+el8.0.0+3075+09be6b65.ppc64le.rpm libiscsi-debugsource-1.18.0-6.module+el8.0.0+3075+09be6b65.ppc64le.rpm libiscsi-devel-1.18.0-6.module+el8.0.0+3075+09be6b65.ppc64le.rpm libiscsi-utils-1.18.0-6.module+el8.0.0+3075+09be6b65.ppc64le.rpm libiscsi-utils-debuginfo-1.18.0-6.module+el8.0.0+3075+09be6b65.ppc64le.rpm libssh2-1.8.0-7.module+el8.0.0+3075+09be6b65.1.ppc64le.rpm 
libssh2-debuginfo-1.8.0-7.module+el8.0.0+3075+09be6b65.1.ppc64le.rpm libssh2-debugsource-1.8.0-7.module+el8.0.0+3075+09be6b65.1.ppc64le.rpm libvirt-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm libvirt-admin-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm libvirt-admin-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm libvirt-bash-completion-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm libvirt-client-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm libvirt-client-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm libvirt-daemon-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm libvirt-daemon-config-network-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm libvirt-daemon-config-nwfilter-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm libvirt-daemon-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm libvirt-daemon-driver-interface-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm libvirt-daemon-driver-interface-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm libvirt-daemon-driver-network-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm libvirt-daemon-driver-network-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm libvirt-daemon-driver-nodedev-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm libvirt-daemon-driver-nodedev-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm libvirt-daemon-driver-nwfilter-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm libvirt-daemon-driver-nwfilter-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm libvirt-daemon-driver-qemu-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm libvirt-daemon-driver-qemu-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm libvirt-daemon-driver-secret-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm libvirt-daemon-driver-secret-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm libvirt-daemon-driver-storage-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm 
libvirt-daemon-driver-storage-core-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm libvirt-daemon-driver-storage-core-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm libvirt-daemon-driver-storage-disk-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm libvirt-daemon-driver-storage-disk-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm libvirt-daemon-driver-storage-gluster-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm libvirt-daemon-driver-storage-gluster-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm libvirt-daemon-driver-storage-iscsi-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm libvirt-daemon-driver-storage-iscsi-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm libvirt-daemon-driver-storage-logical-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm libvirt-daemon-driver-storage-logical-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm libvirt-daemon-driver-storage-mpath-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm libvirt-daemon-driver-storage-mpath-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm libvirt-daemon-driver-storage-rbd-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm libvirt-daemon-driver-storage-rbd-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm libvirt-daemon-driver-storage-scsi-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm libvirt-daemon-driver-storage-scsi-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm libvirt-daemon-kvm-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm libvirt-dbus-1.2.0-2.module+el8.0.0+3075+09be6b65.ppc64le.rpm libvirt-dbus-debuginfo-1.2.0-2.module+el8.0.0+3075+09be6b65.ppc64le.rpm libvirt-dbus-debugsource-1.2.0-2.module+el8.0.0+3075+09be6b65.ppc64le.rpm libvirt-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm libvirt-debugsource-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm libvirt-devel-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm 
libvirt-docs-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm libvirt-libs-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm libvirt-libs-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm libvirt-lock-sanlock-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm libvirt-lock-sanlock-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm libvirt-nss-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm libvirt-nss-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm lua-guestfs-1.38.4-10.module+el8.0.0+3075+09be6b65.ppc64le.rpm lua-guestfs-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.ppc64le.rpm nbdkit-1.4.2-4.module+el8.0.0+3075+09be6b65.ppc64le.rpm nbdkit-basic-plugins-1.4.2-4.module+el8.0.0+3075+09be6b65.ppc64le.rpm nbdkit-basic-plugins-debuginfo-1.4.2-4.module+el8.0.0+3075+09be6b65.ppc64le.rpm nbdkit-debuginfo-1.4.2-4.module+el8.0.0+3075+09be6b65.ppc64le.rpm nbdkit-debugsource-1.4.2-4.module+el8.0.0+3075+09be6b65.ppc64le.rpm nbdkit-devel-1.4.2-4.module+el8.0.0+3075+09be6b65.ppc64le.rpm nbdkit-example-plugins-1.4.2-4.module+el8.0.0+3075+09be6b65.ppc64le.rpm nbdkit-example-plugins-debuginfo-1.4.2-4.module+el8.0.0+3075+09be6b65.ppc64le.rpm nbdkit-plugin-gzip-1.4.2-4.module+el8.0.0+3075+09be6b65.ppc64le.rpm nbdkit-plugin-gzip-debuginfo-1.4.2-4.module+el8.0.0+3075+09be6b65.ppc64le.rpm nbdkit-plugin-python-common-1.4.2-4.module+el8.0.0+3075+09be6b65.ppc64le.rpm nbdkit-plugin-python3-1.4.2-4.module+el8.0.0+3075+09be6b65.ppc64le.rpm nbdkit-plugin-python3-debuginfo-1.4.2-4.module+el8.0.0+3075+09be6b65.ppc64le.rpm nbdkit-plugin-xz-1.4.2-4.module+el8.0.0+3075+09be6b65.ppc64le.rpm nbdkit-plugin-xz-debuginfo-1.4.2-4.module+el8.0.0+3075+09be6b65.ppc64le.rpm netcf-0.2.8-10.module+el8.0.0+3075+09be6b65.ppc64le.rpm netcf-debuginfo-0.2.8-10.module+el8.0.0+3075+09be6b65.ppc64le.rpm netcf-debugsource-0.2.8-10.module+el8.0.0+3075+09be6b65.ppc64le.rpm netcf-devel-0.2.8-10.module+el8.0.0+3075+09be6b65.ppc64le.rpm 
netcf-libs-0.2.8-10.module+el8.0.0+3075+09be6b65.ppc64le.rpm netcf-libs-debuginfo-0.2.8-10.module+el8.0.0+3075+09be6b65.ppc64le.rpm perl-Sys-Guestfs-1.38.4-10.module+el8.0.0+3075+09be6b65.ppc64le.rpm perl-Sys-Guestfs-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.ppc64le.rpm perl-Sys-Virt-4.5.0-4.module+el8.0.0+3075+09be6b65.ppc64le.rpm perl-Sys-Virt-debuginfo-4.5.0-4.module+el8.0.0+3075+09be6b65.ppc64le.rpm perl-Sys-Virt-debugsource-4.5.0-4.module+el8.0.0+3075+09be6b65.ppc64le.rpm perl-hivex-1.3.15-6.module+el8.0.0+3075+09be6b65.ppc64le.rpm perl-hivex-debuginfo-1.3.15-6.module+el8.0.0+3075+09be6b65.ppc64le.rpm python3-hivex-1.3.15-6.module+el8.0.0+3075+09be6b65.ppc64le.rpm python3-hivex-debuginfo-1.3.15-6.module+el8.0.0+3075+09be6b65.ppc64le.rpm python3-libguestfs-1.38.4-10.module+el8.0.0+3075+09be6b65.ppc64le.rpm python3-libguestfs-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.ppc64le.rpm python3-libvirt-4.5.0-1.module+el8.0.0+3075+09be6b65.ppc64le.rpm python3-libvirt-debuginfo-4.5.0-1.module+el8.0.0+3075+09be6b65.ppc64le.rpm qemu-guest-agent-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.ppc64le.rpm qemu-guest-agent-debuginfo-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.ppc64le.rpm qemu-img-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.ppc64le.rpm qemu-img-debuginfo-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.ppc64le.rpm qemu-kvm-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.ppc64le.rpm qemu-kvm-block-curl-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.ppc64le.rpm qemu-kvm-block-curl-debuginfo-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.ppc64le.rpm qemu-kvm-block-iscsi-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.ppc64le.rpm qemu-kvm-block-iscsi-debuginfo-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.ppc64le.rpm qemu-kvm-block-rbd-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.ppc64le.rpm qemu-kvm-block-rbd-debuginfo-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.ppc64le.rpm qemu-kvm-block-ssh-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.ppc64le.rpm 
qemu-kvm-block-ssh-debuginfo-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.ppc64le.rpm qemu-kvm-common-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.ppc64le.rpm qemu-kvm-common-debuginfo-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.ppc64le.rpm qemu-kvm-core-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.ppc64le.rpm qemu-kvm-core-debuginfo-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.ppc64le.rpm qemu-kvm-debuginfo-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.ppc64le.rpm qemu-kvm-debugsource-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.ppc64le.rpm ruby-hivex-1.3.15-6.module+el8.0.0+3075+09be6b65.ppc64le.rpm ruby-hivex-debuginfo-1.3.15-6.module+el8.0.0+3075+09be6b65.ppc64le.rpm ruby-libguestfs-1.38.4-10.module+el8.0.0+3075+09be6b65.ppc64le.rpm ruby-libguestfs-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.ppc64le.rpm supermin-5.1.19-8.module+el8.0.0+3075+09be6b65.ppc64le.rpm supermin-debuginfo-5.1.19-8.module+el8.0.0+3075+09be6b65.ppc64le.rpm supermin-debugsource-5.1.19-8.module+el8.0.0+3075+09be6b65.ppc64le.rpm supermin-devel-5.1.19-8.module+el8.0.0+3075+09be6b65.ppc64le.rpm virt-dib-1.38.4-10.module+el8.0.0+3075+09be6b65.ppc64le.rpm virt-dib-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.ppc64le.rpm
s390x: hivex-1.3.15-6.module+el8.0.0+3075+09be6b65.s390x.rpm hivex-debuginfo-1.3.15-6.module+el8.0.0+3075+09be6b65.s390x.rpm hivex-debugsource-1.3.15-6.module+el8.0.0+3075+09be6b65.s390x.rpm hivex-devel-1.3.15-6.module+el8.0.0+3075+09be6b65.s390x.rpm libguestfs-1.38.4-10.module+el8.0.0+3075+09be6b65.s390x.rpm libguestfs-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.s390x.rpm libguestfs-debugsource-1.38.4-10.module+el8.0.0+3075+09be6b65.s390x.rpm libguestfs-devel-1.38.4-10.module+el8.0.0+3075+09be6b65.s390x.rpm libguestfs-gfs2-1.38.4-10.module+el8.0.0+3075+09be6b65.s390x.rpm libguestfs-gobject-1.38.4-10.module+el8.0.0+3075+09be6b65.s390x.rpm libguestfs-gobject-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.s390x.rpm libguestfs-gobject-devel-1.38.4-10.module+el8.0.0+3075+09be6b65.s390x.rpm libguestfs-java-1.38.4-10.module+el8.0.0+3075+09be6b65.s390x.rpm libguestfs-java-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.s390x.rpm libguestfs-java-devel-1.38.4-10.module+el8.0.0+3075+09be6b65.s390x.rpm libguestfs-rescue-1.38.4-10.module+el8.0.0+3075+09be6b65.s390x.rpm libguestfs-rsync-1.38.4-10.module+el8.0.0+3075+09be6b65.s390x.rpm libguestfs-tools-c-1.38.4-10.module+el8.0.0+3075+09be6b65.s390x.rpm libguestfs-tools-c-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.s390x.rpm libguestfs-winsupport-8.0-2.module+el8.0.0+3075+09be6b65.s390x.rpm libguestfs-xfs-1.38.4-10.module+el8.0.0+3075+09be6b65.s390x.rpm libiscsi-1.18.0-6.module+el8.0.0+3075+09be6b65.s390x.rpm libiscsi-debuginfo-1.18.0-6.module+el8.0.0+3075+09be6b65.s390x.rpm libiscsi-debugsource-1.18.0-6.module+el8.0.0+3075+09be6b65.s390x.rpm libiscsi-devel-1.18.0-6.module+el8.0.0+3075+09be6b65.s390x.rpm libiscsi-utils-1.18.0-6.module+el8.0.0+3075+09be6b65.s390x.rpm libiscsi-utils-debuginfo-1.18.0-6.module+el8.0.0+3075+09be6b65.s390x.rpm libssh2-1.8.0-7.module+el8.0.0+3075+09be6b65.1.s390x.rpm libssh2-debuginfo-1.8.0-7.module+el8.0.0+3075+09be6b65.1.s390x.rpm 
libssh2-debugsource-1.8.0-7.module+el8.0.0+3075+09be6b65.1.s390x.rpm libvirt-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm libvirt-admin-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm libvirt-admin-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm libvirt-bash-completion-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm libvirt-client-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm libvirt-client-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm libvirt-daemon-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm libvirt-daemon-config-network-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm libvirt-daemon-config-nwfilter-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm libvirt-daemon-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm libvirt-daemon-driver-interface-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm libvirt-daemon-driver-interface-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm libvirt-daemon-driver-network-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm libvirt-daemon-driver-network-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm libvirt-daemon-driver-nodedev-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm libvirt-daemon-driver-nodedev-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm libvirt-daemon-driver-nwfilter-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm libvirt-daemon-driver-nwfilter-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm libvirt-daemon-driver-qemu-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm libvirt-daemon-driver-qemu-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm libvirt-daemon-driver-secret-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm libvirt-daemon-driver-secret-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm libvirt-daemon-driver-storage-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm libvirt-daemon-driver-storage-core-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm 
libvirt-daemon-driver-storage-core-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm libvirt-daemon-driver-storage-disk-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm libvirt-daemon-driver-storage-disk-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm libvirt-daemon-driver-storage-gluster-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm libvirt-daemon-driver-storage-gluster-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm libvirt-daemon-driver-storage-iscsi-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm libvirt-daemon-driver-storage-iscsi-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm libvirt-daemon-driver-storage-logical-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm libvirt-daemon-driver-storage-logical-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm libvirt-daemon-driver-storage-mpath-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm libvirt-daemon-driver-storage-mpath-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm libvirt-daemon-driver-storage-rbd-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm libvirt-daemon-driver-storage-rbd-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm libvirt-daemon-driver-storage-scsi-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm libvirt-daemon-driver-storage-scsi-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm libvirt-daemon-kvm-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm libvirt-dbus-1.2.0-2.module+el8.0.0+3075+09be6b65.s390x.rpm libvirt-dbus-debuginfo-1.2.0-2.module+el8.0.0+3075+09be6b65.s390x.rpm libvirt-dbus-debugsource-1.2.0-2.module+el8.0.0+3075+09be6b65.s390x.rpm libvirt-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm libvirt-debugsource-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm libvirt-devel-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm libvirt-docs-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm libvirt-libs-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm 
libvirt-libs-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm libvirt-lock-sanlock-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm libvirt-lock-sanlock-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm libvirt-nss-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm libvirt-nss-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm lua-guestfs-1.38.4-10.module+el8.0.0+3075+09be6b65.s390x.rpm lua-guestfs-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.s390x.rpm nbdkit-1.4.2-4.module+el8.0.0+3075+09be6b65.s390x.rpm nbdkit-basic-plugins-1.4.2-4.module+el8.0.0+3075+09be6b65.s390x.rpm nbdkit-basic-plugins-debuginfo-1.4.2-4.module+el8.0.0+3075+09be6b65.s390x.rpm nbdkit-debuginfo-1.4.2-4.module+el8.0.0+3075+09be6b65.s390x.rpm nbdkit-debugsource-1.4.2-4.module+el8.0.0+3075+09be6b65.s390x.rpm nbdkit-devel-1.4.2-4.module+el8.0.0+3075+09be6b65.s390x.rpm nbdkit-example-plugins-1.4.2-4.module+el8.0.0+3075+09be6b65.s390x.rpm nbdkit-example-plugins-debuginfo-1.4.2-4.module+el8.0.0+3075+09be6b65.s390x.rpm nbdkit-plugin-gzip-1.4.2-4.module+el8.0.0+3075+09be6b65.s390x.rpm nbdkit-plugin-gzip-debuginfo-1.4.2-4.module+el8.0.0+3075+09be6b65.s390x.rpm nbdkit-plugin-python-common-1.4.2-4.module+el8.0.0+3075+09be6b65.s390x.rpm nbdkit-plugin-python3-1.4.2-4.module+el8.0.0+3075+09be6b65.s390x.rpm nbdkit-plugin-python3-debuginfo-1.4.2-4.module+el8.0.0+3075+09be6b65.s390x.rpm nbdkit-plugin-xz-1.4.2-4.module+el8.0.0+3075+09be6b65.s390x.rpm nbdkit-plugin-xz-debuginfo-1.4.2-4.module+el8.0.0+3075+09be6b65.s390x.rpm netcf-0.2.8-10.module+el8.0.0+3075+09be6b65.s390x.rpm netcf-debuginfo-0.2.8-10.module+el8.0.0+3075+09be6b65.s390x.rpm netcf-debugsource-0.2.8-10.module+el8.0.0+3075+09be6b65.s390x.rpm netcf-devel-0.2.8-10.module+el8.0.0+3075+09be6b65.s390x.rpm netcf-libs-0.2.8-10.module+el8.0.0+3075+09be6b65.s390x.rpm netcf-libs-debuginfo-0.2.8-10.module+el8.0.0+3075+09be6b65.s390x.rpm perl-Sys-Guestfs-1.38.4-10.module+el8.0.0+3075+09be6b65.s390x.rpm 
perl-Sys-Guestfs-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.s390x.rpm perl-Sys-Virt-4.5.0-4.module+el8.0.0+3075+09be6b65.s390x.rpm perl-Sys-Virt-debuginfo-4.5.0-4.module+el8.0.0+3075+09be6b65.s390x.rpm perl-Sys-Virt-debugsource-4.5.0-4.module+el8.0.0+3075+09be6b65.s390x.rpm perl-hivex-1.3.15-6.module+el8.0.0+3075+09be6b65.s390x.rpm perl-hivex-debuginfo-1.3.15-6.module+el8.0.0+3075+09be6b65.s390x.rpm python3-hivex-1.3.15-6.module+el8.0.0+3075+09be6b65.s390x.rpm python3-hivex-debuginfo-1.3.15-6.module+el8.0.0+3075+09be6b65.s390x.rpm python3-libguestfs-1.38.4-10.module+el8.0.0+3075+09be6b65.s390x.rpm python3-libguestfs-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.s390x.rpm python3-libvirt-4.5.0-1.module+el8.0.0+3075+09be6b65.s390x.rpm python3-libvirt-debuginfo-4.5.0-1.module+el8.0.0+3075+09be6b65.s390x.rpm qemu-guest-agent-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.s390x.rpm qemu-guest-agent-debuginfo-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.s390x.rpm qemu-img-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.s390x.rpm qemu-img-debuginfo-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.s390x.rpm qemu-kvm-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.s390x.rpm qemu-kvm-block-curl-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.s390x.rpm qemu-kvm-block-curl-debuginfo-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.s390x.rpm qemu-kvm-block-iscsi-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.s390x.rpm qemu-kvm-block-iscsi-debuginfo-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.s390x.rpm qemu-kvm-block-rbd-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.s390x.rpm qemu-kvm-block-rbd-debuginfo-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.s390x.rpm qemu-kvm-block-ssh-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.s390x.rpm qemu-kvm-block-ssh-debuginfo-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.s390x.rpm qemu-kvm-common-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.s390x.rpm qemu-kvm-common-debuginfo-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.s390x.rpm qemu-kvm-core-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.s390x.rpm 
qemu-kvm-core-debuginfo-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.s390x.rpm qemu-kvm-debuginfo-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.s390x.rpm qemu-kvm-debugsource-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.s390x.rpm ruby-hivex-1.3.15-6.module+el8.0.0+3075+09be6b65.s390x.rpm ruby-hivex-debuginfo-1.3.15-6.module+el8.0.0+3075+09be6b65.s390x.rpm ruby-libguestfs-1.38.4-10.module+el8.0.0+3075+09be6b65.s390x.rpm ruby-libguestfs-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.s390x.rpm supermin-5.1.19-8.module+el8.0.0+3075+09be6b65.s390x.rpm supermin-debuginfo-5.1.19-8.module+el8.0.0+3075+09be6b65.s390x.rpm supermin-debugsource-5.1.19-8.module+el8.0.0+3075+09be6b65.s390x.rpm supermin-devel-5.1.19-8.module+el8.0.0+3075+09be6b65.s390x.rpm virt-dib-1.38.4-10.module+el8.0.0+3075+09be6b65.s390x.rpm virt-dib-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.s390x.rpm
x86_64: hivex-1.3.15-6.module+el8.0.0+3075+09be6b65.x86_64.rpm hivex-debuginfo-1.3.15-6.module+el8.0.0+3075+09be6b65.x86_64.rpm hivex-debugsource-1.3.15-6.module+el8.0.0+3075+09be6b65.x86_64.rpm hivex-devel-1.3.15-6.module+el8.0.0+3075+09be6b65.x86_64.rpm libguestfs-1.38.4-10.module+el8.0.0+3075+09be6b65.x86_64.rpm libguestfs-benchmarking-1.38.4-10.module+el8.0.0+3075+09be6b65.x86_64.rpm libguestfs-benchmarking-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.x86_64.rpm libguestfs-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.x86_64.rpm libguestfs-debugsource-1.38.4-10.module+el8.0.0+3075+09be6b65.x86_64.rpm libguestfs-devel-1.38.4-10.module+el8.0.0+3075+09be6b65.x86_64.rpm libguestfs-gfs2-1.38.4-10.module+el8.0.0+3075+09be6b65.x86_64.rpm libguestfs-gobject-1.38.4-10.module+el8.0.0+3075+09be6b65.x86_64.rpm libguestfs-gobject-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.x86_64.rpm libguestfs-gobject-devel-1.38.4-10.module+el8.0.0+3075+09be6b65.x86_64.rpm libguestfs-java-1.38.4-10.module+el8.0.0+3075+09be6b65.x86_64.rpm libguestfs-java-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.x86_64.rpm libguestfs-java-devel-1.38.4-10.module+el8.0.0+3075+09be6b65.x86_64.rpm libguestfs-rescue-1.38.4-10.module+el8.0.0+3075+09be6b65.x86_64.rpm libguestfs-rsync-1.38.4-10.module+el8.0.0+3075+09be6b65.x86_64.rpm libguestfs-tools-c-1.38.4-10.module+el8.0.0+3075+09be6b65.x86_64.rpm libguestfs-tools-c-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.x86_64.rpm libguestfs-winsupport-8.0-2.module+el8.0.0+3075+09be6b65.x86_64.rpm libguestfs-xfs-1.38.4-10.module+el8.0.0+3075+09be6b65.x86_64.rpm libiscsi-1.18.0-6.module+el8.0.0+3075+09be6b65.x86_64.rpm libiscsi-debuginfo-1.18.0-6.module+el8.0.0+3075+09be6b65.x86_64.rpm libiscsi-debugsource-1.18.0-6.module+el8.0.0+3075+09be6b65.x86_64.rpm libiscsi-devel-1.18.0-6.module+el8.0.0+3075+09be6b65.x86_64.rpm libiscsi-utils-1.18.0-6.module+el8.0.0+3075+09be6b65.x86_64.rpm 
libiscsi-utils-debuginfo-1.18.0-6.module+el8.0.0+3075+09be6b65.x86_64.rpm libssh2-1.8.0-7.module+el8.0.0+3075+09be6b65.1.x86_64.rpm libssh2-debuginfo-1.8.0-7.module+el8.0.0+3075+09be6b65.1.x86_64.rpm libssh2-debugsource-1.8.0-7.module+el8.0.0+3075+09be6b65.1.x86_64.rpm libvirt-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm libvirt-admin-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm libvirt-admin-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm libvirt-bash-completion-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm libvirt-client-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm libvirt-client-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm libvirt-daemon-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm libvirt-daemon-config-network-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm libvirt-daemon-config-nwfilter-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm libvirt-daemon-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm libvirt-daemon-driver-interface-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm libvirt-daemon-driver-interface-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm libvirt-daemon-driver-network-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm libvirt-daemon-driver-network-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm libvirt-daemon-driver-nodedev-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm libvirt-daemon-driver-nodedev-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm libvirt-daemon-driver-nwfilter-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm libvirt-daemon-driver-nwfilter-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm libvirt-daemon-driver-qemu-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm libvirt-daemon-driver-qemu-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm libvirt-daemon-driver-secret-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm 
libvirt-daemon-driver-secret-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm libvirt-daemon-driver-storage-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm libvirt-daemon-driver-storage-core-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm libvirt-daemon-driver-storage-core-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm libvirt-daemon-driver-storage-disk-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm libvirt-daemon-driver-storage-disk-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm libvirt-daemon-driver-storage-gluster-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm libvirt-daemon-driver-storage-gluster-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm libvirt-daemon-driver-storage-iscsi-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm libvirt-daemon-driver-storage-iscsi-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm libvirt-daemon-driver-storage-logical-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm libvirt-daemon-driver-storage-logical-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm libvirt-daemon-driver-storage-mpath-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm libvirt-daemon-driver-storage-mpath-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm libvirt-daemon-driver-storage-rbd-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm libvirt-daemon-driver-storage-rbd-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm libvirt-daemon-driver-storage-scsi-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm libvirt-daemon-driver-storage-scsi-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm libvirt-daemon-kvm-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm libvirt-dbus-1.2.0-2.module+el8.0.0+3075+09be6b65.x86_64.rpm libvirt-dbus-debuginfo-1.2.0-2.module+el8.0.0+3075+09be6b65.x86_64.rpm libvirt-dbus-debugsource-1.2.0-2.module+el8.0.0+3075+09be6b65.x86_64.rpm libvirt-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm 
libvirt-debugsource-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm libvirt-devel-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm libvirt-docs-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm libvirt-libs-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm libvirt-libs-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm libvirt-lock-sanlock-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm libvirt-lock-sanlock-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm libvirt-nss-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm libvirt-nss-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm lua-guestfs-1.38.4-10.module+el8.0.0+3075+09be6b65.x86_64.rpm lua-guestfs-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.x86_64.rpm nbdkit-1.4.2-4.module+el8.0.0+3075+09be6b65.x86_64.rpm nbdkit-basic-plugins-1.4.2-4.module+el8.0.0+3075+09be6b65.x86_64.rpm nbdkit-basic-plugins-debuginfo-1.4.2-4.module+el8.0.0+3075+09be6b65.x86_64.rpm nbdkit-debuginfo-1.4.2-4.module+el8.0.0+3075+09be6b65.x86_64.rpm nbdkit-debugsource-1.4.2-4.module+el8.0.0+3075+09be6b65.x86_64.rpm nbdkit-devel-1.4.2-4.module+el8.0.0+3075+09be6b65.x86_64.rpm nbdkit-example-plugins-1.4.2-4.module+el8.0.0+3075+09be6b65.x86_64.rpm nbdkit-example-plugins-debuginfo-1.4.2-4.module+el8.0.0+3075+09be6b65.x86_64.rpm nbdkit-plugin-gzip-1.4.2-4.module+el8.0.0+3075+09be6b65.x86_64.rpm nbdkit-plugin-gzip-debuginfo-1.4.2-4.module+el8.0.0+3075+09be6b65.x86_64.rpm nbdkit-plugin-python-common-1.4.2-4.module+el8.0.0+3075+09be6b65.x86_64.rpm nbdkit-plugin-python3-1.4.2-4.module+el8.0.0+3075+09be6b65.x86_64.rpm nbdkit-plugin-python3-debuginfo-1.4.2-4.module+el8.0.0+3075+09be6b65.x86_64.rpm nbdkit-plugin-vddk-1.4.2-4.module+el8.0.0+3075+09be6b65.x86_64.rpm nbdkit-plugin-vddk-debuginfo-1.4.2-4.module+el8.0.0+3075+09be6b65.x86_64.rpm nbdkit-plugin-xz-1.4.2-4.module+el8.0.0+3075+09be6b65.x86_64.rpm nbdkit-plugin-xz-debuginfo-1.4.2-4.module+el8.0.0+3075+09be6b65.x86_64.rpm 
netcf-0.2.8-10.module+el8.0.0+3075+09be6b65.x86_64.rpm netcf-debuginfo-0.2.8-10.module+el8.0.0+3075+09be6b65.x86_64.rpm netcf-debugsource-0.2.8-10.module+el8.0.0+3075+09be6b65.x86_64.rpm netcf-devel-0.2.8-10.module+el8.0.0+3075+09be6b65.x86_64.rpm netcf-libs-0.2.8-10.module+el8.0.0+3075+09be6b65.x86_64.rpm netcf-libs-debuginfo-0.2.8-10.module+el8.0.0+3075+09be6b65.x86_64.rpm perl-Sys-Guestfs-1.38.4-10.module+el8.0.0+3075+09be6b65.x86_64.rpm perl-Sys-Guestfs-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.x86_64.rpm perl-Sys-Virt-4.5.0-4.module+el8.0.0+3075+09be6b65.x86_64.rpm perl-Sys-Virt-debuginfo-4.5.0-4.module+el8.0.0+3075+09be6b65.x86_64.rpm perl-Sys-Virt-debugsource-4.5.0-4.module+el8.0.0+3075+09be6b65.x86_64.rpm perl-hivex-1.3.15-6.module+el8.0.0+3075+09be6b65.x86_64.rpm perl-hivex-debuginfo-1.3.15-6.module+el8.0.0+3075+09be6b65.x86_64.rpm python3-hivex-1.3.15-6.module+el8.0.0+3075+09be6b65.x86_64.rpm python3-hivex-debuginfo-1.3.15-6.module+el8.0.0+3075+09be6b65.x86_64.rpm python3-libguestfs-1.38.4-10.module+el8.0.0+3075+09be6b65.x86_64.rpm python3-libguestfs-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.x86_64.rpm python3-libvirt-4.5.0-1.module+el8.0.0+3075+09be6b65.x86_64.rpm python3-libvirt-debuginfo-4.5.0-1.module+el8.0.0+3075+09be6b65.x86_64.rpm qemu-guest-agent-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.x86_64.rpm qemu-guest-agent-debuginfo-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.x86_64.rpm qemu-img-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.x86_64.rpm qemu-img-debuginfo-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.x86_64.rpm qemu-kvm-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.x86_64.rpm qemu-kvm-block-curl-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.x86_64.rpm qemu-kvm-block-curl-debuginfo-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.x86_64.rpm qemu-kvm-block-gluster-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.x86_64.rpm qemu-kvm-block-gluster-debuginfo-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.x86_64.rpm 
qemu-kvm-block-iscsi-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.x86_64.rpm qemu-kvm-block-iscsi-debuginfo-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.x86_64.rpm qemu-kvm-block-rbd-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.x86_64.rpm qemu-kvm-block-rbd-debuginfo-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.x86_64.rpm qemu-kvm-block-ssh-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.x86_64.rpm qemu-kvm-block-ssh-debuginfo-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.x86_64.rpm qemu-kvm-common-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.x86_64.rpm qemu-kvm-common-debuginfo-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.x86_64.rpm qemu-kvm-core-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.x86_64.rpm qemu-kvm-core-debuginfo-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.x86_64.rpm qemu-kvm-debuginfo-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.x86_64.rpm qemu-kvm-debugsource-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.x86_64.rpm ruby-hivex-1.3.15-6.module+el8.0.0+3075+09be6b65.x86_64.rpm ruby-hivex-debuginfo-1.3.15-6.module+el8.0.0+3075+09be6b65.x86_64.rpm ruby-libguestfs-1.38.4-10.module+el8.0.0+3075+09be6b65.x86_64.rpm ruby-libguestfs-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.x86_64.rpm seabios-1.11.1-3.module+el8.0.0+3075+09be6b65.x86_64.rpm sgabios-0.20170427git-2.module+el8.0.0+3075+09be6b65.x86_64.rpm supermin-5.1.19-8.module+el8.0.0+3075+09be6b65.x86_64.rpm supermin-debuginfo-5.1.19-8.module+el8.0.0+3075+09be6b65.x86_64.rpm supermin-debugsource-5.1.19-8.module+el8.0.0+3075+09be6b65.x86_64.rpm supermin-devel-5.1.19-8.module+el8.0.0+3075+09be6b65.x86_64.rpm virt-dib-1.38.4-10.module+el8.0.0+3075+09be6b65.x86_64.rpm virt-dib-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.x86_64.rpm virt-p2v-maker-1.38.4-10.module+el8.0.0+3075+09be6b65.x86_64.rpm virt-v2v-1.38.4-10.module+el8.0.0+3075+09be6b65.x86_64.rpm virt-v2v-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.x86_64.rpm
These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
- References:
https://access.redhat.com/security/cve/CVE-2018-12126 https://access.redhat.com/security/cve/CVE-2018-12127 https://access.redhat.com/security/cve/CVE-2018-12130 https://access.redhat.com/security/cve/CVE-2018-20815 https://access.redhat.com/security/cve/CVE-2019-3855 https://access.redhat.com/security/cve/CVE-2019-3856 https://access.redhat.com/security/cve/CVE-2019-3857 https://access.redhat.com/security/cve/CVE-2019-3863 https://access.redhat.com/security/cve/CVE-2019-11091 https://access.redhat.com/security/updates/classification/#important
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2019 Red Hat, Inc. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1
iQIVAwUBXNsFdNzjgjWX9erEAQjf/g/+IPQ7NKuK24reC2hW29G51Nno6oF2bwsO yNTBaVjP5U1cRHhDrvv3V+Pao8Pj4sB3BRJHYgO8KHMj1uJmP72AdAzaPPkJxoDh 42FOaNLkfQkguzreRN+ty+jHaVUumvuqf9HViVrJyvR+cfvV2tF8poGmKoWrEK5s rSOkvp3haP0HzwVN9wSnrlFGU/DrsLyg80+BuJb878ecSPRHiy/6ZuLd/nkO8fnO VKvDlTKEHAOwZWPmBTduGwOPe4J3fB+9chgK6ZcZpnh+lPSonkIfTXA1svbD8Un/ FsC3wxDdHA9wRkwZZquRgaAeDWwYtKe7nMWSiR6USTWAkh8gruf53eW6//A6999Q oI4wHzKQjJbYH9Pvc3AlQj+5nemvnfyBF/V0UijTHbRBxtJvnIsdro2bpgYsF3Mu JD6kMP7l5D51eQ3tNxDdeB49YNctPF0HuGbw7x0CojBhlQW7k10Ul3/LtqEu2Av4 TqAJP3ENBC1C7VT1zGUSfc8neNNQxJzV9Co08w61bNtd4fo29uv0fOvDy+1J+7CT fOzF2slJTOJ/cqwcaR8j/SjKSFUIrHBKEPYWfVybmKLJhfQCmUzWE7sHZJ+9jKkb LDT+GUF9+TE7CNkD95vBlgs8kG3R76ZG5NSxjI1GDOLNNuhqH3/RZh3KNE17ut/r M5otU3RxBZs= =634V -----END PGP SIGNATURE-----
-- RHSA-announce mailing list RHSA-announce@redhat.com https://www.redhat.com/mailman/listinfo/rhsa-announce
- -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512
Debian Security Advisory DSA-4431-1 security@debian.org https://www.debian.org/security/ Salvatore Bonaccorso April 13, 2019 https://www.debian.org/security/faq
Package : libssh2 CVE ID : CVE-2019-3855 CVE-2019-3856 CVE-2019-3857 CVE-2019-3858 CVE-2019-3859 CVE-2019-3860 CVE-2019-3861 CVE-2019-3862 CVE-2019-3863 Debian Bug : 924965
Chris Coulson discovered several vulnerabilities in libssh2, an SSH2 client-side library, which could result in denial of service, information leaks or the execution of arbitrary code.
For the stable distribution (stretch), these problems have been fixed in version 1.7.0-1+deb9u1.
We recommend that you upgrade your libssh2 packages.
Show details on source website{ "@context": { "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#", "affected_products": { "@id": "https://www.variotdbs.pl/ref/affected_products" }, "configurations": { "@id": "https://www.variotdbs.pl/ref/configurations" }, "credits": { "@id": "https://www.variotdbs.pl/ref/credits" }, "cvss": { "@id": "https://www.variotdbs.pl/ref/cvss/" }, "description": { "@id": "https://www.variotdbs.pl/ref/description/" }, "exploit_availability": { "@id": "https://www.variotdbs.pl/ref/exploit_availability/" }, "external_ids": { "@id": "https://www.variotdbs.pl/ref/external_ids/" }, "iot": { "@id": "https://www.variotdbs.pl/ref/iot/" }, "iot_taxonomy": { "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/" }, "patch": { "@id": "https://www.variotdbs.pl/ref/patch/" }, "problemtype_data": { "@id": "https://www.variotdbs.pl/ref/problemtype_data/" }, "references": { "@id": "https://www.variotdbs.pl/ref/references/" }, "sources": { "@id": "https://www.variotdbs.pl/ref/sources/" }, "sources_release_date": { "@id": "https://www.variotdbs.pl/ref/sources_release_date/" }, "sources_update_date": { "@id": "https://www.variotdbs.pl/ref/sources_update_date/" }, "threat_type": { "@id": "https://www.variotdbs.pl/ref/threat_type/" }, "title": { "@id": "https://www.variotdbs.pl/ref/title/" }, "type": { "@id": "https://www.variotdbs.pl/ref/type/" } }, "@id": "https://www.variotdbs.pl/vuln/VAR-201903-0388", "affected_products": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/affected_products#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "model": "libssh2", "scope": "lt", "trust": 1.8, "vendor": "libssh2", "version": "1.8.1" }, { "model": "fedora", "scope": "eq", "trust": 1.0, "vendor": "fedoraproject", "version": "29" }, { "model": "peoplesoft enterprise peopletools", "scope": "eq", "trust": 1.0, 
"vendor": "oracle", "version": "8.57" }, { "model": "ontap select deploy administration utility", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "enterprise linux server aus", "scope": "eq", "trust": 1.0, "vendor": "redhat", "version": "7.6" }, { "model": "peoplesoft enterprise peopletools", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "8.56" }, { "model": "enterprise linux desktop", "scope": "eq", "trust": 1.0, "vendor": "redhat", "version": "7.0" }, { "model": "enterprise linux workstation", "scope": "eq", "trust": 1.0, "vendor": "redhat", "version": "7.0" }, { "model": "linux", "scope": "eq", "trust": 1.0, "vendor": "debian", "version": "9.0" }, { "model": "enterprise linux server tus", "scope": "eq", "trust": 1.0, "vendor": "redhat", "version": "7.6" }, { "model": "enterprise linux", "scope": "eq", "trust": 1.0, "vendor": "redhat", "version": "8.0" }, { "model": "linux", "scope": "eq", "trust": 1.0, "vendor": "debian", "version": "8.0" }, { "model": "leap", "scope": "eq", "trust": 1.0, "vendor": "opensuse", "version": "42.3" }, { "model": "fedora", "scope": "eq", "trust": 1.0, "vendor": "fedoraproject", "version": "28" }, { "model": "xcode", "scope": "lt", "trust": 1.0, "vendor": "apple", "version": "11.0" }, { "model": "fedora", "scope": "eq", "trust": 1.0, "vendor": "fedoraproject", "version": "30" }, { "model": "enterprise linux server", "scope": "eq", "trust": 1.0, "vendor": "redhat", "version": "7.0" }, { "model": "enterprise linux server eus", "scope": "eq", "trust": 1.0, "vendor": "redhat", "version": "7.6" }, { "model": "gnu/linux", "scope": null, "trust": 0.8, "vendor": "debian", "version": null }, { "model": "fedora", "scope": "eq", "trust": 0.8, "vendor": "fedora", "version": "29" }, { "model": "ontap select deploy administration utility", "scope": null, "trust": 0.8, "vendor": "netapp", "version": null }, { "model": "enterprise linux desktop", "scope": null, "trust": 0.8, "vendor": "red hat", "version": 
null }, { "model": "enterprise linux server", "scope": "eq", "trust": 0.8, "vendor": "red hat", "version": "none" }, { "model": "enterprise linux server", "scope": "eq", "trust": 0.8, "vendor": "red hat", "version": "aus" }, { "model": "enterprise linux server", "scope": "eq", "trust": 0.8, "vendor": "red hat", "version": "eus" }, { "model": "enterprise linux server", "scope": "eq", "trust": 0.8, "vendor": "red hat", "version": "tus" }, { "model": "enterprise linux workstation", "scope": null, "trust": 0.8, "vendor": "red hat", "version": null } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2019-002832" }, { "db": "NVD", "id": "CVE-2019-3855" } ] }, "configurations": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/configurations#", "children": { "@container": "@list" }, "cpe_match": { "@container": "@list" }, "data": { "@container": "@list" }, "nodes": { "@container": "@list" } }, "data": [ { "CVE_data_version": "4.0", "nodes": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:libssh2:libssh2:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "1.8.1", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:28:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:29:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:30:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:debian:debian_linux:8.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:debian:debian_linux:9.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:netapp:ontap_select_deploy_administration_utility:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": 
"cpe:2.3:o:redhat:enterprise_linux_desktop:7.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_workstation:7.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_server:7.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_server_tus:7.6:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_server_eus:7.6:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_server_aus:7.6:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux:8.0:*:*:*:advanced_virtualization:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:opensuse:leap:42.3:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:apple:xcode:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "11.0", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:oracle:peoplesoft_enterprise_peopletools:8.56:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:peoplesoft_enterprise_peopletools:8.57:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" } ] } ], "sources": [ { "db": "NVD", "id": "CVE-2019-3855" } ] }, "credits": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/credits#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Chris Coulson of Canonical Ltd.,Slackware Security Team", "sources": [ { "db": "CNNVD", "id": "CNNVD-201903-634" } ], "trust": 0.6 }, "cve": "CVE-2019-3855", "cvss": { "@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": 
"https://www.variotdbs.pl/ref/cvss/cvssV2#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2" }, "cvssV3": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, "severity": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": "https://www.variotdbs.pl/ref/cvss/severity" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "cvssV2": [ { "acInsufInfo": false, "accessComplexity": "MEDIUM", "accessVector": "NETWORK", "authentication": "NONE", "author": "NVD", "availabilityImpact": "COMPLETE", "baseScore": 9.3, "confidentialityImpact": "COMPLETE", "exploitabilityScore": 8.6, "impactScore": 10.0, "integrityImpact": "COMPLETE", "obtainAllPrivilege": false, "obtainOtherPrivilege": false, "obtainUserPrivilege": false, "severity": "HIGH", "trust": 1.0, "userInteractionRequired": true, "vectorString": "AV:N/AC:M/Au:N/C:C/I:C/A:C", "version": "2.0" }, { "acInsufInfo": null, "accessComplexity": "Medium", "accessVector": "Network", "authentication": "None", "author": "NVD", "availabilityImpact": "Complete", "baseScore": 9.3, "confidentialityImpact": "Complete", "exploitabilityScore": null, "id": "CVE-2019-3855", "impactScore": null, "integrityImpact": "Complete", "obtainAllPrivilege": null, "obtainOtherPrivilege": null, "obtainUserPrivilege": null, "severity": "High", "trust": 0.9, "userInteractionRequired": null, "vectorString": "AV:N/AC:M/Au:N/C:C/I:C/A:C", "version": "2.0" }, { "accessComplexity": "MEDIUM", "accessVector": "NETWORK", "authentication": "NONE", "author": "VULHUB", "availabilityImpact": "COMPLETE", "baseScore": 9.3, "confidentialityImpact": "COMPLETE", "exploitabilityScore": 8.6, "id": "VHN-155290", "impactScore": 10.0, "integrityImpact": "COMPLETE", "severity": "HIGH", "trust": 0.1, "vectorString": 
"AV:N/AC:M/AU:N/C:C/I:C/A:C", "version": "2.0" } ], "cvssV3": [ { "attackComplexity": "LOW", "attackVector": "NETWORK", "author": "NVD", "availabilityImpact": "HIGH", "baseScore": 8.8, "baseSeverity": "HIGH", "confidentialityImpact": "HIGH", "exploitabilityScore": 2.8, "impactScore": 5.9, "integrityImpact": "HIGH", "privilegesRequired": "NONE", "scope": "UNCHANGED", "trust": 1.0, "userInteraction": "REQUIRED", "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H", "version": "3.1" }, { "attackComplexity": "HIGH", "attackVector": "NETWORK", "author": "secalert@redhat.com", "availabilityImpact": "HIGH", "baseScore": 7.5, "baseSeverity": "HIGH", "confidentialityImpact": "HIGH", "exploitabilityScore": 1.6, "impactScore": 5.9, "integrityImpact": "HIGH", "privilegesRequired": "NONE", "scope": "UNCHANGED", "trust": 1.0, "userInteraction": "REQUIRED", "vectorString": "CVSS:3.0/AV:N/AC:H/PR:N/UI:R/S:U/C:H/I:H/A:H", "version": "3.0" }, { "attackComplexity": "Low", "attackVector": "Network", "author": "NVD", "availabilityImpact": "High", "baseScore": 8.8, "baseSeverity": "High", "confidentialityImpact": "High", "exploitabilityScore": null, "id": "CVE-2019-3855", "impactScore": null, "integrityImpact": "High", "privilegesRequired": "None", "scope": "Unchanged", "trust": 0.8, "userInteraction": "Required", "vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H", "version": "3.0" } ], "severity": [ { "author": "NVD", "id": "CVE-2019-3855", "trust": 1.8, "value": "HIGH" }, { "author": "secalert@redhat.com", "id": "CVE-2019-3855", "trust": 1.0, "value": "HIGH" }, { "author": "CNNVD", "id": "CNNVD-201903-634", "trust": 0.6, "value": "HIGH" }, { "author": "VULHUB", "id": "VHN-155290", "trust": 0.1, "value": "HIGH" }, { "author": "VULMON", "id": "CVE-2019-3855", "trust": 0.1, "value": "HIGH" } ] } ], "sources": [ { "db": "VULHUB", "id": "VHN-155290" }, { "db": "VULMON", "id": "CVE-2019-3855" }, { "db": "JVNDB", "id": "JVNDB-2019-002832" }, { "db": "CNNVD", "id": 
"CNNVD-201903-634" }, { "db": "NVD", "id": "CVE-2019-3855" }, { "db": "NVD", "id": "CVE-2019-3855" } ] }, "description": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "An integer overflow flaw which could lead to an out of bounds write was discovered in libssh2 before 1.8.1 in the way packets are read from the server. A remote attacker who compromises a SSH server may be able to execute code on the client system when a user connects to the server. libssh2 Contains an integer overflow vulnerability.Information is obtained and service operation is interrupted (DoS) There is a possibility of being put into a state. It can execute remote commands and file transfers, and at the same time provide a secure transmission channel for remote programs. An integer overflow vulnerability exists in libssh2. The vulnerability is caused by the \u0027_libssh2_transport_read()\u0027 function not properly checking the packet_length value from the server. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\nAPPLE-SA-2019-9-26-7 Xcode 11.0\n\nXcode 11.0 addresses the following:\n\nIDE SCM\nAvailable for: macOS Mojave 10.14.4 and later\nImpact: Multiple issues in libssh2\nDescription: Multiple issues were addressed by updating to version\n2.16. \nCVE-2019-3855: Chris Coulson\n\nld64\nAvailable for: macOS Mojave 10.14.4 and later\nImpact: Compiling code without proper input validation could lead to\narbitrary code execution with user privilege\nDescription: Multiple issues in ld64 in the Xcode toolchains were\naddressed by updating to version ld64-507.4. 
\nCVE-2019-8721: Pan ZhenPeng of Qihoo 360 Nirvan Team\nCVE-2019-8722: Pan ZhenPeng of Qihoo 360 Nirvan Team\nCVE-2019-8723: Pan ZhenPeng of Qihoo 360 Nirvan Team\nCVE-2019-8724: Pan ZhenPeng of Qihoo 360 Nirvan Team\n\notool\nAvailable for: macOS Mojave 10.14.4 and later\nImpact: Processing a maliciously crafted file may lead to arbitrary\ncode execution\nDescription: A memory corruption issue was addressed with improved\nstate management. \nCVE-2019-8738: Pan ZhenPeng (@Peterpan0927) of Qihoo 360 Nirvan Team\nCVE-2019-8739: Pan ZhenPeng (@Peterpan0927) of Qihoo 360 Nirvan Team\n\nInstallation note:\n\nXcode 11.0 may be obtained from:\n\nhttps://developer.apple.com/xcode/downloads/\n\nTo check that the Xcode has been updated:\n\n* Select Xcode in the menu bar\n* Select About Xcode\n* The version after applying this update will be \"11.0\". 6) - i386, x86_64\n\n3. Description:\n\nThe libssh2 packages provide a library that implements the SSH2 protocol. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n=====================================================================\n Red Hat Security Advisory\n\nSynopsis: Important: virt:rhel security update\nAdvisory ID: RHSA-2019:1175-01\nProduct: Red Hat Enterprise Linux\nAdvisory URL: https://access.redhat.com/errata/RHSA-2019:1175\nIssue date: 2019-05-14\nCVE Names: CVE-2018-12126 CVE-2018-12127 CVE-2018-12130 \n CVE-2018-20815 CVE-2019-3855 CVE-2019-3856 \n CVE-2019-3857 CVE-2019-3863 CVE-2019-11091 \n=====================================================================\n\n1. Summary:\n\nAn update for the virt:rhel module is now available for Red Hat Enterprise\nLinux 8. \n\nRed Hat Product Security has rated this update as having a security impact\nof Important. A Common Vulnerability Scoring System (CVSS) base score,\nwhich gives a detailed severity rating, is available for each vulnerability\nfrom the CVE link(s) in the References section. \n\n2. 
Relevant releases/architectures:\n\nRed Hat Enterprise Linux AppStream (v. 8) - aarch64, noarch, ppc64le, s390x, x86_64\n\n3. Description:\n\nKernel-based Virtual Machine (KVM) offers a full virtualization solution\nfor Linux on numerous hardware platforms. The virt:rhel module contains\npackages which provide user-space components used to run virtual machines\nusing KVM. The packages also provide APIs for managing and interacting with\nthe virtualized systems. \n\nSecurity Fix(es):\n\n* A flaw was found in the implementation of the \"fill buffer\", a mechanism\nused by modern CPUs when a cache-miss is made on L1 CPU cache. If an\nattacker can generate a load operation that would create a page fault, the\nexecution will continue speculatively with incorrect data from the fill\nbuffer while the data is fetched from higher level caches. This response\ntime can be measured to infer data in the fill buffer. (CVE-2018-12130)\n\n* Modern Intel microprocessors implement hardware-level micro-optimizations\nto improve the performance of writing data back to CPU caches. The write\noperation is split into STA (STore Address) and STD (STore Data)\nsub-operations. These sub-operations allow the processor to hand-off\naddress generation logic into these sub-operations for optimized writes. \nBoth of these sub-operations write to a shared distributed processor\nstructure called the \u0027processor store buffer\u0027. As a result, an\nunprivileged attacker could use this flaw to read private data resident\nwithin the CPU\u0027s processor store buffer. (CVE-2018-12126)\n\n* Microprocessors use a \u2018load port\u2019 subcomponent to perform load operations\nfrom memory or IO. During a load operation, the load port receives data\nfrom the memory or IO subsystem and then provides the data to the CPU\nregisters and operations in the CPU\u2019s pipelines. Stale load operations\nresults are stored in the \u0027load port\u0027 table until overwritten by newer\noperations. 
Certain load-port operations triggered by an attacker can be\nused to reveal data about previous stale requests leaking data back to the\nattacker via a timing side-channel. (CVE-2018-12127)\n\n* Uncacheable memory on some microprocessors utilizing speculative\nexecution may allow an authenticated user to potentially enable information\ndisclosure via a side channel with local access. \n\n4. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n1646781 - CVE-2018-12126 hardware: Microarchitectural Store Buffer Data Sampling (MSBDS)\n1646784 - CVE-2018-12130 hardware: Microarchitectural Fill Buffer Data Sampling (MFBDS)\n1667782 - CVE-2018-12127 hardware: Micro-architectural Load Port Data Sampling - Information Leak (MLPDS)\n1687303 - CVE-2019-3855 libssh2: Integer overflow in transport read resulting in out of bounds write\n1687304 - CVE-2019-3856 libssh2: Integer overflow in keyboard interactive handling resulting in out of bounds write\n1687305 - CVE-2019-3857 libssh2: Integer overflow in SSH packet processing channel resulting in out of bounds write\n1687313 - CVE-2019-3863 libssh2: Integer overflow in user authenticate keyboard interactive allows out-of-bounds writes\n1693101 - CVE-2018-20815 QEMU: device_tree: heap buffer overflow while loading device tree blob\n1705312 - CVE-2019-11091 hardware: Microarchitectural Data Sampling Uncacheable Memory (MDSUM)\n\n6. Package List:\n\nRed Hat Enterprise Linux AppStream (v. 
8):\n\nSource:\nSLOF-20171214-5.gitfa98132.module+el8.0.0+3075+09be6b65.src.rpm\nhivex-1.3.15-6.module+el8.0.0+3075+09be6b65.src.rpm\nlibguestfs-1.38.4-10.module+el8.0.0+3075+09be6b65.src.rpm\nlibguestfs-winsupport-8.0-2.module+el8.0.0+3075+09be6b65.src.rpm\nlibiscsi-1.18.0-6.module+el8.0.0+3075+09be6b65.src.rpm\nlibssh2-1.8.0-7.module+el8.0.0+3075+09be6b65.1.src.rpm\nlibvirt-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.src.rpm\nlibvirt-dbus-1.2.0-2.module+el8.0.0+3075+09be6b65.src.rpm\nlibvirt-python-4.5.0-1.module+el8.0.0+3075+09be6b65.src.rpm\nnbdkit-1.4.2-4.module+el8.0.0+3075+09be6b65.src.rpm\nnetcf-0.2.8-10.module+el8.0.0+3075+09be6b65.src.rpm\nperl-Sys-Virt-4.5.0-4.module+el8.0.0+3075+09be6b65.src.rpm\nqemu-kvm-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.src.rpm\nseabios-1.11.1-3.module+el8.0.0+3075+09be6b65.src.rpm\nsgabios-0.20170427git-2.module+el8.0.0+3075+09be6b65.src.rpm\nsupermin-5.1.19-8.module+el8.0.0+3075+09be6b65.src.rpm\n\naarch64:\nhivex-1.3.15-6.module+el8.0.0+3075+09be6b65.aarch64.rpm\nhivex-debuginfo-1.3.15-6.module+el8.0.0+3075+09be6b65.aarch64.rpm\nhivex-debugsource-1.3.15-6.module+el8.0.0+3075+09be6b65.aarch64.rpm\nhivex-devel-1.3.15-6.module+el8.0.0+3075+09be6b65.aarch64.rpm\nlibguestfs-1.38.4-10.module+el8.0.0+3075+09be6b65.aarch64.rpm\nlibguestfs-benchmarking-1.38.4-10.module+el8.0.0+3075+09be6b65.aarch64.rpm\nlibguestfs-benchmarking-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.aarch64.rpm\nlibguestfs-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.aarch64.rpm\nlibguestfs-debugsource-1.38.4-10.module+el8.0.0+3075+09be6b65.aarch64.rpm\nlibguestfs-devel-1.38.4-10.module+el8.0.0+3075+09be6b65.aarch64.rpm\nlibguestfs-gfs2-1.38.4-10.module+el8.0.0+3075+09be6b65.aarch64.rpm\nlibguestfs-gobject-1.38.4-10.module+el8.0.0+3075+09be6b65.aarch64.rpm\nlibguestfs-gobject-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.aarch64.rpm\nlibguestfs-gobject-devel-1.38.4-10.module+el8.0.0+3075+09be6b65.aarch64.rpm\nlibguestfs-java-1.38.4-10.module+el8.0.0+3075+0
9be6b65.aarch64.rpm\nlibguestfs-java-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.aarch64.rpm\nlibguestfs-java-devel-1.38.4-10.module+el8.0.0+3075+09be6b65.aarch64.rpm\nlibguestfs-rescue-1.38.4-10.module+el8.0.0+3075+09be6b65.aarch64.rpm\nlibguestfs-rsync-1.38.4-10.module+el8.0.0+3075+09be6b65.aarch64.rpm\nlibguestfs-tools-c-1.38.4-10.module+el8.0.0+3075+09be6b65.aarch64.rpm\nlibguestfs-tools-c-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.aarch64.rpm\nlibguestfs-winsupport-8.0-2.module+el8.0.0+3075+09be6b65.aarch64.rpm\nlibguestfs-xfs-1.38.4-10.module+el8.0.0+3075+09be6b65.aarch64.rpm\nlibiscsi-1.18.0-6.module+el8.0.0+3075+09be6b65.aarch64.rpm\nlibiscsi-debuginfo-1.18.0-6.module+el8.0.0+3075+09be6b65.aarch64.rpm\nlibiscsi-debugsource-1.18.0-6.module+el8.0.0+3075+09be6b65.aarch64.rpm\nlibiscsi-devel-1.18.0-6.module+el8.0.0+3075+09be6b65.aarch64.rpm\nlibiscsi-utils-1.18.0-6.module+el8.0.0+3075+09be6b65.aarch64.rpm\nlibiscsi-utils-debuginfo-1.18.0-6.module+el8.0.0+3075+09be6b65.aarch64.rpm\nlibssh2-1.8.0-7.module+el8.0.0+3075+09be6b65.1.aarch64.rpm\nlibssh2-debuginfo-1.8.0-7.module+el8.0.0+3075+09be6b65.1.aarch64.rpm\nlibssh2-debugsource-1.8.0-7.module+el8.0.0+3075+09be6b65.1.aarch64.rpm\nlibvirt-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm\nlibvirt-admin-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm\nlibvirt-admin-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm\nlibvirt-bash-completion-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm\nlibvirt-client-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm\nlibvirt-client-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm\nlibvirt-daemon-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm\nlibvirt-daemon-config-network-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm\nlibvirt-daemon-config-nwfilter-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm\nlibvirt-daemon-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm\nlibvirt-daemon-driver-interface-4.5.0-23.1.mod
ule+el8.0.0+3151+3ba813f9.aarch64.rpm\nlibvirt-daemon-driver-interface-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm\nlibvirt-daemon-driver-network-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm\nlibvirt-daemon-driver-network-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm\nlibvirt-daemon-driver-nodedev-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm\nlibvirt-daemon-driver-nodedev-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm\nlibvirt-daemon-driver-nwfilter-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm\nlibvirt-daemon-driver-nwfilter-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm\nlibvirt-daemon-driver-qemu-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm\nlibvirt-daemon-driver-qemu-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm\nlibvirt-daemon-driver-secret-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm\nlibvirt-daemon-driver-secret-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm\nlibvirt-daemon-driver-storage-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm\nlibvirt-daemon-driver-storage-core-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm\nlibvirt-daemon-driver-storage-core-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm\nlibvirt-daemon-driver-storage-disk-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm\nlibvirt-daemon-driver-storage-disk-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm\nlibvirt-daemon-driver-storage-gluster-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm\nlibvirt-daemon-driver-storage-gluster-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm\nlibvirt-daemon-driver-storage-iscsi-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm\nlibvirt-daemon-driver-storage-iscsi-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm\nlibvirt-daemon-driver-storage-logical-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm\nlibvirt-daemon-driver-storage-logical-debuginfo-4.5.0-23.1.module+
el8.0.0+3151+3ba813f9.aarch64.rpm\nlibvirt-daemon-driver-storage-mpath-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm\nlibvirt-daemon-driver-storage-mpath-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm\nlibvirt-daemon-driver-storage-rbd-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm\nlibvirt-daemon-driver-storage-rbd-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm\nlibvirt-daemon-driver-storage-scsi-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm\nlibvirt-daemon-driver-storage-scsi-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm\nlibvirt-daemon-kvm-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm\nlibvirt-dbus-1.2.0-2.module+el8.0.0+3075+09be6b65.aarch64.rpm\nlibvirt-dbus-debuginfo-1.2.0-2.module+el8.0.0+3075+09be6b65.aarch64.rpm\nlibvirt-dbus-debugsource-1.2.0-2.module+el8.0.0+3075+09be6b65.aarch64.rpm\nlibvirt-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm\nlibvirt-debugsource-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm\nlibvirt-devel-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm\nlibvirt-docs-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm\nlibvirt-libs-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm\nlibvirt-libs-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm\nlibvirt-lock-sanlock-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm\nlibvirt-lock-sanlock-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm\nlibvirt-nss-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm\nlibvirt-nss-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.aarch64.rpm\nlua-guestfs-1.38.4-10.module+el8.0.0+3075+09be6b65.aarch64.rpm\nlua-guestfs-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.aarch64.rpm\nnbdkit-1.4.2-4.module+el8.0.0+3075+09be6b65.aarch64.rpm\nnbdkit-basic-plugins-1.4.2-4.module+el8.0.0+3075+09be6b65.aarch64.rpm\nnbdkit-basic-plugins-debuginfo-1.4.2-4.module+el8.0.0+3075+09be6b65.aarch64.rpm\nnbdkit-debuginfo-1.4.2-4.module+el8.0.0+3075+09be6b65.aarch6
4.rpm
nbdkit-debugsource-1.4.2-4.module+el8.0.0+3075+09be6b65.aarch64.rpm
nbdkit-devel-1.4.2-4.module+el8.0.0+3075+09be6b65.aarch64.rpm
nbdkit-example-plugins-1.4.2-4.module+el8.0.0+3075+09be6b65.aarch64.rpm
nbdkit-example-plugins-debuginfo-1.4.2-4.module+el8.0.0+3075+09be6b65.aarch64.rpm
nbdkit-plugin-gzip-1.4.2-4.module+el8.0.0+3075+09be6b65.aarch64.rpm
nbdkit-plugin-gzip-debuginfo-1.4.2-4.module+el8.0.0+3075+09be6b65.aarch64.rpm
nbdkit-plugin-python-common-1.4.2-4.module+el8.0.0+3075+09be6b65.aarch64.rpm
nbdkit-plugin-python3-1.4.2-4.module+el8.0.0+3075+09be6b65.aarch64.rpm
nbdkit-plugin-python3-debuginfo-1.4.2-4.module+el8.0.0+3075+09be6b65.aarch64.rpm
nbdkit-plugin-xz-1.4.2-4.module+el8.0.0+3075+09be6b65.aarch64.rpm
nbdkit-plugin-xz-debuginfo-1.4.2-4.module+el8.0.0+3075+09be6b65.aarch64.rpm
netcf-0.2.8-10.module+el8.0.0+3075+09be6b65.aarch64.rpm
netcf-debuginfo-0.2.8-10.module+el8.0.0+3075+09be6b65.aarch64.rpm
netcf-debugsource-0.2.8-10.module+el8.0.0+3075+09be6b65.aarch64.rpm
netcf-devel-0.2.8-10.module+el8.0.0+3075+09be6b65.aarch64.rpm
netcf-libs-0.2.8-10.module+el8.0.0+3075+09be6b65.aarch64.rpm
netcf-libs-debuginfo-0.2.8-10.module+el8.0.0+3075+09be6b65.aarch64.rpm
perl-Sys-Guestfs-1.38.4-10.module+el8.0.0+3075+09be6b65.aarch64.rpm
perl-Sys-Guestfs-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.aarch64.rpm
perl-Sys-Virt-4.5.0-4.module+el8.0.0+3075+09be6b65.aarch64.rpm
perl-Sys-Virt-debuginfo-4.5.0-4.module+el8.0.0+3075+09be6b65.aarch64.rpm
perl-Sys-Virt-debugsource-4.5.0-4.module+el8.0.0+3075+09be6b65.aarch64.rpm
perl-hivex-1.3.15-6.module+el8.0.0+3075+09be6b65.aarch64.rpm
perl-hivex-debuginfo-1.3.15-6.module+el8.0.0+3075+09be6b65.aarch64.rpm
python3-hivex-1.3.15-6.module+el8.0.0+3075+09be6b65.aarch64.rpm
python3-hivex-debuginfo-1.3.15-6.module+el8.0.0+3075+09be6b65.aarch64.rpm
python3-libguestfs-1.38.4-10.module+el8.0.0+3075+09be6b65.aarch64.rpm
python3-libguestfs-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.aarch64.rpm
python3-libvirt-4.5.0-1.module+el8.0.0+3075+09be6b65.aarch64.rpm
python3-libvirt-debuginfo-4.5.0-1.module+el8.0.0+3075+09be6b65.aarch64.rpm
qemu-guest-agent-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.aarch64.rpm
qemu-guest-agent-debuginfo-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.aarch64.rpm
qemu-img-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.aarch64.rpm
qemu-img-debuginfo-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.aarch64.rpm
qemu-kvm-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.aarch64.rpm
qemu-kvm-block-curl-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.aarch64.rpm
qemu-kvm-block-curl-debuginfo-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.aarch64.rpm
qemu-kvm-block-iscsi-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.aarch64.rpm
qemu-kvm-block-iscsi-debuginfo-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.aarch64.rpm
qemu-kvm-block-rbd-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.aarch64.rpm
qemu-kvm-block-rbd-debuginfo-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.aarch64.rpm
qemu-kvm-block-ssh-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.aarch64.rpm
qemu-kvm-block-ssh-debuginfo-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.aarch64.rpm
qemu-kvm-common-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.aarch64.rpm
qemu-kvm-common-debuginfo-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.aarch64.rpm
qemu-kvm-core-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.aarch64.rpm
qemu-kvm-core-debuginfo-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.aarch64.rpm
qemu-kvm-debuginfo-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.aarch64.rpm
qemu-kvm-debugsource-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.aarch64.rpm
ruby-hivex-1.3.15-6.module+el8.0.0+3075+09be6b65.aarch64.rpm
ruby-hivex-debuginfo-1.3.15-6.module+el8.0.0+3075+09be6b65.aarch64.rpm
ruby-libguestfs-1.38.4-10.module+el8.0.0+3075+09be6b65.aarch64.rpm
ruby-libguestfs-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.aarch64.rpm
supermin-5.1.19-8.module+el8.0.0+3075+09be6b65.aarch64.rpm
supermin-debuginfo-5.1.19-8.module+el8.0.0+3075+09be6b65.aarch64.rpm
supermin-debugsource-5.1.19-8.module+el8.0.0+3075+09be6b65.aarch64.rpm
supermin-devel-5.1.19-8.module+el8.0.0+3075+09be6b65.aarch64.rpm
virt-dib-1.38.4-10.module+el8.0.0+3075+09be6b65.aarch64.rpm
virt-dib-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.aarch64.rpm

noarch:
SLOF-20171214-5.gitfa98132.module+el8.0.0+3075+09be6b65.noarch.rpm
libguestfs-bash-completion-1.38.4-10.module+el8.0.0+3075+09be6b65.noarch.rpm
libguestfs-inspect-icons-1.38.4-10.module+el8.0.0+3075+09be6b65.noarch.rpm
libguestfs-javadoc-1.38.4-10.module+el8.0.0+3075+09be6b65.noarch.rpm
libguestfs-man-pages-ja-1.38.4-10.module+el8.0.0+3075+09be6b65.noarch.rpm
libguestfs-man-pages-uk-1.38.4-10.module+el8.0.0+3075+09be6b65.noarch.rpm
libguestfs-tools-1.38.4-10.module+el8.0.0+3075+09be6b65.noarch.rpm
nbdkit-bash-completion-1.4.2-4.module+el8.0.0+3075+09be6b65.noarch.rpm
seabios-bin-1.11.1-3.module+el8.0.0+3075+09be6b65.noarch.rpm
seavgabios-bin-1.11.1-3.module+el8.0.0+3075+09be6b65.noarch.rpm
sgabios-bin-0.20170427git-2.module+el8.0.0+3075+09be6b65.noarch.rpm

ppc64le:
hivex-1.3.15-6.module+el8.0.0+3075+09be6b65.ppc64le.rpm
hivex-debuginfo-1.3.15-6.module+el8.0.0+3075+09be6b65.ppc64le.rpm
hivex-debugsource-1.3.15-6.module+el8.0.0+3075+09be6b65.ppc64le.rpm
hivex-devel-1.3.15-6.module+el8.0.0+3075+09be6b65.ppc64le.rpm
libguestfs-1.38.4-10.module+el8.0.0+3075+09be6b65.ppc64le.rpm
libguestfs-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.ppc64le.rpm
libguestfs-debugsource-1.38.4-10.module+el8.0.0+3075+09be6b65.ppc64le.rpm
libguestfs-devel-1.38.4-10.module+el8.0.0+3075+09be6b65.ppc64le.rpm
libguestfs-gfs2-1.38.4-10.module+el8.0.0+3075+09be6b65.ppc64le.rpm
libguestfs-gobject-1.38.4-10.module+el8.0.0+3075+09be6b65.ppc64le.rpm
libguestfs-gobject-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.ppc64le.rpm
libguestfs-gobject-devel-1.38.4-10.module+el8.0.0+3075+09be6b65.ppc64le.rpm
libguestfs-java-1.38.4-10.module+el8.0.0+3075+09be6b65.ppc64le.rpm
libguestfs-java-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.ppc64le.rpm
libguestfs-java-devel-1.38.4-10.module+el8.0.0+3075+09be6b65.ppc64le.rpm
libguestfs-rescue-1.38.4-10.module+el8.0.0+3075+09be6b65.ppc64le.rpm
libguestfs-rsync-1.38.4-10.module+el8.0.0+3075+09be6b65.ppc64le.rpm
libguestfs-tools-c-1.38.4-10.module+el8.0.0+3075+09be6b65.ppc64le.rpm
libguestfs-tools-c-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.ppc64le.rpm
libguestfs-winsupport-8.0-2.module+el8.0.0+3075+09be6b65.ppc64le.rpm
libguestfs-xfs-1.38.4-10.module+el8.0.0+3075+09be6b65.ppc64le.rpm
libiscsi-1.18.0-6.module+el8.0.0+3075+09be6b65.ppc64le.rpm
libiscsi-debuginfo-1.18.0-6.module+el8.0.0+3075+09be6b65.ppc64le.rpm
libiscsi-debugsource-1.18.0-6.module+el8.0.0+3075+09be6b65.ppc64le.rpm
libiscsi-devel-1.18.0-6.module+el8.0.0+3075+09be6b65.ppc64le.rpm
libiscsi-utils-1.18.0-6.module+el8.0.0+3075+09be6b65.ppc64le.rpm
libiscsi-utils-debuginfo-1.18.0-6.module+el8.0.0+3075+09be6b65.ppc64le.rpm
libssh2-1.8.0-7.module+el8.0.0+3075+09be6b65.1.ppc64le.rpm
libssh2-debuginfo-1.8.0-7.module+el8.0.0+3075+09be6b65.1.ppc64le.rpm
libssh2-debugsource-1.8.0-7.module+el8.0.0+3075+09be6b65.1.ppc64le.rpm
libvirt-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm
libvirt-admin-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm
libvirt-admin-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm
libvirt-bash-completion-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm
libvirt-client-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm
libvirt-client-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm
libvirt-daemon-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm
libvirt-daemon-config-network-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm
libvirt-daemon-config-nwfilter-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm
libvirt-daemon-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm
libvirt-daemon-driver-interface-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm
libvirt-daemon-driver-interface-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm
libvirt-daemon-driver-network-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm
libvirt-daemon-driver-network-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm
libvirt-daemon-driver-nodedev-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm
libvirt-daemon-driver-nodedev-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm
libvirt-daemon-driver-nwfilter-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm
libvirt-daemon-driver-nwfilter-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm
libvirt-daemon-driver-qemu-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm
libvirt-daemon-driver-qemu-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm
libvirt-daemon-driver-secret-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm
libvirt-daemon-driver-secret-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm
libvirt-daemon-driver-storage-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm
libvirt-daemon-driver-storage-core-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm
libvirt-daemon-driver-storage-core-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm
libvirt-daemon-driver-storage-disk-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm
libvirt-daemon-driver-storage-disk-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm
libvirt-daemon-driver-storage-gluster-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm
libvirt-daemon-driver-storage-gluster-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm
libvirt-daemon-driver-storage-iscsi-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm
libvirt-daemon-driver-storage-iscsi-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm
libvirt-daemon-driver-storage-logical-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm
libvirt-daemon-driver-storage-logical-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm
libvirt-daemon-driver-storage-mpath-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm
libvirt-daemon-driver-storage-mpath-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm
libvirt-daemon-driver-storage-rbd-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm
libvirt-daemon-driver-storage-rbd-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm
libvirt-daemon-driver-storage-scsi-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm
libvirt-daemon-driver-storage-scsi-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm
libvirt-daemon-kvm-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm
libvirt-dbus-1.2.0-2.module+el8.0.0+3075+09be6b65.ppc64le.rpm
libvirt-dbus-debuginfo-1.2.0-2.module+el8.0.0+3075+09be6b65.ppc64le.rpm
libvirt-dbus-debugsource-1.2.0-2.module+el8.0.0+3075+09be6b65.ppc64le.rpm
libvirt-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm
libvirt-debugsource-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm
libvirt-devel-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm
libvirt-docs-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm
libvirt-libs-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm
libvirt-libs-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm
libvirt-lock-sanlock-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm
libvirt-lock-sanlock-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm
libvirt-nss-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm
libvirt-nss-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.ppc64le.rpm
lua-guestfs-1.38.4-10.module+el8.0.0+3075+09be6b65.ppc64le.rpm
lua-guestfs-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.ppc64le.rpm
nbdkit-1.4.2-4.module+el8.0.0+3075+09be6b65.ppc64le.rpm
nbdkit-basic-plugins-1.4.2-4.module+el8.0.0+3075+09be6b65.ppc64le.rpm
nbdkit-basic-plugins-debuginfo-1.4.2-4.module+el8.0.0+3075+09be6b65.ppc64le.rpm
nbdkit-debuginfo-1.4.2-4.module+el8.0.0+3075+09be6b65.ppc64le.rpm
nbdkit-debugsource-1.4.2-4.module+el8.0.0+3075+09be6b65.ppc64le.rpm
nbdkit-devel-1.4.2-4.module+el8.0.0+3075+09be6b65.ppc64le.rpm
nbdkit-example-plugins-1.4.2-4.module+el8.0.0+3075+09be6b65.ppc64le.rpm
nbdkit-example-plugins-debuginfo-1.4.2-4.module+el8.0.0+3075+09be6b65.ppc64le.rpm
nbdkit-plugin-gzip-1.4.2-4.module+el8.0.0+3075+09be6b65.ppc64le.rpm
nbdkit-plugin-gzip-debuginfo-1.4.2-4.module+el8.0.0+3075+09be6b65.ppc64le.rpm
nbdkit-plugin-python-common-1.4.2-4.module+el8.0.0+3075+09be6b65.ppc64le.rpm
nbdkit-plugin-python3-1.4.2-4.module+el8.0.0+3075+09be6b65.ppc64le.rpm
nbdkit-plugin-python3-debuginfo-1.4.2-4.module+el8.0.0+3075+09be6b65.ppc64le.rpm
nbdkit-plugin-xz-1.4.2-4.module+el8.0.0+3075+09be6b65.ppc64le.rpm
nbdkit-plugin-xz-debuginfo-1.4.2-4.module+el8.0.0+3075+09be6b65.ppc64le.rpm
netcf-0.2.8-10.module+el8.0.0+3075+09be6b65.ppc64le.rpm
netcf-debuginfo-0.2.8-10.module+el8.0.0+3075+09be6b65.ppc64le.rpm
netcf-debugsource-0.2.8-10.module+el8.0.0+3075+09be6b65.ppc64le.rpm
netcf-devel-0.2.8-10.module+el8.0.0+3075+09be6b65.ppc64le.rpm
netcf-libs-0.2.8-10.module+el8.0.0+3075+09be6b65.ppc64le.rpm
netcf-libs-debuginfo-0.2.8-10.module+el8.0.0+3075+09be6b65.ppc64le.rpm
perl-Sys-Guestfs-1.38.4-10.module+el8.0.0+3075+09be6b65.ppc64le.rpm
perl-Sys-Guestfs-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.ppc64le.rpm
perl-Sys-Virt-4.5.0-4.module+el8.0.0+3075+09be6b65.ppc64le.rpm
perl-Sys-Virt-debuginfo-4.5.0-4.module+el8.0.0+3075+09be6b65.ppc64le.rpm
perl-Sys-Virt-debugsource-4.5.0-4.module+el8.0.0+3075+09be6b65.ppc64le.rpm
perl-hivex-1.3.15-6.module+el8.0.0+3075+09be6b65.ppc64le.rpm
perl-hivex-debuginfo-1.3.15-6.module+el8.0.0+3075+09be6b65.ppc64le.rpm
python3-hivex-1.3.15-6.module+el8.0.0+3075+09be6b65.ppc64le.rpm
python3-hivex-debuginfo-1.3.15-6.module+el8.0.0+3075+09be6b65.ppc64le.rpm
python3-libguestfs-1.38.4-10.module+el8.0.0+3075+09be6b65.ppc64le.rpm
python3-libguestfs-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.ppc64le.rpm
python3-libvirt-4.5.0-1.module+el8.0.0+3075+09be6b65.ppc64le.rpm
python3-libvirt-debuginfo-4.5.0-1.module+el8.0.0+3075+09be6b65.ppc64le.rpm
qemu-guest-agent-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.ppc64le.rpm
qemu-guest-agent-debuginfo-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.ppc64le.rpm
qemu-img-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.ppc64le.rpm
qemu-img-debuginfo-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.ppc64le.rpm
qemu-kvm-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.ppc64le.rpm
qemu-kvm-block-curl-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.ppc64le.rpm
qemu-kvm-block-curl-debuginfo-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.ppc64le.rpm
qemu-kvm-block-iscsi-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.ppc64le.rpm
qemu-kvm-block-iscsi-debuginfo-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.ppc64le.rpm
qemu-kvm-block-rbd-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.ppc64le.rpm
qemu-kvm-block-rbd-debuginfo-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.ppc64le.rpm
qemu-kvm-block-ssh-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.ppc64le.rpm
qemu-kvm-block-ssh-debuginfo-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.ppc64le.rpm
qemu-kvm-common-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.ppc64le.rpm
qemu-kvm-common-debuginfo-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.ppc64le.rpm
qemu-kvm-core-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.ppc64le.rpm
qemu-kvm-core-debuginfo-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.ppc64le.rpm
qemu-kvm-debuginfo-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.ppc64le.rpm
qemu-kvm-debugsource-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.ppc64le.rpm
ruby-hivex-1.3.15-6.module+el8.0.0+3075+09be6b65.ppc64le.rpm
ruby-hivex-debuginfo-1.3.15-6.module+el8.0.0+3075+09be6b65.ppc64le.rpm
ruby-libguestfs-1.38.4-10.module+el8.0.0+3075+09be6b65.ppc64le.rpm
ruby-libguestfs-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.ppc64le.rpm
supermin-5.1.19-8.module+el8.0.0+3075+09be6b65.ppc64le.rpm
supermin-debuginfo-5.1.19-8.module+el8.0.0+3075+09be6b65.ppc64le.rpm
supermin-debugsource-5.1.19-8.module+el8.0.0+3075+09be6b65.ppc64le.rpm
supermin-devel-5.1.19-8.module+el8.0.0+3075+09be6b65.ppc64le.rpm
virt-dib-1.38.4-10.module+el8.0.0+3075+09be6b65.ppc64le.rpm
virt-dib-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.ppc64le.rpm

s390x:
hivex-1.3.15-6.module+el8.0.0+3075+09be6b65.s390x.rpm
hivex-debuginfo-1.3.15-6.module+el8.0.0+3075+09be6b65.s390x.rpm
hivex-debugsource-1.3.15-6.module+el8.0.0+3075+09be6b65.s390x.rpm
hivex-devel-1.3.15-6.module+el8.0.0+3075+09be6b65.s390x.rpm
libguestfs-1.38.4-10.module+el8.0.0+3075+09be6b65.s390x.rpm
libguestfs-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.s390x.rpm
libguestfs-debugsource-1.38.4-10.module+el8.0.0+3075+09be6b65.s390x.rpm
libguestfs-devel-1.38.4-10.module+el8.0.0+3075+09be6b65.s390x.rpm
libguestfs-gfs2-1.38.4-10.module+el8.0.0+3075+09be6b65.s390x.rpm
libguestfs-gobject-1.38.4-10.module+el8.0.0+3075+09be6b65.s390x.rpm
libguestfs-gobject-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.s390x.rpm
libguestfs-gobject-devel-1.38.4-10.module+el8.0.0+3075+09be6b65.s390x.rpm
libguestfs-java-1.38.4-10.module+el8.0.0+3075+09be6b65.s390x.rpm
libguestfs-java-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.s390x.rpm
libguestfs-java-devel-1.38.4-10.module+el8.0.0+3075+09be6b65.s390x.rpm
libguestfs-rescue-1.38.4-10.module+el8.0.0+3075+09be6b65.s390x.rpm
libguestfs-rsync-1.38.4-10.module+el8.0.0+3075+09be6b65.s390x.rpm
libguestfs-tools-c-1.38.4-10.module+el8.0.0+3075+09be6b65.s390x.rpm
libguestfs-tools-c-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.s390x.rpm
libguestfs-winsupport-8.0-2.module+el8.0.0+3075+09be6b65.s390x.rpm
libguestfs-xfs-1.38.4-10.module+el8.0.0+3075+09be6b65.s390x.rpm
libiscsi-1.18.0-6.module+el8.0.0+3075+09be6b65.s390x.rpm
libiscsi-debuginfo-1.18.0-6.module+el8.0.0+3075+09be6b65.s390x.rpm
libiscsi-debugsource-1.18.0-6.module+el8.0.0+3075+09be6b65.s390x.rpm
libiscsi-devel-1.18.0-6.module+el8.0.0+3075+09be6b65.s390x.rpm
libiscsi-utils-1.18.0-6.module+el8.0.0+3075+09be6b65.s390x.rpm
libiscsi-utils-debuginfo-1.18.0-6.module+el8.0.0+3075+09be6b65.s390x.rpm
libssh2-1.8.0-7.module+el8.0.0+3075+09be6b65.1.s390x.rpm
libssh2-debuginfo-1.8.0-7.module+el8.0.0+3075+09be6b65.1.s390x.rpm
libssh2-debugsource-1.8.0-7.module+el8.0.0+3075+09be6b65.1.s390x.rpm
libvirt-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm
libvirt-admin-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm
libvirt-admin-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm
libvirt-bash-completion-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm
libvirt-client-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm
libvirt-client-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm
libvirt-daemon-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm
libvirt-daemon-config-network-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm
libvirt-daemon-config-nwfilter-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm
libvirt-daemon-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm
libvirt-daemon-driver-interface-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm
libvirt-daemon-driver-interface-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm
libvirt-daemon-driver-network-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm
libvirt-daemon-driver-network-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm
libvirt-daemon-driver-nodedev-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm
libvirt-daemon-driver-nodedev-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm
libvirt-daemon-driver-nwfilter-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm
libvirt-daemon-driver-nwfilter-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm
libvirt-daemon-driver-qemu-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm
libvirt-daemon-driver-qemu-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm
libvirt-daemon-driver-secret-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm
libvirt-daemon-driver-secret-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm
libvirt-daemon-driver-storage-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm
libvirt-daemon-driver-storage-core-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm
libvirt-daemon-driver-storage-core-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm
libvirt-daemon-driver-storage-disk-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm
libvirt-daemon-driver-storage-disk-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm
libvirt-daemon-driver-storage-gluster-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm
libvirt-daemon-driver-storage-gluster-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm
libvirt-daemon-driver-storage-iscsi-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm
libvirt-daemon-driver-storage-iscsi-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm
libvirt-daemon-driver-storage-logical-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm
libvirt-daemon-driver-storage-logical-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm
libvirt-daemon-driver-storage-mpath-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm
libvirt-daemon-driver-storage-mpath-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm
libvirt-daemon-driver-storage-rbd-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm
libvirt-daemon-driver-storage-rbd-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm
libvirt-daemon-driver-storage-scsi-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm
libvirt-daemon-driver-storage-scsi-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm
libvirt-daemon-kvm-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm
libvirt-dbus-1.2.0-2.module+el8.0.0+3075+09be6b65.s390x.rpm
libvirt-dbus-debuginfo-1.2.0-2.module+el8.0.0+3075+09be6b65.s390x.rpm
libvirt-dbus-debugsource-1.2.0-2.module+el8.0.0+3075+09be6b65.s390x.rpm
libvirt-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm
libvirt-debugsource-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm
libvirt-devel-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm
libvirt-docs-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm
libvirt-libs-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm
libvirt-libs-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm
libvirt-lock-sanlock-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm
libvirt-lock-sanlock-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm
libvirt-nss-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm
libvirt-nss-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.s390x.rpm
lua-guestfs-1.38.4-10.module+el8.0.0+3075+09be6b65.s390x.rpm
lua-guestfs-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.s390x.rpm
nbdkit-1.4.2-4.module+el8.0.0+3075+09be6b65.s390x.rpm
nbdkit-basic-plugins-1.4.2-4.module+el8.0.0+3075+09be6b65.s390x.rpm
nbdkit-basic-plugins-debuginfo-1.4.2-4.module+el8.0.0+3075+09be6b65.s390x.rpm
nbdkit-debuginfo-1.4.2-4.module+el8.0.0+3075+09be6b65.s390x.rpm
nbdkit-debugsource-1.4.2-4.module+el8.0.0+3075+09be6b65.s390x.rpm
nbdkit-devel-1.4.2-4.module+el8.0.0+3075+09be6b65.s390x.rpm
nbdkit-example-plugins-1.4.2-4.module+el8.0.0+3075+09be6b65.s390x.rpm
nbdkit-example-plugins-debuginfo-1.4.2-4.module+el8.0.0+3075+09be6b65.s390x.rpm
nbdkit-plugin-gzip-1.4.2-4.module+el8.0.0+3075+09be6b65.s390x.rpm
nbdkit-plugin-gzip-debuginfo-1.4.2-4.module+el8.0.0+3075+09be6b65.s390x.rpm
nbdkit-plugin-python-common-1.4.2-4.module+el8.0.0+3075+09be6b65.s390x.rpm
nbdkit-plugin-python3-1.4.2-4.module+el8.0.0+3075+09be6b65.s390x.rpm
nbdkit-plugin-python3-debuginfo-1.4.2-4.module+el8.0.0+3075+09be6b65.s390x.rpm
nbdkit-plugin-xz-1.4.2-4.module+el8.0.0+3075+09be6b65.s390x.rpm
nbdkit-plugin-xz-debuginfo-1.4.2-4.module+el8.0.0+3075+09be6b65.s390x.rpm
netcf-0.2.8-10.module+el8.0.0+3075+09be6b65.s390x.rpm
netcf-debuginfo-0.2.8-10.module+el8.0.0+3075+09be6b65.s390x.rpm
netcf-debugsource-0.2.8-10.module+el8.0.0+3075+09be6b65.s390x.rpm
netcf-devel-0.2.8-10.module+el8.0.0+3075+09be6b65.s390x.rpm
netcf-libs-0.2.8-10.module+el8.0.0+3075+09be6b65.s390x.rpm
netcf-libs-debuginfo-0.2.8-10.module+el8.0.0+3075+09be6b65.s390x.rpm
perl-Sys-Guestfs-1.38.4-10.module+el8.0.0+3075+09be6b65.s390x.rpm
perl-Sys-Guestfs-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.s390x.rpm
perl-Sys-Virt-4.5.0-4.module+el8.0.0+3075+09be6b65.s390x.rpm
perl-Sys-Virt-debuginfo-4.5.0-4.module+el8.0.0+3075+09be6b65.s390x.rpm
perl-Sys-Virt-debugsource-4.5.0-4.module+el8.0.0+3075+09be6b65.s390x.rpm
perl-hivex-1.3.15-6.module+el8.0.0+3075+09be6b65.s390x.rpm
perl-hivex-debuginfo-1.3.15-6.module+el8.0.0+3075+09be6b65.s390x.rpm
python3-hivex-1.3.15-6.module+el8.0.0+3075+09be6b65.s390x.rpm
python3-hivex-debuginfo-1.3.15-6.module+el8.0.0+3075+09be6b65.s390x.rpm
python3-libguestfs-1.38.4-10.module+el8.0.0+3075+09be6b65.s390x.rpm
python3-libguestfs-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.s390x.rpm
python3-libvirt-4.5.0-1.module+el8.0.0+3075+09be6b65.s390x.rpm
python3-libvirt-debuginfo-4.5.0-1.module+el8.0.0+3075+09be6b65.s390x.rpm
qemu-guest-agent-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.s390x.rpm
qemu-guest-agent-debuginfo-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.s390x.rpm
qemu-img-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.s390x.rpm
qemu-img-debuginfo-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.s390x.rpm
qemu-kvm-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.s390x.rpm
qemu-kvm-block-curl-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.s390x.rpm
qemu-kvm-block-curl-debuginfo-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.s390x.rpm
qemu-kvm-block-iscsi-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.s390x.rpm
qemu-kvm-block-iscsi-debuginfo-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.s390x.rpm
qemu-kvm-block-rbd-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.s390x.rpm
qemu-kvm-block-rbd-debuginfo-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.s390x.rpm
qemu-kvm-block-ssh-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.s390x.rpm
qemu-kvm-block-ssh-debuginfo-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.s390x.rpm
qemu-kvm-common-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.s390x.rpm
qemu-kvm-common-debuginfo-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.s390x.rpm
qemu-kvm-core-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.s390x.rpm
qemu-kvm-core-debuginfo-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.s390x.rpm
qemu-kvm-debuginfo-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.s390x.rpm
qemu-kvm-debugsource-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.s390x.rpm
ruby-hivex-1.3.15-6.module+el8.0.0+3075+09be6b65.s390x.rpm
ruby-hivex-debuginfo-1.3.15-6.module+el8.0.0+3075+09be6b65.s390x.rpm
ruby-libguestfs-1.38.4-10.module+el8.0.0+3075+09be6b65.s390x.rpm
ruby-libguestfs-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.s390x.rpm
supermin-5.1.19-8.module+el8.0.0+3075+09be6b65.s390x.rpm
supermin-debuginfo-5.1.19-8.module+el8.0.0+3075+09be6b65.s390x.rpm
supermin-debugsource-5.1.19-8.module+el8.0.0+3075+09be6b65.s390x.rpm
supermin-devel-5.1.19-8.module+el8.0.0+3075+09be6b65.s390x.rpm
virt-dib-1.38.4-10.module+el8.0.0+3075+09be6b65.s390x.rpm
virt-dib-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.s390x.rpm

x86_64:
hivex-1.3.15-6.module+el8.0.0+3075+09be6b65.x86_64.rpm
hivex-debuginfo-1.3.15-6.module+el8.0.0+3075+09be6b65.x86_64.rpm
hivex-debugsource-1.3.15-6.module+el8.0.0+3075+09be6b65.x86_64.rpm
hivex-devel-1.3.15-6.module+el8.0.0+3075+09be6b65.x86_64.rpm
libguestfs-1.38.4-10.module+el8.0.0+3075+09be6b65.x86_64.rpm
libguestfs-benchmarking-1.38.4-10.module+el8.0.0+3075+09be6b65.x86_64.rpm
libguestfs-benchmarking-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.x86_64.rpm
libguestfs-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.x86_64.rpm
libguestfs-debugsource-1.38.4-10.module+el8.0.0+3075+09be6b65.x86_64.rpm
libguestfs-devel-1.38.4-10.module+el8.0.0+3075+09be6b65.x86_64.rpm
libguestfs-gfs2-1.38.4-10.module+el8.0.0+3075+09be6b65.x86_64.rpm
libguestfs-gobject-1.38.4-10.module+el8.0.0+3075+09be6b65.x86_64.rpm
libguestfs-gobject-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.x86_64.rpm
libguestfs-gobject-devel-1.38.4-10.module+el8.0.0+3075+09be6b65.x86_64.rpm
libguestfs-java-1.38.4-10.module+el8.0.0+3075+09be6b65.x86_64.rpm
libguestfs-java-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.x86_64.rpm
libguestfs-java-devel-1.38.4-10.module+el8.0.0+3075+09be6b65.x86_64.rpm
libguestfs-rescue-1.38.4-10.module+el8.0.0+3075+09be6b65.x86_64.rpm
libguestfs-rsync-1.38.4-10.module+el8.0.0+3075+09be6b65.x86_64.rpm
libguestfs-tools-c-1.38.4-10.module+el8.0.0+3075+09be6b65.x86_64.rpm
libguestfs-tools-c-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.x86_64.rpm
libguestfs-winsupport-8.0-2.module+el8.0.0+3075+09be6b65.x86_64.rpm
libguestfs-xfs-1.38.4-10.module+el8.0.0+3075+09be6b65.x86_64.rpm
libiscsi-1.18.0-6.module+el8.0.0+3075+09be6b65.x86_64.rpm
libiscsi-debuginfo-1.18.0-6.module+el8.0.0+3075+09be6b65.x86_64.rpm
libiscsi-debugsource-1.18.0-6.module+el8.0.0+3075+09be6b65.x86_64.rpm
libiscsi-devel-1.18.0-6.module+el8.0.0+3075+09be6b65.x86_64.rpm
libiscsi-utils-1.18.0-6.module+el8.0.0+3075+09be6b65.x86_64.rpm
libiscsi-utils-debuginfo-1.18.0-6.module+el8.0.0+3075+09be6b65.x86_64.rpm
libssh2-1.8.0-7.module+el8.0.0+3075+09be6b65.1.x86_64.rpm
libssh2-debuginfo-1.8.0-7.module+el8.0.0+3075+09be6b65.1.x86_64.rpm
libssh2-debugsource-1.8.0-7.module+el8.0.0+3075+09be6b65.1.x86_64.rpm
libvirt-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm
libvirt-admin-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm
libvirt-admin-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm
libvirt-bash-completion-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm
libvirt-client-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm
libvirt-client-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm
libvirt-daemon-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm
libvirt-daemon-config-network-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm
libvirt-daemon-config-nwfilter-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm
libvirt-daemon-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm
libvirt-daemon-driver-interface-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm
libvirt-daemon-driver-interface-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm
libvirt-daemon-driver-network-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm
libvirt-daemon-driver-network-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm
libvirt-daemon-driver-nodedev-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm
libvirt-daemon-driver-nodedev-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm
libvirt-daemon-driver-nwfilter-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm
libvirt-daemon-driver-nwfilter-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm
libvirt-daemon-driver-qemu-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm
libvirt-daemon-driver-qemu-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm
libvirt-daemon-driver-secret-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm
libvirt-daemon-driver-secret-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm
libvirt-daemon-driver-storage-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm
libvirt-daemon-driver-storage-core-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm
libvirt-daemon-driver-storage-core-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm
libvirt-daemon-driver-storage-disk-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm
libvirt-daemon-driver-storage-disk-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm
libvirt-daemon-driver-storage-gluster-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm
libvirt-daemon-driver-storage-gluster-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm
libvirt-daemon-driver-storage-iscsi-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm
libvirt-daemon-driver-storage-iscsi-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm
libvirt-daemon-driver-storage-logical-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm
libvirt-daemon-driver-storage-logical-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm
libvirt-daemon-driver-storage-mpath-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm
libvirt-daemon-driver-storage-mpath-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm
libvirt-daemon-driver-storage-rbd-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm
libvirt-daemon-driver-storage-rbd-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm
libvirt-daemon-driver-storage-scsi-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm
libvirt-daemon-driver-storage-scsi-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm
libvirt-daemon-kvm-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm
libvirt-dbus-1.2.0-2.module+el8.0.0+3075+09be6b65.x86_64.rpm
libvirt-dbus-debuginfo-1.2.0-2.module+el8.0.0+3075+09be6b65.x86_64.rpm
libvirt-dbus-debugsource-1.2.0-2.module+el8.0.0+3075+09be6b65.x86_64.rpm
libvirt-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm
libvirt-debugsource-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm
libvirt-devel-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm
libvirt-docs-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm
libvirt-libs-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm
libvirt-libs-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm
libvirt-lock-sanlock-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm
libvirt-lock-sanlock-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm
libvirt-nss-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm
libvirt-nss-debuginfo-4.5.0-23.1.module+el8.0.0+3151+3ba813f9.x86_64.rpm
lua-guestfs-1.38.4-10.module+el8.0.0+3075+09be6b65.x86_64.rpm
lua-guestfs-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.x86_64.rpm
nbdkit-1.4.2-4.module+el8.0.0+3075+09be6b65.x86_64.rpm
nbdkit-basic-plugins-1.4.2-4.module+el8.0.0+3075+09be6b65.x86_64.rpm
nbdkit-basic-plugins-debuginfo-1.4.2-4.module+el8.0.0+3075+09be6b65.x86_64.rpm
nbdkit-debuginfo-1.4.2-4.module+el8.0.0+3075+09be6b65.x86_64.rpm
nbdkit-debugsource-1.4.2-4.module+el8.0.0+3075+09be6b65.x86_64.rpm
nbdkit-devel-1.4.2-4.module+el8.0.0+3075+09be6b65.x86_64.rpm
nbdkit-example-plugins-1.4.2-4.module+el8.0.0+3075+09be6b65.x86_64.rpm
nbdkit-example-plugins-debuginfo-1.4.2-4.module+el8.0.0+3075+09be6b65.x86_64.rpm
nbdkit-plugin-gzip-1.4.2-4.module+el8.0.0+3075+09be6b65.x86_64.rpm
nbdkit-plugin-gzip-debuginfo-1.4.2-4.module+el8.0.0+3075+09be6b65.x86_64.rpm
nbdkit-plugin-python-common-1.4.2-4.module+el8.0.0+3075+09be6b65.x86_64.rpm
nbdkit-plugin-python3-1.4.2-4.module+el8.0.0+3075+09be6b65.x86_64.rpm
nbdkit-plugin-python3-debuginfo-1.4.2-4.module+el8.0.0+3075+09be6b65.x86_64.rpm
nbdkit-plugin-vddk-1.4.2-4.module+el8.0.0+3075+09be6b65.x86_64.rpm
nbdkit-plugin-vddk-debuginfo-1.4.2-4.module+el8.0.0+3075+09be6b65.x86_64.rpm
nbdkit-plugin-xz-1.4.2-4.module+el8.0.0+3075+09be6b65.x86_64.rpm
nbdkit-plugin-xz-debuginfo-1.4.2-4.module+el8.0.0+3075+09be6b65.x86_64.rpm
netcf-0.2.8-10.module+el8.0.0+3075+09be6b65.x86_64.rpm
netcf-debuginfo-0.2.8-10.module+el8.0.0+3075+09be6b65.x86_64.rpm
netcf-debugsource-0.2.8-10.module+el8.0.0+3075+09be6b65.x86_64.rpm
netcf-devel-0.2.8-10.module+el8.0.0+3075+09be6b65.x86_64.rpm
netcf-libs-0.2.8-10.module+el8.0.0+3075+09be6b65.x86_64.rpm
netcf-libs-debuginfo-0.2.8-10.module+el8.0.0+3075+09be6b65.x86_64.rpm
perl-Sys-Guestfs-1.38.4-10.module+el8.0.0+3075+09be6b65.x86_64.rpm
perl-Sys-Guestfs-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.x86_64.rpm
perl-Sys-Virt-4.5.0-4.module+el8.0.0+3075+09be6b65.x86_64.rpm
perl-Sys-Virt-debuginfo-4.5.0-4.module+el8.0.0+3075+09be6b65.x86_64.rpm
perl-Sys-Virt-debugsource-4.5.0-4.module+el8.0.0+3075+09be6b65.x86_64.rpm
perl-hivex-1.3.15-6.module+el8.0.0+3075+09be6b65.x86_64.rpm
perl-hivex-debuginfo-1.3.15-6.module+el8.0.0+3075+09be6b65.x86_64.rpm
python3-hivex-1.3.15-6.module+el8.0.0+3075+09be6b65.x86_64.rpm
python3-hivex-debuginfo-1.3.15-6.module+el8.0.0+3075+09be6b65.x86_64.rpm
python3-libguestfs-1.38.4-10.module+el8.0.0+3075+09be6b65.x86_64.r
pm\npython3-libguestfs-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.x86_64.rpm\npython3-libvirt-4.5.0-1.module+el8.0.0+3075+09be6b65.x86_64.rpm\npython3-libvirt-debuginfo-4.5.0-1.module+el8.0.0+3075+09be6b65.x86_64.rpm\nqemu-guest-agent-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.x86_64.rpm\nqemu-guest-agent-debuginfo-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.x86_64.rpm\nqemu-img-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.x86_64.rpm\nqemu-img-debuginfo-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.x86_64.rpm\nqemu-kvm-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.x86_64.rpm\nqemu-kvm-block-curl-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.x86_64.rpm\nqemu-kvm-block-curl-debuginfo-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.x86_64.rpm\nqemu-kvm-block-gluster-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.x86_64.rpm\nqemu-kvm-block-gluster-debuginfo-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.x86_64.rpm\nqemu-kvm-block-iscsi-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.x86_64.rpm\nqemu-kvm-block-iscsi-debuginfo-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.x86_64.rpm\nqemu-kvm-block-rbd-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.x86_64.rpm\nqemu-kvm-block-rbd-debuginfo-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.x86_64.rpm\nqemu-kvm-block-ssh-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.x86_64.rpm\nqemu-kvm-block-ssh-debuginfo-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.x86_64.rpm\nqemu-kvm-common-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.x86_64.rpm\nqemu-kvm-common-debuginfo-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.x86_64.rpm\nqemu-kvm-core-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.x86_64.rpm\nqemu-kvm-core-debuginfo-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.x86_64.rpm\nqemu-kvm-debuginfo-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.x86_64.rpm\nqemu-kvm-debugsource-2.12.0-64.module+el8.0.0+3180+d6a3561d.2.x86_64.rpm\nruby-hivex-1.3.15-6.module+el8.0.0+3075+09be6b65.x86_64.rpm\nruby-hivex-debuginfo-1.3.15-6.module+el8.0.0+3075+09be6b65.x86_64.rpm\nruby-libguestfs-1.38.4-10.module+el8.0.0+3075+09be6b65.x86_64.rpm\nruby-libg
uestfs-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.x86_64.rpm\nseabios-1.11.1-3.module+el8.0.0+3075+09be6b65.x86_64.rpm\nsgabios-0.20170427git-2.module+el8.0.0+3075+09be6b65.x86_64.rpm\nsupermin-5.1.19-8.module+el8.0.0+3075+09be6b65.x86_64.rpm\nsupermin-debuginfo-5.1.19-8.module+el8.0.0+3075+09be6b65.x86_64.rpm\nsupermin-debugsource-5.1.19-8.module+el8.0.0+3075+09be6b65.x86_64.rpm\nsupermin-devel-5.1.19-8.module+el8.0.0+3075+09be6b65.x86_64.rpm\nvirt-dib-1.38.4-10.module+el8.0.0+3075+09be6b65.x86_64.rpm\nvirt-dib-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.x86_64.rpm\nvirt-p2v-maker-1.38.4-10.module+el8.0.0+3075+09be6b65.x86_64.rpm\nvirt-v2v-1.38.4-10.module+el8.0.0+3075+09be6b65.x86_64.rpm\nvirt-v2v-debuginfo-1.38.4-10.module+el8.0.0+3075+09be6b65.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. References:\n\nhttps://access.redhat.com/security/cve/CVE-2018-12126\nhttps://access.redhat.com/security/cve/CVE-2018-12127\nhttps://access.redhat.com/security/cve/CVE-2018-12130\nhttps://access.redhat.com/security/cve/CVE-2018-20815\nhttps://access.redhat.com/security/cve/CVE-2019-3855\nhttps://access.redhat.com/security/cve/CVE-2019-3856\nhttps://access.redhat.com/security/cve/CVE-2019-3857\nhttps://access.redhat.com/security/cve/CVE-2019-3863\nhttps://access.redhat.com/security/cve/CVE-2019-11091\nhttps://access.redhat.com/security/updates/classification/#important\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2019 Red Hat, Inc. 
\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBXNsFdNzjgjWX9erEAQjf/g/+IPQ7NKuK24reC2hW29G51Nno6oF2bwsO\nyNTBaVjP5U1cRHhDrvv3V+Pao8Pj4sB3BRJHYgO8KHMj1uJmP72AdAzaPPkJxoDh\n42FOaNLkfQkguzreRN+ty+jHaVUumvuqf9HViVrJyvR+cfvV2tF8poGmKoWrEK5s\nrSOkvp3haP0HzwVN9wSnrlFGU/DrsLyg80+BuJb878ecSPRHiy/6ZuLd/nkO8fnO\nVKvDlTKEHAOwZWPmBTduGwOPe4J3fB+9chgK6ZcZpnh+lPSonkIfTXA1svbD8Un/\nFsC3wxDdHA9wRkwZZquRgaAeDWwYtKe7nMWSiR6USTWAkh8gruf53eW6//A6999Q\noI4wHzKQjJbYH9Pvc3AlQj+5nemvnfyBF/V0UijTHbRBxtJvnIsdro2bpgYsF3Mu\nJD6kMP7l5D51eQ3tNxDdeB49YNctPF0HuGbw7x0CojBhlQW7k10Ul3/LtqEu2Av4\nTqAJP3ENBC1C7VT1zGUSfc8neNNQxJzV9Co08w61bNtd4fo29uv0fOvDy+1J+7CT\nfOzF2slJTOJ/cqwcaR8j/SjKSFUIrHBKEPYWfVybmKLJhfQCmUzWE7sHZJ+9jKkb\nLDT+GUF9+TE7CNkD95vBlgs8kG3R76ZG5NSxjI1GDOLNNuhqH3/RZh3KNE17ut/r\nM5otU3RxBZs=\n=634V\n-----END PGP SIGNATURE-----\n\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://www.redhat.com/mailman/listinfo/rhsa-announce\n. 7.3) - x86_64\n\n3. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA512\n\n- -------------------------------------------------------------------------\nDebian Security Advisory DSA-4431-1 security@debian.org\nhttps://www.debian.org/security/ Salvatore Bonaccorso\nApril 13, 2019 https://www.debian.org/security/faq\n- -------------------------------------------------------------------------\n\nPackage : libssh2\nCVE ID : CVE-2019-3855 CVE-2019-3856 CVE-2019-3857 CVE-2019-3858\n CVE-2019-3859 CVE-2019-3860 CVE-2019-3861 CVE-2019-3862\n CVE-2019-3863\nDebian Bug : 924965\n\nChris Coulson discovered several vulnerabilities in libssh2, a SSH2\nclient-side library, which could result in denial of service,\ninformation leaks or the execution of arbitrary code. \n\nFor the stable distribution (stretch), these problems have been fixed in\nversion 1.7.0-1+deb9u1. 
\n\nWe recommend that you upgrade your libssh2 packages", "sources": [ { "db": "NVD", "id": "CVE-2019-3855" }, { "db": "JVNDB", "id": "JVNDB-2019-002832" }, { "db": "VULHUB", "id": "VHN-155290" }, { "db": "VULMON", "id": "CVE-2019-3855" }, { "db": "PACKETSTORM", "id": "154655" }, { "db": "PACKETSTORM", "id": "153510" }, { "db": "PACKETSTORM", "id": "152874" }, { "db": "PACKETSTORM", "id": "153969" }, { "db": "PACKETSTORM", "id": "153654" }, { "db": "PACKETSTORM", "id": "153811" }, { "db": "PACKETSTORM", "id": "152509" } ], "trust": 2.43 }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "NVD", "id": "CVE-2019-3855", "trust": 3.3 }, { "db": "PACKETSTORM", "id": "152136", "trust": 1.8 }, { "db": "OPENWALL", "id": "OSS-SECURITY/2019/03/18/3", "trust": 1.8 }, { "db": "BID", "id": "107485", "trust": 1.8 }, { "db": "JVNDB", "id": "JVNDB-2019-002832", "trust": 0.8 }, { "db": "CNNVD", "id": "CNNVD-201903-634", "trust": 0.7 }, { "db": "AUSCERT", "id": "ESB-2019.4341", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2020.2340", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.4083", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2019.1274", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2019.4479.2", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2019.0911", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2020.4226", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2019.0996", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2019.0894", "trust": 0.6 }, { "db": "PACKETSTORM", "id": "152509", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "153654", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "154655", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "153510", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "153969", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "153811", "trust": 0.2 }, { "db": 
"PACKETSTORM", "id": "152282", "trust": 0.1 }, { "db": "VULHUB", "id": "VHN-155290", "trust": 0.1 }, { "db": "VULMON", "id": "CVE-2019-3855", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "152874", "trust": 0.1 } ], "sources": [ { "db": "VULHUB", "id": "VHN-155290" }, { "db": "VULMON", "id": "CVE-2019-3855" }, { "db": "JVNDB", "id": "JVNDB-2019-002832" }, { "db": "PACKETSTORM", "id": "154655" }, { "db": "PACKETSTORM", "id": "153510" }, { "db": "PACKETSTORM", "id": "152874" }, { "db": "PACKETSTORM", "id": "153969" }, { "db": "PACKETSTORM", "id": "153654" }, { "db": "PACKETSTORM", "id": "153811" }, { "db": "PACKETSTORM", "id": "152509" }, { "db": "CNNVD", "id": "CNNVD-201903-634" }, { "db": "NVD", "id": "CVE-2019-3855" } ] }, "id": "VAR-201903-0388", "iot": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": true, "sources": [ { "db": "VULHUB", "id": "VHN-155290" } ], "trust": 0.01 }, "last_update_date": "2024-07-23T21:20:42.429000Z", "patch": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/patch#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "title": "[SECURITY] [DLA 1730-1] libssh2 security update", "trust": 0.8, "url": "https://lists.debian.org/debian-lts-announce/2019/03/msg00032.html" }, { "title": "DSA-4431", "trust": 0.8, "url": "https://www.debian.org/security/2019/dsa-4431" }, { "title": "FEDORA-2019-f31c14682f", "trust": 0.8, "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/xcwea5zclkrduk62qvvymfwlwkopx3lo/" }, { "title": "Possible integer overflow in transport read allows out-of-bounds write", "trust": 0.8, "url": "https://www.libssh2.org/cve-2019-3855.html" }, { "title": "NTAP-20190327-0005", "trust": 0.8, "url": 
"https://security.netapp.com/advisory/ntap-20190327-0005/" }, { "title": "Bug 1687303", "trust": 0.8, "url": "https://bugzilla.redhat.com/show_bug.cgi?id=cve-2019-3855" }, { "title": "RHSA-2019:0679", "trust": 0.8, "url": "https://access.redhat.com/errata/rhsa-2019:0679" }, { "title": "libssh2 Fixes for digital error vulnerabilities", "trust": 0.6, "url": "http://www.cnnvd.org.cn/web/xxk/bdxqbyid.tag?id=90196" }, { "title": "Red Hat: Important: libssh2 security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20191652 - security advisory" }, { "title": "Red Hat: Important: libssh2 security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20191791 - security advisory" }, { "title": "Red Hat: Important: libssh2 security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20192399 - security advisory" }, { "title": "Red Hat: Important: libssh2 security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20190679 - security advisory" }, { "title": "Red Hat: Important: libssh2 security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20191943 - security advisory" }, { "title": "Debian CVElist Bug Report Logs: libssh2: CVE-2019-13115", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=debian_cvelist_bugreportlogs\u0026qid=fae8ca9a607a0d36a41864075e4d1739" }, { "title": "Arch Linux Issues: ", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_issues\u0026qid=cve-2019-3855" }, { "title": "Red Hat: Important: virt:rhel security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20191175 - security advisory" }, { "title": "Amazon Linux AMI: ALAS-2019-1254", "trust": 0.1, 
"url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux_ami\u0026qid=alas-2019-1254" }, { "title": "Amazon Linux 2: ALAS2-2019-1199", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=alas2-2019-1199" }, { "title": "IBM: IBM Security Bulletin: IBM has announced a release for IBM Security Identity Governance and Intelligence in response to multiple security vulnerabilities (CVE-2019-3855, CVE-2019-3856, CVE-2019-3857, CVE-2019-3863)", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=90ea192442f00a544f31c35e3585a0e6" }, { "title": "Debian CVElist Bug Report Logs: libssh2: CVE-2019-3855 CVE-2019-3856 CVE-2019-3857 CVE-2019-3858 CVE-2019-3859 CVE-2019-3860 CVE-2019-3861 CVE-2019-3862 CVE-2019-3863", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=debian_cvelist_bugreportlogs\u0026qid=00191547a456d0cf5c7b101c1774a050" }, { "title": "Debian Security Advisories: DSA-4431-1 libssh2 -- security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=debian_security_advisories\u0026qid=32e9048e9588619b2dfacda6369a23ee" }, { "title": "IBM: IBM Security Bulletin: IBM QRadar Network Security is affected by multiple libssh2 vulnerabilities (CVE-2019-3863, CVE-2019-3857, CVE-2019-3856, CVE-2019-3855)", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=55b92934c6d6315aa40e8be4ce2a8bf4" }, { "title": "IBM: IBM Security Bulletin: Vulnerabiliies in libssh2 affect PowerKVM", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=6e0e5e527a9204c06a52ef667608c6e8" }, { "title": "Arch Linux Advisories: [ASA-201903-13] libssh2: multiple issues", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_advisories\u0026qid=asa-201903-13" }, { "title": "Oracle VM Server for x86 Bulletins: Oracle VM Server for x86 Bulletin - July 2019", "trust": 0.1, "url": 
"https://vulmon.com/vendoradvisory?qidtp=oracle_vm_server_for_x86_bulletins\u0026qid=b76ca4c2e9a0948d77d969fddc7b121b" }, { "title": "Oracle Linux Bulletins: Oracle Linux Bulletin - April 2019", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=oracle_linux_bulletins\u0026qid=0cf12ffad0c479958deb0741d0970b4e" }, { "title": "Oracle Linux Bulletins: Oracle Linux Bulletin - July 2019", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=oracle_linux_bulletins\u0026qid=767e8ff3a913d6c9b177c63c24420933" }, { "title": "IBM: IBM Security Bulletin: Vyatta 5600 vRouter Software Patches \u2013 Release 1801-z", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=4ef3e54cc5cdc194f0526779f9480f89" }, { "title": "Fortinet Security Advisories: libssh2 integer overflow and out of bounds read/write vulnerabilities", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=fortinet_security_advisories\u0026qid=fg-ir-19-099" }, { "title": "IBM: IBM Security Bulletin: Multiple Security vulnerabilities have been fixed in the IBM Security Access Manager Appliance", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=1519a5f830589c3bab8a20f4163374ae" }, { "title": "Siemens Security Advisories: Siemens Security Advisory", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=siemens_security_advisories\u0026qid=ec6577109e640dac19a6ddb978afe82d" }, { "title": "TrivyWeb", "trust": 0.1, "url": "https://github.com/korayagaya/trivyweb " }, { "title": "github_aquasecurity_trivy", "trust": 0.1, "url": "https://github.com/back8/github_aquasecurity_trivy " }, { "title": "trivy", "trust": 0.1, "url": "https://github.com/simiyo/trivy " }, { "title": "security", "trust": 0.1, "url": "https://github.com/umahari/security " }, { "title": "", "trust": 0.1, "url": "https://github.com/mohzeela/external-secret " }, { "title": "Vulnerability-Scanner-for-Containers", "trust": 0.1, "url": 
"https://github.com/t31m0/vulnerability-scanner-for-containers " }, { "title": "trivy", "trust": 0.1, "url": "https://github.com/siddharthraopotukuchi/trivy " }, { "title": "trivy", "trust": 0.1, "url": "https://github.com/aquasecurity/trivy " }, { "title": "trivy", "trust": 0.1, "url": "https://github.com/knqyf263/trivy " }, { "title": "PoC-in-GitHub", "trust": 0.1, "url": "https://github.com/developer3000s/poc-in-github " }, { "title": "CVE-POC", "trust": 0.1, "url": "https://github.com/0xt11/cve-poc " }, { "title": "PoC-in-GitHub", "trust": 0.1, "url": "https://github.com/nomi-sec/poc-in-github " }, { "title": "PoC-in-GitHub", "trust": 0.1, "url": "https://github.com/hectorgie/poc-in-github " } ], "sources": [ { "db": "VULMON", "id": "CVE-2019-3855" }, { "db": "JVNDB", "id": "JVNDB-2019-002832" }, { "db": "CNNVD", "id": "CNNVD-201903-634" } ] }, "problemtype_data": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "problemtype": "CWE-190", "trust": 1.9 }, { "problemtype": "CWE-787", "trust": 1.1 } ], "sources": [ { "db": "VULHUB", "id": "VHN-155290" }, { "db": "JVNDB", "id": "JVNDB-2019-002832" }, { "db": "NVD", "id": "CVE-2019-3855" } ] }, "references": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/references#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 3.0, "url": "http://packetstormsecurity.com/files/152136/slackware-security-advisory-libssh2-updates.html" }, { "trust": 2.4, "url": "http://www.securityfocus.com/bid/107485" }, { "trust": 2.4, "url": "https://www.debian.org/security/2019/dsa-4431" }, { "trust": 2.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-3855" }, { "trust": 1.9, "url": "https://access.redhat.com/errata/rhsa-2019:1175" }, { "trust": 1.9, "url": 
"https://access.redhat.com/errata/rhsa-2019:1652" }, { "trust": 1.9, "url": "https://access.redhat.com/errata/rhsa-2019:1791" }, { "trust": 1.9, "url": "https://access.redhat.com/errata/rhsa-2019:1943" }, { "trust": 1.9, "url": "https://access.redhat.com/errata/rhsa-2019:2399" }, { "trust": 1.8, "url": "https://seclists.org/bugtraq/2019/mar/25" }, { "trust": 1.8, "url": "https://seclists.org/bugtraq/2019/apr/25" }, { "trust": 1.8, "url": "https://seclists.org/bugtraq/2019/sep/49" }, { "trust": 1.8, "url": "https://bugzilla.redhat.com/show_bug.cgi?id=cve-2019-3855" }, { "trust": 1.8, "url": "https://security.netapp.com/advisory/ntap-20190327-0005/" }, { "trust": 1.8, "url": "https://support.apple.com/kb/ht210609" }, { "trust": 1.8, "url": "https://www.broadcom.com/support/fibre-channel-networking/security-advisories/brocade-security-advisory-2019-767" }, { "trust": 1.8, "url": "http://seclists.org/fulldisclosure/2019/sep/42" }, { "trust": 1.8, "url": "https://www.libssh2.org/cve-2019-3855.html" }, { "trust": 1.8, "url": "https://www.oracle.com/technetwork/security-advisory/cpuoct2019-5072832.html" }, { "trust": 1.8, "url": "https://lists.debian.org/debian-lts-announce/2019/03/msg00032.html" }, { "trust": 1.8, "url": "http://www.openwall.com/lists/oss-security/2019/03/18/3" }, { "trust": 1.8, "url": "https://access.redhat.com/errata/rhsa-2019:0679" }, { "trust": 1.8, "url": "http://lists.opensuse.org/opensuse-security-announce/2019-03/msg00040.html" }, { "trust": 1.8, "url": "http://lists.opensuse.org/opensuse-security-announce/2019-04/msg00003.html" }, { "trust": 1.1, "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/xcwea5zclkrduk62qvvymfwlwkopx3lo/" }, { "trust": 1.1, "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/5dk6vo2ceutajfyikwnzkekymyr3no2o/" }, { "trust": 1.1, "url": 
"https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/6lunhpw64igcasz4jq2j5kdxnzn53dww/" }, { "trust": 1.1, "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/m7if3lnhoa75o4wzwihjlirma5ljued3/" }, { "trust": 0.8, "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-3855\\" }, { "trust": 0.7, "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/5dk6vo2ceutajfyikwnzkekymyr3no2o/" }, { "trust": 0.7, "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/m7if3lnhoa75o4wzwihjlirma5ljued3/" }, { "trust": 0.7, "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/6lunhpw64igcasz4jq2j5kdxnzn53dww/" }, { "trust": 0.7, "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/xcwea5zclkrduk62qvvymfwlwkopx3lo/" }, { "trust": 0.6, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-3856" }, { "trust": 0.6, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-3857" }, { "trust": 0.6, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-3863" }, { "trust": 0.6, "url": "https://www.suse.com/support/update/announcement/2019/suse-su-20190655-1.html" }, { "trust": 0.6, "url": "https://fortiguard.com/psirt/fg-ir-19-099" }, { "trust": 0.6, "url": "https://lists.debian.org/debian-lts-announce/2019/01/msg00028.html" }, { "trust": 0.6, "url": "https://www.ibm.com/support/pages/node/1115655" }, { "trust": 0.6, "url": "https://www.ibm.com/support/pages/node/1115643" }, { "trust": 0.6, "url": "https://www.ibm.com/support/pages/node/1115649" }, { "trust": 0.6, "url": "https://www.suse.com/support/update/announcement/2019/suse-su-201913982-1.html" }, { "trust": 0.6, "url": "https://www.ibm.com/support/pages/node/6520674" }, { "trust": 0.6, "url": 
"https://vigilance.fr/vulnerability/libssh2-multiple-vulnerabilities-28768" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/77838" }, { "trust": 0.6, "url": "https://www.ibm.com/support/pages/node/1120209" }, { "trust": 0.6, "url": "https://support.apple.com/en-us/ht210609" }, { "trust": 0.6, "url": "https://www.ibm.com/support/pages/node/1116357" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2020.2340/" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2020.4226/" }, { "trust": 0.6, "url": "https://www.ibm.com/support/pages/node/1170634" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/79010" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2019.4341/" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/77478" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/77406" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2019.4479.2/" }, { "trust": 0.6, "url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-integrated-management-module-ii-imm2-is-affected-by-multiple-vulnerabilities-in-libssh2/" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.4083" }, { "trust": 0.5, "url": "https://www.redhat.com/mailman/listinfo/rhsa-announce" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2019-3863" }, { "trust": 0.5, "url": "https://bugzilla.redhat.com/):" }, { "trust": 0.5, "url": "https://access.redhat.com/security/team/key/" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2019-3857" }, { "trust": 0.5, "url": "https://access.redhat.com/security/updates/classification/#important" }, { "trust": 0.5, "url": "https://access.redhat.com/articles/11258" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2019-3856" }, { "trust": 0.5, "url": "https://access.redhat.com/security/team/contact/" }, { "trust": 0.5, "url": 
"https://access.redhat.com/security/cve/cve-2019-3855" }, { "trust": 0.1, "url": "https://cwe.mitre.org/data/definitions/787.html" }, { "trust": 0.1, "url": "https://cwe.mitre.org/data/definitions/190.html" }, { "trust": 0.1, "url": "https://tools.cisco.com/security/center/viewalert.x?alertid=59797" }, { "trust": 0.1, "url": "https://nvd.nist.gov" }, { "trust": 0.1, "url": "https://github.com/korayagaya/trivyweb" }, { "trust": 0.1, "url": "https://support.apple.com/kb/ht201222" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8724" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8723" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8738" }, { "trust": 0.1, "url": "https://www.apple.com/support/security/pgp/" }, { "trust": 0.1, "url": "https://developer.apple.com/xcode/downloads/" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8722" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8721" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8739" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-11091" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-20815" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-12126" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-12127" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-12126" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-11091" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-12130" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-20815" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-12127" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-12130" }, { "trust": 0.1, "url": "https://security-tracker.debian.org/tracker/libssh2" }, { "trust": 0.1, "url": 
"https://nvd.nist.gov/vuln/detail/cve-2019-3859" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-3860" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-3861" }, { "trust": 0.1, "url": "https://www.debian.org/security/faq" }, { "trust": 0.1, "url": "https://www.debian.org/security/" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-3862" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-3858" } ], "sources": [ { "db": "VULHUB", "id": "VHN-155290" }, { "db": "VULMON", "id": "CVE-2019-3855" }, { "db": "JVNDB", "id": "JVNDB-2019-002832" }, { "db": "PACKETSTORM", "id": "154655" }, { "db": "PACKETSTORM", "id": "153510" }, { "db": "PACKETSTORM", "id": "152874" }, { "db": "PACKETSTORM", "id": "153969" }, { "db": "PACKETSTORM", "id": "153654" }, { "db": "PACKETSTORM", "id": "153811" }, { "db": "PACKETSTORM", "id": "152509" }, { "db": "CNNVD", "id": "CNNVD-201903-634" }, { "db": "NVD", "id": "CVE-2019-3855" } ] }, "sources": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "VULHUB", "id": "VHN-155290" }, { "db": "VULMON", "id": "CVE-2019-3855" }, { "db": "JVNDB", "id": "JVNDB-2019-002832" }, { "db": "PACKETSTORM", "id": "154655" }, { "db": "PACKETSTORM", "id": "153510" }, { "db": "PACKETSTORM", "id": "152874" }, { "db": "PACKETSTORM", "id": "153969" }, { "db": "PACKETSTORM", "id": "153654" }, { "db": "PACKETSTORM", "id": "153811" }, { "db": "PACKETSTORM", "id": "152509" }, { "db": "CNNVD", "id": "CNNVD-201903-634" }, { "db": "NVD", "id": "CVE-2019-3855" } ] }, "sources_release_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2019-03-21T00:00:00", "db": "VULHUB", "id": "VHN-155290" }, { "date": "2019-03-21T00:00:00", "db": "VULMON", "id": "CVE-2019-3855" }, { "date": "2019-04-24T00:00:00", "db": "JVNDB", "id": "JVNDB-2019-002832" }, 
{ "date": "2019-09-29T10:11:11", "db": "PACKETSTORM", "id": "154655" }, { "date": "2019-07-02T14:08:10", "db": "PACKETSTORM", "id": "153510" }, { "date": "2019-05-15T14:55:50", "db": "PACKETSTORM", "id": "152874" }, { "date": "2019-08-07T20:10:33", "db": "PACKETSTORM", "id": "153969" }, { "date": "2019-07-16T20:10:44", "db": "PACKETSTORM", "id": "153654" }, { "date": "2019-07-30T18:13:57", "db": "PACKETSTORM", "id": "153811" }, { "date": "2019-04-15T16:33:02", "db": "PACKETSTORM", "id": "152509" }, { "date": "2019-03-19T00:00:00", "db": "CNNVD", "id": "CNNVD-201903-634" }, { "date": "2019-03-21T21:29:00.433000", "db": "NVD", "id": "CVE-2019-3855" } ] }, "sources_update_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2020-10-15T00:00:00", "db": "VULHUB", "id": "VHN-155290" }, { "date": "2023-11-07T00:00:00", "db": "VULMON", "id": "CVE-2019-3855" }, { "date": "2019-04-24T00:00:00", "db": "JVNDB", "id": "JVNDB-2019-002832" }, { "date": "2021-12-03T00:00:00", "db": "CNNVD", "id": "CNNVD-201903-634" }, { "date": "2023-11-07T03:10:14.793000", "db": "NVD", "id": "CVE-2019-3855" } ] }, "threat_type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/threat_type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "remote", "sources": [ { "db": "CNNVD", "id": "CNNVD-201903-634" } ], "trust": 0.6 }, "title": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/title#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "libssh2 Integer overflow vulnerability", "sources": [ { "db": "JVNDB", "id": "JVNDB-2019-002832" } ], "trust": 0.8 }, "type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "input validation error", 
"sources": [ { "db": "CNNVD", "id": "CNNVD-201903-634" } ], "trust": 0.6 } }
var-202109-1966
Vulnerability from variot
There's a flaw in urllib's AbstractBasicAuthHandler class. An attacker who controls a malicious HTTP server that an HTTP client (such as a web browser) connects to could trigger a Regular Expression Denial of Service (ReDoS) during an authentication request, via a specially crafted payload sent by the server to the client. The greatest threat posed by this flaw is to application availability. Python is an open-source, object-oriented programming language developed by the Python Software Foundation. The language is scalable, supports modules and packages, and runs on multiple platforms. A code issue vulnerability exists in Python due to a failure to properly handle RFCs. In Python 3's Lib/test/multibytecodec_support.py, the CJK codec tests call eval() on content retrieved via HTTP. (CVE-2020-27619) The package python/cpython is vulnerable to web cache poisoning via urllib.parse.parse_qsl and urllib.parse.parse_qs through a vector called parameter cloaking. When an attacker can separate query parameters using a semicolon (;), they can cause the request to be interpreted differently by the proxy (running with its default configuration) and by the server. This can result in malicious requests being cached as completely safe ones, because the proxy would usually not treat the semicolon as a separator and therefore would not include it in the cache key of an unkeyed parameter. An improperly handled HTTP response in the HTTP client code of Python may allow a remote attacker who controls the HTTP server to make the client script enter an infinite loop, consuming CPU time. (CVE-2021-3737) ftplib should not use the host from the PASV response. (CVE-2021-4189) A flaw was found in Python, specifically within the urllib.parse module. This module helps break Uniform Resource Locator (URL) strings into components. The issue is that the urlparse method does not sanitize input and allows characters like \r and \n in the URL path. 
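The ReDoS class of flaw described above works through catastrophic backtracking in a regular expression. The snippet below is a toy illustration of that mechanism only; the pattern is hypothetical and is not the actual regex from urllib's AbstractBasicAuthHandler. A nested quantifier makes a near-miss input exponentially more expensive to reject than a matching input is to accept.

```python
import re
import time

# Hypothetical backtracking-prone pattern (NOT the one from urllib):
# the nested quantifier (a+)+ lets the engine try every way of splitting
# a run of 'a's between the inner and outer groups before giving up.
evil = re.compile(r'^(a+)+$')

def match_time(s):
    """Time a single match attempt against the backtracking-prone regex."""
    start = time.perf_counter()
    evil.match(s)
    return time.perf_counter() - start

ok = match_time('a' * 22)          # matches: roughly linear work
bad = match_time('a' * 22 + '!')   # near-miss: ~2**22 backtracking steps
print(f'match: {ok:.6f}s, near-miss: {bad:.6f}s')
```

A malicious server exploiting CVE-2021-3733 plays the role of the near-miss input: each extra character it appends to the crafted payload roughly doubles the client's matching time.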
This flaw allows a malicious user to input a crafted URL, leading to injection attacks. (CVE-2022-0391). Summary:
The Migration Toolkit for Containers (MTC) 1.6.3 is now available. Description:
The Migration Toolkit for Containers (MTC) enables you to migrate Kubernetes resources, persistent volume data, and internal container images between OpenShift Container Platform clusters, using the MTC web console or the Kubernetes API. Bugs fixed (https://bugzilla.redhat.com/):
2019088 - "MigrationController" CR displays syntax error when unquiescing applications
2021666 - Route name longer than 63 characters causes direct volume migration to fail
2021668 - "MigrationController" CR ignores the "cluster_subdomain" value for direct volume migration routes
2022017 - CVE-2021-3948 mig-controller: incorrect namespaces handling may lead to not authorized usage of Migration Toolkit for Containers (MTC)
2024966 - Manifests not used by Operator Lifecycle Manager must be removed from the MTC 1.6 Operator image
2027196 - "migration-controller" pod goes into "CrashLoopBackoff" state if an invalid registry route is entered on the "Clusters" page of the web console
2027382 - "Copy oc describe/oc logs" window does not close automatically after timeout
2028841 - "rsync-client" container fails during direct volume migration with "Address family not supported by protocol" error
2031793 - "migration-controller" pod goes into "CrashLoopBackOff" state if "MigPlan" CR contains an invalid "includedResources" resource
2039852 - "migration-controller" pod goes into "CrashLoopBackOff" state if "MigPlan" CR contains an invalid "destMigClusterRef" or "srcMigClusterRef"
- ========================================================================== Ubuntu Security Notice USN-5083-1 September 16, 2021
python3.4, python3.5 vulnerabilities
A security issue affects these releases of Ubuntu and its derivatives:
- Ubuntu 16.04 ESM
- Ubuntu 14.04 ESM
Summary:
Several security issues were fixed in Python. An attacker could possibly use this issue to cause a denial of service. This issue only affected Ubuntu 16.04 ESM. (CVE-2021-3733)
It was discovered that Python incorrectly handled certain server responses. An attacker could possibly use this issue to cause a denial of service. (CVE-2021-3737)
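Until patched packages are installed, hangs of the CVE-2021-3737 kind can be bounded on the client side with socket timeouts. A minimal sketch follows; the 10-second value and the example.com URL are illustrative placeholders, not recommendations from this notice.

```python
import socket
import urllib.request

# Mitigation sketch for hang-style flaws such as CVE-2021-3737: bound
# all socket operations so a misbehaving server cannot stall the
# client forever. The timeout value here is an example.
socket.setdefaulttimeout(10)

# A per-request timeout overrides the global default (the request is
# not actually sent here; example.com is a placeholder):
req = urllib.request.Request("http://example.com/")
# urllib.request.urlopen(req, timeout=5)
```

This does not remove the underlying flaw; it only converts an indefinite hang into a bounded failure.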
Update instructions:
The problem can be corrected by updating your system to the following package versions:
Ubuntu 16.04 ESM:
  python3.5 3.5.2-2ubuntu0~16.04.13+esm1
  python3.5-minimal 3.5.2-2ubuntu0~16.04.13+esm1
Ubuntu 14.04 ESM:
  python3.4 3.4.3-1ubuntu1~14.04.7+esm11
  python3.4-minimal 3.4.3-1ubuntu1~14.04.7+esm11
In general, a standard system update will make all the necessary changes. -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
===================================================================== Red Hat Security Advisory
Synopsis: Moderate: OpenShift Container Platform 4.10.3 security update Advisory ID: RHSA-2022:0056-01 Product: Red Hat OpenShift Enterprise Advisory URL: https://access.redhat.com/errata/RHSA-2022:0056 Issue date: 2022-03-10 CVE Names: CVE-2014-3577 CVE-2016-10228 CVE-2017-14502 CVE-2018-20843 CVE-2018-1000858 CVE-2019-8625 CVE-2019-8710 CVE-2019-8720 CVE-2019-8743 CVE-2019-8764 CVE-2019-8766 CVE-2019-8769 CVE-2019-8771 CVE-2019-8782 CVE-2019-8783 CVE-2019-8808 CVE-2019-8811 CVE-2019-8812 CVE-2019-8813 CVE-2019-8814 CVE-2019-8815 CVE-2019-8816 CVE-2019-8819 CVE-2019-8820 CVE-2019-8823 CVE-2019-8835 CVE-2019-8844 CVE-2019-8846 CVE-2019-9169 CVE-2019-13050 CVE-2019-13627 CVE-2019-14889 CVE-2019-15903 CVE-2019-19906 CVE-2019-20454 CVE-2019-20807 CVE-2019-25013 CVE-2020-1730 CVE-2020-3862 CVE-2020-3864 CVE-2020-3865 CVE-2020-3867 CVE-2020-3868 CVE-2020-3885 CVE-2020-3894 CVE-2020-3895 CVE-2020-3897 CVE-2020-3899 CVE-2020-3900 CVE-2020-3901 CVE-2020-3902 CVE-2020-8927 CVE-2020-9802 CVE-2020-9803 CVE-2020-9805 CVE-2020-9806 CVE-2020-9807 CVE-2020-9843 CVE-2020-9850 CVE-2020-9862 CVE-2020-9893 CVE-2020-9894 CVE-2020-9895 CVE-2020-9915 CVE-2020-9925 CVE-2020-9952 CVE-2020-10018 CVE-2020-11793 CVE-2020-13434 CVE-2020-14391 CVE-2020-15358 CVE-2020-15503 CVE-2020-25660 CVE-2020-25677 CVE-2020-27618 CVE-2020-27781 CVE-2020-29361 CVE-2020-29362 CVE-2020-29363 CVE-2021-3121 CVE-2021-3326 CVE-2021-3449 CVE-2021-3450 CVE-2021-3516 CVE-2021-3517 CVE-2021-3518 CVE-2021-3520 CVE-2021-3521 CVE-2021-3537 CVE-2021-3541 CVE-2021-3733 CVE-2021-3749 CVE-2021-20305 CVE-2021-21684 CVE-2021-22946 CVE-2021-22947 CVE-2021-25215 CVE-2021-27218 CVE-2021-30666 CVE-2021-30761 CVE-2021-30762 CVE-2021-33928 CVE-2021-33929 CVE-2021-33930 CVE-2021-33938 CVE-2021-36222 CVE-2021-37750 CVE-2021-39226 CVE-2021-41190 CVE-2021-43813 CVE-2021-44716 CVE-2021-44717 CVE-2022-0532 CVE-2022-21673 CVE-2022-24407 =====================================================================
- Summary:
Red Hat OpenShift Container Platform release 4.10.3 is now available with updates to packages and images that fix several bugs and add enhancements.
Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Description:
Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.
This advisory contains the container images for Red Hat OpenShift Container Platform 4.10.3. See the following advisory for the RPM packages for this release:
https://access.redhat.com/errata/RHSA-2022:0055
Space precludes documenting all of the container images in this advisory. See the following Release Notes documentation, which will be updated shortly for this release, for details about these changes:
https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html
Security Fix(es):
- gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation (CVE-2021-3121)
- grafana: Snapshot authentication bypass (CVE-2021-39226)
- golang: net/http: limit growth of header canonicalization cache (CVE-2021-44716)
- nodejs-axios: Regular expression denial of service in trim function (CVE-2021-3749)
- golang: syscall: don't close fd 0 on ForkExec error (CVE-2021-44717)
- grafana: Forward OAuth Identity Token can allow users to access some data sources (CVE-2022-21673)
- grafana: directory traversal vulnerability (CVE-2021-43813)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
You may download the oc tool and use it to inspect release image metadata as follows:
(For x86_64 architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.3-x86_64
The image digest is sha256:7ffe4cd612be27e355a640e5eec5cd8f923c1400d969fd590f806cffdaabcc56
(For s390x architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.3-s390x
The image digest is sha256:4cf21a9399da1ce8427246f251ae5dedacfc8c746d2345f9cfe039ed9eda3e69
(For ppc64le architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.3-ppc64le
The image digest is sha256:4ee571da1edf59dfee4473aa4604aba63c224bf8e6bcf57d048305babbbde93c
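When pinning one of the digests above in automation, a quick sanity check on its form catches copy/paste truncation. A sketch (the regex and variable names are ours, not part of the oc tooling; the digest value is the x86_64 one quoted above):

```python
import re

# Sanity-check sketch: verify a release image digest copied from this
# advisory has the canonical sha256 form (64 lowercase hex digits)
# before pinning it, catching truncated copy/paste.
DIGEST_RE = re.compile(r"^sha256:[0-9a-f]{64}$")

digest = "sha256:7ffe4cd612be27e355a640e5eec5cd8f923c1400d969fd590f806cffdaabcc56"
assert DIGEST_RE.match(digest)
assert not DIGEST_RE.match("sha256:7ffe4cd")  # truncated digest rejected
```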
All OpenShift Container Platform 4.10 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift Console or the CLI oc command. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html
- Solution:
For OpenShift Container Platform 4.10 see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this asynchronous errata update:
https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html
Details on how to access this content are available at https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html
- Bugs fixed (https://bugzilla.redhat.com/):
1808240 - Always return metrics value for pods under the user's namespace
1815189 - feature flagged UI does not always become available after operator installation
1825034 - e2e: Mock CSI tests fail on IBM ROKS clusters
1826225 - edge terminated h2 (gRPC) connections need a haproxy template change to work correctly
1860774 - csr for vSphere egress nodes were not approved automatically during cert renewal
1878106 - token inactivity timeout is not shortened after oauthclient/oauth config values are lowered
1878925 - 'oc adm upgrade --to ...' rejects versions which occur only in history, while the cluster-version operator supports history fallback
1880738 - origin e2e test deletes original worker
1882983 - oVirt csi driver should refuse to provision RWX and ROX PV
1886450 - Keepalived router id check not documented for RHV/VMware IPI
1889488 - The metrics endpoint for the Scheduler is not protected by RBAC
1894431 - Router pods fail to boot if the SSL certificate applied is missing an empty line at the bottom
1896474 - Path based routing is broken for some combinations
1897431 - CIDR support for additional network attachment with the bridge CNI plug-in
1903408 - NodePort externalTrafficPolicy does not work for ovn-kubernetes
1907433 - Excessive logging in image operator
1909906 - The router fails with PANIC error when stats port already in use
1911173 - [MSTR-998] Many charts' legend names show {{}} instead of words
1914053 - pods assigned with Multus whereabouts IP get stuck in ContainerCreating state after node rebooting.
1916169 - a reboot while MCO is applying changes leaves the node in undesirable state and MCP looks fine (UPDATED=true)
1917893 - [ovirt] install fails: due to terraform error "Cannot attach Virtual Disk: Disk is locked" on vm resource
1921627 - GCP UPI installation failed due to exceeding gcp limitation of instance group name
1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation
1926522 - oc adm catalog does not clean temporary files
1927478 - Default CatalogSources deployed by marketplace do not have toleration for tainted nodes.
1928141 - kube-storage-version-migrator constantly reporting type "Upgradeable" status Unknown
1928285 - [LSO][OCS][arbiter] OCP Console shows no results while in fact underlying setup of LSO localvolumeset and it's storageclass is not yet finished, confusing users
1931594 - [sig-cli] oc --request-timeout works as expected fails frequently on s390x
1933847 - Prometheus goes unavailable (both instances down) during 4.8 upgrade
1937085 - RHV UPI inventory playbook missing guarantee_memory
1937196 - [aws ebs csi driver] events for block volume expansion may cause confusion
1938236 - vsphere-problem-detector does not support overriding log levels via storage CR
1939401 - missed labels for CMO/openshift-state-metric/telemeter-client/thanos-querier pods
1939435 - Setting an IPv6 address in noProxy field causes error in openshift installer
1939552 - [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
1942913 - ThanosSidecarUnhealthy isn't resilient to WAL replays.
1943363 - [ovn] CNO should gracefully terminate ovn-northd
1945274 - ostree-finalize-staged.service failed while upgrading a rhcos node to 4.6.17
1948080 - authentication should not set Available=False APIServices_Error with 503s
1949262 - Prometheus Statefulsets should have 2 replicas and hard affinity set
1949672 - [GCP] Update 4.8 UPI template to match ignition version: 3.2.0
1950827 - [LSO] localvolumediscoveryresult name is not friendly to customer
1952576 - csv_succeeded metric not present in olm-operator for all successful CSVs
1953264 - "remote error: tls: bad certificate" logs in prometheus-operator container
1955300 - Machine config operator reports unavailable for 23m during upgrade
1955489 - Alertmanager Statefulsets should have 2 replicas and hard affinity set
1955490 - Thanos ruler Statefulsets should have 2 replicas and hard affinity set
1955544 - [IPI][OSP] densed master-only installation with 0 workers fails due to missing worker security group on masters
1956496 - Needs SR-IOV Docs Upstream
1956739 - Permission for authorized_keys for core user changes from core user to root when changed the pull secret
1956776 - [vSphere] Installer should do pre-check to ensure user-provided network name is valid
1956964 - upload a boot-source to OpenShift virtualization using the console
1957547 - [RFE]VM name is not auto filled in dev console
1958349 - ovn-controller doesn't release the memory after cluster-density run
1959352 - [scale] failed to get pod annotation: timed out waiting for annotations
1960378 - icsp allows mirroring of registry root - install-config imageContentSources does not
1960674 - Broken test: [sig-imageregistry][Serial][Suite:openshift/registry/serial] Image signature workflow can push a signed image to openshift registry and verify it [Suite:openshift/conformance/serial]
1961317 - storage ClusterOperator does not declare ClusterRoleBindings in relatedObjects
1961391 - String updates
1961509 - DHCP daemon pod should have CPU and memory requests set but not limits
1962066 - Edit machine/machineset specs not working
1962206 - openshift-multus/dhcp-daemon set should meet platform requirements for update strategy that have maxUnavailable update of 10 or 33 percent
1963053 - oc whoami --show-console should show the web console URL, not the server api URL
1964112 - route SimpleAllocationPlugin: host name validation errors: spec.host: Invalid value: ... must be no more than 63 characters
1964327 - Support containers with name:tag@digest
1964789 - Send keys and disconnect does not work for VNC console
1965368 - ClusterQuotaAdmission received non-meta object - message constantly reported in OpenShift Container Platform 4.7
1966445 - Unmasking a service doesn't work if it masked using MCO
1966477 - Use GA version in KAS/OAS/OauthAS to avoid: "audit.k8s.io/v1beta1" is deprecated and will be removed in a future release, use "audit.k8s.io/v1" instead
1966521 - kube-proxy's userspace implementation consumes excessive CPU
1968364 - [Azure] when using ssh type ed25519 bootstrap fails to come up
1970021 - nmstate does not persist its configuration due to overlay systemd-connections-merged mount
1970218 - MCO writes incorrect file contents if compression field is specified
1970331 - [sig-auth][Feature:SCC][Early] should not have pod creation failures during install [Suite:openshift/conformance/parallel]
1970805 - Cannot create build when docker image url contains dir structure
1972033 - [azure] PV region node affinity is failure-domain.beta.kubernetes.io instead of topology.kubernetes.io
1972827 - image registry does not remain available during upgrade
1972962 - Should set the minimum value for the --max-icsp-size flag of oc adm catalog mirror
1973447 - ovn-dbchecker peak memory spikes to ~500MiB during cluster-density run
1975826 - ovn-kubernetes host directed traffic cannot be offloaded as CT zone 64000 is not established
1976301 - [ci] e2e-azure-upi is permafailing
1976399 - During the upgrade from OpenShift 4.5 to OpenShift 4.6 the election timers for the OVN north and south databases did not change.
1976674 - CCO didn't set Upgradeable to False when cco mode is configured to Manual on azure platform
1976894 - Unidling a StatefulSet does not work as expected
1977319 - [Hive] Remove stale cruft installed by CVO in earlier releases
1977414 - Build Config timed out waiting for condition 400: Bad Request
1977929 - [RFE] Display Network Attachment Definitions from openshift-multus namespace during OCS deployment via UI using Multus
1978528 - systemd-coredump started and failed intermittently for unknown reasons
1978581 - machine-config-operator: remove runlevel from mco namespace
1979562 - Cluster operators: don't show messages when neither progressing, degraded or unavailable
1979962 - AWS SDN Network Stress tests have not passed in 4.9 release-openshift-origin-installer-e2e-aws-sdn-network-stress-4.9
1979966 - OCP builds always fail when run on RHEL7 nodes
1981396 - Deleting pool inside pool page the pool stays in Ready phase in the heading
1981549 - Machine-config daemon does not recover from broken Proxy configuration
1981867 - [sig-cli] oc explain should contain proper fields description for special types [Suite:openshift/conformance/parallel]
1981941 - Terraform upgrade required in openshift-installer to resolve multiple issues
1982063 - 'Control Plane' is not translated in Simplified Chinese language in Home->Overview page
1982498 - Default registry credential path should be adjusted to use containers/auth.json for oc commands
1982662 - Workloads - DaemonSets - Add storage: i18n misses
1982726 - kube-apiserver audit logs show a lot of 404 errors for DELETE "/secrets/encryption-config" on single node clusters
1983758 - upgrades are failing on disruptive tests
1983964 - Need Device plugin configuration for the NIC "needVhostNet" & "isRdma"
1984592 - global pull secret not working in OCP4.7.4+ for additional private registries
1985073 - new-in-4.8 ExtremelyHighIndividualControlPlaneCPU fires on some GCP update jobs
1985486 - Cluster Proxy not used during installation on OSP with Kuryr
1985724 - VM Details Page missing translations
1985838 - [OVN] CNO exportNetworkFlows does not clear collectors when deleted
1985933 - Downstream image registry recommendation
1985965 - oVirt CSI driver does not report volume stats
1986216 - [scale] SNO: Slow Pod recovery due to "timed out waiting for OVS port binding"
1986237 - "MachineNotYetDeleted" in Pending state , alert not fired
1986239 - crictl create fails with "PID namespace requested, but sandbox infra container invalid"
1986302 - console continues to fetch prometheus alert and silences for normal user
1986314 - Current MTV installation for KubeVirt import flow creates unusable Forklift UI
1986338 - error creating list of resources in Import YAML
1986502 - yaml multi file dnd duplicates previous dragged files
1986819 - fix string typos for hot-plug disks
1987044 - [OCPV48] Shutoff VM is being shown as "Starting" in WebUI when using spec.runStrategy Manual/RerunOnFailure
1987136 - Declare operatorframework.io/arch. labels for all operators
1987257 - Go-http-client user-agent being used for oc adm mirror requests
1987263 - fsSpaceFillingUpWarningThreshold not aligned to Kubernetes Garbage Collection Threshold
1987445 - MetalLB integration: All gateway routers in the cluster answer ARP requests for LoadBalancer services IP
1988406 - SSH key dropped when selecting "Customize virtual machine" in UI
1988440 - Network operator changes ovnkube-config too early causing ovnkube-master pods to crashloop during cluster upgrade
1988483 - Azure drop ICMP need to frag FRAG when using OVN: openshift-apiserver becomes False after env runs some time due to communication between one master to pods on another master fails with "Unable to connect to the server"
1988879 - Virtual media based deployment fails on Dell servers due to pending Lifecycle Controller jobs
1989438 - expected replicas is wrong
1989502 - Developer Catalog is disappearing after short time
1989843 - 'More' and 'Show Less' functions are not translated on several page
1990014 - oc debug does not work for Windows pods
1990190 - e2e testing failed with basic manifest: reason/ExternalProvisioning waiting for a volume to be created
1990193 - 'more' and 'Show Less' is not being translated on Home -> Search page
1990255 - Partial or all of the Nodes/StorageClasses don't appear back on UI after text is removed from search bar
1990489 - etcdHighNumberOfFailedGRPCRequests fires only on metal env in CI
1990506 - Missing udev rules in initramfs for /dev/disk/by-id/scsi- symlinks
1990556 - get-resources.sh doesn't honor the no_proxy settings even with no_proxy var
1990625 - Ironic agent registers with SLAAC address with privacy-stable
1990635 - CVO does not recognize the channel change if desired version and channel changed at the same time
1991067 - github.com can not be resolved inside pods where cluster is running on openstack.
1991573 - Enable typescript strictNullCheck on network-policies files
1991641 - Baremetal Cluster Operator still Available After Delete Provisioning
1991770 - The logLevel and operatorLogLevel values do not work with Cloud Credential Operator
1991819 - Misspelled word "ocurred" in oc inspect cmd
1991942 - Alignment and spacing fixes
1992414 - Two rootdisks show on storage step if 'This is a CD-ROM boot source' is checked
1992453 - The configMap failed to save on VM environment tab
1992466 - The button 'Save' and 'Reload' are not translated on vm environment tab
1992475 - The button 'Open console in New Window' and 'Disconnect' are not translated on vm console tab
1992509 - Could not customize boot source due to source PVC not found
1992541 - all the alert rules' annotations "summary" and "description" should comply with the OpenShift alerting guidelines
1992580 - storageProfile should stay with the same value by check/uncheck the apply button
1992592 - list-type missing in oauth.config.openshift.io for identityProviders breaking Server Side Apply
1992777 - [IBMCLOUD] Default "ibm_iam_authorization_policy" is not working as expected in all scenarios
1993364 - cluster destruction fails to remove router in BYON with Kuryr as primary network (even after BZ 1940159 got fixed)
1993376 - periodic-ci-openshift-release-master-ci-4.6-upgrade-from-stable-4.5-e2e-azure-upgrade is permfailing
1994094 - Some hardcodes are detected at the code level in OpenShift console components
1994142 - Missing required cloud config fields for IBM Cloud
1994733 - MetalLB: IP address is not assigned to service if there is duplicate IP address in two address pools
1995021 - resolv.conf and corefile sync slows down/stops after keepalived container restart
1995335 - [SCALE] ovnkube CNI: remove ovs flows check
1995493 - Add Secret to workload button and Actions button are not aligned on secret details page
1995531 - Create RDO-based Ironic image to be promoted to OKD
1995545 - Project drop-down amalgamates inside main screen while creating storage system for odf-operator
1995887 - [OVN]After reboot egress node, lr-policy-list was not correct, some duplicate records or missed internal IPs
1995924 - CMO should report Upgradeable: false when HA workload is incorrectly spread
1996023 - kubernetes.io/hostname values are larger than filter when create localvolumeset from webconsole
1996108 - Allow backwards compatibility of shared gateway mode to inject host-based routes into OVN
1996624 - 100% of the cco-metrics/cco-metrics targets in openshift-cloud-credential-operator namespace are down
1996630 - Fail to delete the first Authorized SSH Key input box on Advanced page
1996647 - Provide more useful degraded message in auth operator on DNS errors
1996736 - Large number of 501 lr-policies in INCI2 env
1996886 - timedout waiting for flows during pod creation and ovn-controller pegged on worker nodes
1996916 - Special Resource Operator(SRO) - Fail to deploy simple-kmod on GCP
1996928 - Enable default operator indexes on ARM
1997028 - prometheus-operator update removes env var support for thanos-sidecar
1997059 - Failed to create cluster in AWS us-east-1 region due to a local zone is used
1997226 - Ingresscontroller reconcilations failing but not shown in operator logs or status of ingresscontroller.
1997245 - "Subscription already exists in openshift-storage namespace" error message is seen while installing odf-operator via UI
1997269 - Have to refresh console to install kube-descheduler
1997478 - Storage operator is not available after reboot cluster instances
1997509 - flake: [sig-cli] oc builds new-build [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
1997967 - storageClass is not reserved from default wizard to customize wizard
1998035 - openstack IPI CI: custom var-lib-etcd.mount (ramdisk) unit is racing due to incomplete After/Before order
1998038 - [e2e][automation] add tests for UI for VM disk hot-plug
1998087 - Fix CephHealthCheck wrapping contents and add data-tests for HealthItem and SecondaryStatus
1998174 - Create storageclass gp3-csi after install ocp cluster on aws
1998183 - "r: Bad Gateway" info is improper
1998235 - Firefox warning: Cookie “csrf-token” will be soon rejected
1998377 - Filesystem table head is not full displayed in disk tab
1998378 - Virtual Machine is 'Not available' in Home -> Overview -> Cluster inventory
1998519 - Add fstype when create localvolumeset instance on web console
1998951 - Keepalived conf ingress peer on in Dual stack cluster contains both IPv6 and IPv4 addresses
1999076 - [UI] Page Not Found error when clicking on Storage link provided in Overview page
1999079 - creating pods before sriovnetworknodepolicy sync up succeed will cause node unschedulable
1999091 - Console update toast notification can appear multiple times
1999133 - removing and recreating static pod manifest leaves pod in error state
1999246 - .indexignore is not ingore when oc command load dc configuration
1999250 - ArgoCD in GitOps operator can't manage namespaces
1999255 - ovnkube-node always crashes out the first time it starts
1999261 - ovnkube-node log spam (and security token leak?)
1999309 - While installing odf-operator via UI, web console update pop-up navigates to OperatorHub -> Operator Installation page
1999314 - console-operator is slow to mark Degraded as False once console starts working
1999425 - kube-apiserver with "[SHOULD NOT HAPPEN] failed to update managedFields" err="failed to convert new object (machine.openshift.io/v1beta1, Kind=MachineHealthCheck)
1999556 - "master" pool should be updated before the CVO reports available at the new version occurred
1999578 - AWS EFS CSI tests are constantly failing
1999603 - Memory Manager allows Guaranteed QoS Pod with hugepages requested is exactly equal to the left over Hugepages
1999619 - cloudinit is malformatted if a user sets a password during VM creation flow
1999621 - Empty ssh_authorized_keys entry is added to VM's cloudinit if created from a customize flow
1999649 - MetalLB: Only one type of IP address can be assigned to service on dual stack cluster from a address pool that have both IPv4 and IPv6 addresses defined
1999668 - openshift-install destroy cluster panic's when given invalid credentials to cloud provider (Azure Stack Hub)
1999734 - IBM Cloud CIS Instance CRN missing in infrastructure manifest/resource
1999771 - revert "force cert rotation every couple days for development" in 4.10
1999784 - CVE-2021-3749 nodejs-axios: Regular expression denial of service in trim function
1999796 - Openshift Console Helm tab is not showing helm releases in a namespace when there is high number of deployments in the same namespace.
1999836 - Admin web-console inconsistent status summary of sparse ClusterOperator conditions
1999903 - Click "This is a CD-ROM boot source" ticking "Use template size PVC" on pvc upload form
1999983 - No way to clear upload error from template boot source
2000081 - [IPI baremetal] The metal3 pod failed to restart when switching from Disabled to Managed provisioning without specifying provisioningInterface parameter
2000096 - Git URL is not re-validated on edit build-config form reload
2000216 - Successfully imported ImageStreams are not resolved in DeploymentConfig
2000236 - Confusing usage message from dynkeepalived CLI
2000268 - Mark cluster unupgradable if vcenter, esxi versions or HW versions are unsupported
2000430 - bump cluster-api-provider-ovirt version in installer
2000450 - 4.10: Enable static PV multi-az test
2000490 - All critical alerts shipped by CMO should have links to a runbook
2000521 - Kube-apiserver CO degraded due to failed conditional check (ConfigObservationDegraded)
2000573 - Incorrect StorageCluster CR created and ODF cluster getting installed with 2 Zone OCP cluster
2000628 - ibm-flashsystem-storage-storagesystem got created without any warning even when the attempt was cancelled
2000651 - ImageStreamTag alias results in wrong tag and invalid link in Web Console
2000754 - IPerf2 tests should be lower
2000846 - Structure logs in the entire codebase of Local Storage Operator
2000872 - [tracker] container is not able to list on some directories within the nfs after upgrade to 4.7.24
2000877 - OCP ignores STOPSIGNAL in Dockerfile and sends SIGTERM
2000938 - CVO does not respect changes to a Deployment strategy
2000963 - 'Inline-volume (default fs)] volumes should store data' tests are failing on OKD with updated selinux-policy
2001008 - [MachineSets] CloneMode defaults to linkedClone, but I don't have snapshot and should be fullClone
2001240 - Remove response headers for downloads of binaries from OpenShift WebConsole
2001295 - Remove openshift:kubevirt-machine-controllers decleration from machine-api
2001317 - OCP Platform Quota Check - Inaccurate MissingQuota error
2001337 - Details Card in ODF Dashboard mentions OCS
2001339 - fix text content hotplug
2001413 - [e2e][automation] add/delete nic and disk to template
2001441 - Test: oc adm must-gather runs successfully for audit logs - fail due to startup log
2001442 - Empty termination.log file for the kube-apiserver has too permissive mode
2001479 - IBM Cloud DNS unable to create/update records
2001566 - Enable alerts for prometheus operator in UWM
2001575 - Clicking on the perspective switcher shows a white page with loader
2001577 - Quick search placeholder is not displayed properly when the search string is removed
2001578 - [e2e][automation] add tests for vm dashboard tab
2001605 - PVs remain in Released state for a long time after the claim is deleted
2001617 - BucketClass Creation is restricted on 1st page but enabled using side navigation options
2001620 - Cluster becomes degraded if it can't talk to Manila
2001760 - While creating 'Backing Store', 'Bucket Class', 'Namespace Store' user is navigated to 'Installed Operators' page after clicking on ODF
2001761 - Unable to apply cluster operator storage for SNO on GCP platform.
2001765 - Some error message in the log of diskmaker-manager caused confusion
2001784 - show loading page before final results instead of showing a transient message No log files exist
2001804 - Reload feature on Environment section in Build Config form does not work properly
2001810 - cluster admin unable to view BuildConfigs in all namespaces
2001817 - Failed to load RoleBindings list that will lead to ‘Role name’ is not able to be selected on Create RoleBinding page as well
2001823 - OCM controller must update operator status
2001825 - [SNO]ingress/authentication clusteroperator degraded when enable ccm from start
2001835 - Could not select image tag version when create app from dev console
2001855 - Add capacity is disabled for ocs-storagecluster
2001856 - Repeating event: MissingVersion no image found for operand pod
2001959 - Side nav list borders don't extend to edges of container
2002007 - Layout issue on "Something went wrong" page
2002010 - ovn-kube may never attempt to retry a pod creation
2002012 - Cannot change volume mode when cloning a VM from a template
2002027 - Two instances of Dotnet helm chart show as one in topology
2002075 - opm render does not automatically pulling in the image(s) used in the deployments
2002121 - [OVN] upgrades failed for IPI OSP16 OVN IPSec cluster
2002125 - Network policy details page heading should be updated to Network Policy details
2002133 - [e2e][automation] add support/virtualization and improve deleteResource
2002134 - [e2e][automation] add test to verify vm details tab
2002215 - Multipath day1 not working on s390x
2002238 - Image stream tag is not persisted when switching from yaml to form editor
2002262 - [vSphere] Incorrect user agent in vCenter sessions list
2002266 - SinkBinding create form doesn't allow to use subject name, instead of label selector
2002276 - OLM fails to upgrade operators immediately
2002300 - Altering the Schedule Profile configurations doesn't affect the placement of the pods
2002354 - Missing DU configuration "Done" status reporting during ZTP flow
2002362 - Dynamic Plugin - ConsoleRemotePlugin for webpack doesn't use commonjs
2002368 - samples should not go degraded when image allowedRegistries blocks imagestream creation
2002372 - Pod creation failed due to mismatched pod IP address in CNI and OVN
2002397 - Resources search is inconsistent
2002434 - CRI-O leaks some children PIDs
2002443 - Getting undefined error on create local volume set page
2002461 - DNS operator performs spurious updates in response to API's defaulting of service's internalTrafficPolicy
2002504 - When the openshift-cluster-storage-operator is degraded because of "VSphereProblemDetectorController_SyncError", the insights operator is not sending the logs from all pods.
2002559 - User preference for topology list view does not follow when a new namespace is created
2002567 - Upstream SR-IOV worker doc has broken links
2002588 - Change text to be sentence case to align with PF
2002657 - ovn-kube egress IP monitoring is using a random port over the node network
2002713 - CNO: OVN logs should have millisecond resolution
2002748 - [ICNI2] 'ErrorAddingLogicalPort' failed to handle external GW check: timeout waiting for namespace event
2002759 - Custom profile should not allow not including at least one required HTTP2 ciphersuite
2002763 - Two storage systems getting created with external mode RHCS
2002808 - KCM does not use web identity credentials
2002834 - Cluster-version operator does not remove unrecognized volume mounts
2002896 - Incorrect result return when user filter data by name on search page
2002950 - Why spec.containers.command is not created with "oc create deploymentconfig --image= -- "
2003096 - [e2e][automation] check bootsource URL is displaying on review step
2003113 - OpenShift Baremetal IPI installer uses first three defined nodes under hosts in install-config for master nodes instead of filtering the hosts with the master role
2003120 - CI: Uncaught error with ResizeObserver on operand details page
2003145 - Duplicate operand tab titles causes "two children with the same key" warning
2003164 - OLM, fatal error: concurrent map writes
2003178 - [FLAKE][knative] The UI doesn't show updated traffic distribution after accepting the form
2003193 - Kubelet/crio leaks netns and veth ports in the host
2003195 - OVN CNI should ensure host veths are removed
2003204 - Jenkins all new container images (openshift4/ose-jenkins) not supporting '-e JENKINS_PASSWORD=password' ENV which was working for old container images
2003206 - Namespace stuck terminating: Failed to delete all resource types, 1 remaining: unexpected items still remain in namespace
2003239 - "[sig-builds][Feature:Builds][Slow] can use private repositories as build input" tests fail outside of CI
2003244 - Revert libovsdb client code
2003251 - Patternfly components with list elements have list item bullets when they should not
2003252 - "[sig-builds][Feature:Builds][Slow] starting a build using CLI start-build test context override environment BUILD_LOGLEVEL in buildconfig" tests do not work as expected outside of CI
2003269 - Rejected pods should be filtered from admission regression
2003357 - QE- Removing the epic tags for gherkin tags related to 4.9 Release
2003426 - [e2e][automation] add test for vm details bootorder
2003496 - [e2e][automation] add test for vm resources requirment settings
2003641 - All metal ipi jobs are failing in 4.10
2003651 - ODF4.9+LSO4.8 installation via UI, StorageCluster move to error state
2003655 - [IPI ON-PREM] Keepalived chk_default_ingress track script failed even though default router pod runs on node
2003683 - Samples operator is panicking in CI
2003711 - [UI] Empty file ceph-external-cluster-details-exporter.py downloaded from external cluster "Connection Details" page
2003715 - Error on creating local volume set after selection of the volume mode
2003743 - Remove workaround keeping /boot RW for kdump support
2003775 - etcd pod on CrashLoopBackOff after master replacement procedure
2003788 - CSR reconciler report error constantly when BYOH CSR approved by other Approver
2003792 - Monitoring metrics query graph flyover panel is useless
2003808 - Add Sprint 207 translations
2003845 - Project admin cannot access image vulnerabilities view
2003859 - sdn emits events with garbage messages
2003896 - (release-4.10) ApiRequestCounts conditional gatherer
2004009 - 4.10: Fix multi-az zone scheduling e2e for 5 control plane replicas
2004051 - CMO can report as being Degraded while node-exporter is deployed on all nodes
2004059 - [e2e][automation] fix current tests for downstream
2004060 - Trying to use basic spring boot sample causes crash on Firefox
2004101 - [UI] When creating storageSystem deployment type dropdown under advanced setting doesn't close after selection
2004127 - [flake] openshift-controller-manager event reason/SuccessfulDelete occurs too frequently
2004203 - build config's created prior to 4.8 with image change triggers can result in trigger storm in OCM/openshift-apiserver
2004313 - [RHOCP 4.9.0-rc.0] Failing to deploy Azure cluster from the macOS installer - ignition_bootstrap.ign: no such file or directory
2004449 - Boot option recovery menu prevents image boot
2004451 - The backup filename displayed in the RecentBackup message is incorrect
2004459 - QE - Modified the AddFlow gherkin scripts and automation scripts
2004508 - TuneD issues with the recent ConfigParser changes.
2004510 - openshift-gitops operator hooks gets unauthorized (401) errors during jobs executions
2004542 - [osp][octavia lb] cannot create LoadBalancer type svcs
2004578 - Monitoring and node labels missing for an external storage platform
2004585 - prometheus-k8s-0 cpu usage keeps increasing for the first 3 days
2004596 - [4.10] Bootimage bump tracker
2004597 - Duplicate ramdisk log containers running
2004600 - Duplicate ramdisk log containers running
2004609 - output of "crictl inspectp" is not complete
2004625 - BMC credentials could be logged if they change
2004632 - When LE takes a large amount of time, multiple whereabouts are seen
2004721 - ptp/worker custom threshold doesn't change ptp events threshold
2004736 - [knative] Create button on new Broker form is inactive despite form being filled
2004796 - [e2e][automation] add test for vm scheduling policy
2004814 - (release-4.10) OCM controller - change type of the etc-pki-entitlement secret to opaque
2004870 - [External Mode] Insufficient spacing along y-axis in RGW Latency Performance Card
2004901 - [e2e][automation] improve kubevirt devconsole tests
2004962 - Console frontend job consuming too much CPU in CI
2005014 - state of ODF StorageSystem is misreported during installation or uninstallation
2005052 - Adding a MachineSet selector matchLabel causes orphaned Machines
2005179 - pods status filter is not taking effect
2005182 - sync list of deprecated apis about to be removed
2005282 - Storage cluster name is given as title in StorageSystem details page
2005355 - setuptools 58 makes Kuryr CI fail
2005407 - ClusterNotUpgradeable Alert should be set to Severity Info
2005415 - PTP operator with sidecar api configured throws bind: address already in use
2005507 - SNO spoke cluster failing to reach coreos.live.rootfs_url is missing url in console
2005554 - The switch status of the button "Show default project" is not revealed correctly in code
2005581 - 4.8.12 to 4.9 upgrade hung due to cluster-version-operator pod CrashLoopBackOff: error creating clients: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
2005761 - QE - Implementing crw-basic feature file
2005783 - Fix accessibility issues in the "Internal" and "Internal - Attached Mode" Installation Flow
2005811 - vSphere Problem Detector operator - ServerFaultCode: InvalidProperty
2005854 - SSH NodePort service is created for each VM
2005901 - KS, KCM and KA going Degraded during master nodes upgrade
2005902 - Current UI flow for MCG only deployment is confusing and doesn't reciprocate any message to the end-user
2005926 - PTP operator NodeOutOfPTPSync rule is using max offset from the master instead of openshift_ptp_clock_state metrics
2005971 - Change telemeter to report the Application Services product usage metrics
2005997 - SELinux domain container_logreader_t does not have a policy to follow sym links for log files
2006025 - Description to use an existing StorageClass while creating StorageSystem needs to be re-phrased
2006060 - ocs-storagecluster-storagesystem details are missing on UI for MCG Only and MCG only in LSO mode deployment types
2006101 - Power off fails for drivers that don't support Soft power off
2006243 - Metal IPI upgrade jobs are running out of disk space
2006291 - bootstrapProvisioningIP set incorrectly when provisioningNetworkCIDR doesn't use the 0th address
2006308 - Backing Store YAML tab on click displays a blank screen on UI
2006325 - Multicast is broken across nodes
2006329 - Console only allows Web Terminal Operator to be installed in OpenShift Operators
2006364 - IBM Cloud: Set resourceGroupId for resourceGroups, not simply resource
2006561 - [sig-instrumentation] Prometheus when installed on the cluster shouldn't have failing rules evaluation [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2006690 - OS boot failure "x64 Exception Type 06 - Invalid Opcode Exception"
2006714 - add retry for etcd errors in kube-apiserver
2006767 - KubePodCrashLooping may not fire
2006803 - Set CoreDNS cache entries for forwarded zones
2006861 - Add Sprint 207 part 2 translations
2006945 - race condition can cause crashlooping bootstrap kube-apiserver in cluster-bootstrap
2006947 - e2e-aws-proxy for 4.10 is permafailing with samples operator errors
2006975 - clusteroperator/etcd status condition should not change reasons frequently due to EtcdEndpointsDegraded
2007085 - Intermittent failure mounting /run/media/iso when booting live ISO from USB stick
2007136 - Creation of BackingStore, BucketClass, NamespaceStore fails
2007271 - CI Integration for Knative test cases
2007289 - kubevirt tests are failing in CI
2007322 - Devfile/Dockerfile import does not work for unsupported git host
2007328 - Updated patternfly to v4.125.3 and pf.quickstarts to v1.2.3.
2007379 - Events are not generated for master offset for ordinary clock
2007443 - [ICNI 2.0] Loadbalancer pods do not establish BFD sessions with all workers that host pods for the routed namespace
2007455 - cluster-etcd-operator: render command should fail if machineCidr contains reserved address
2007495 - Large label value for the metric kubelet_started_pods_errors_total with label message when there is a error
2007522 - No new local-storage-operator-metadata-container is build for 4.10
2007551 - No new ose-aws-efs-csi-driver-operator-bundle-container is build for 4.10
2007580 - Azure cilium installs are failing e2e tests
2007581 - Too many haproxy processes in default-router pod causing high load average after upgrade from v4.8.3 to v4.8.10
2007677 - Regression: core container io performance metrics are missing for pod, qos, and system slices on nodes
2007692 - 4.9 "old-rhcos" jobs are permafailing with storage test failures
2007710 - ci/prow/e2e-agnostic-cmd job is failing on prow
2007757 - must-gather extracts imagestreams in the "openshift" namespace, but not Templates
2007802 - AWS machine actuator get stuck if machine is completely missing
2008096 - TestAWSFinalizerDeleteS3Bucket sometimes fails to teardown operator
2008119 - The serviceAccountIssuer field on Authentication CR is reset to “” during installation process
2008151 - Topology breaks on clicking in empty state
2008185 - Console operator go.mod should use go 1.16.version
2008201 - openstack-az job is failing on haproxy idle test
2008207 - vsphere CSI driver doesn't set resource limits
2008223 - gather_audit_logs: fix oc command line to get the current audit profile
2008235 - The Save button in the Edit DC form remains disabled
2008256 - Update Internationalization README with scope info
2008321 - Add correct documentation link for MON_DISK_LOW
2008462 - Disable PodSecurity feature gate for 4.10
2008490 - Backing store details page does not contain all the kebab actions.
2008521 - gcp-hostname service should correct invalid search entries in resolv.conf
2008532 - CreateContainerConfigError:: failed to prepare subPath for volumeMount
2008539 - Registry doesn't fall back to secondary ImageContentSourcePolicy Mirror
2008540 - HighlyAvailableWorkloadIncorrectlySpread always fires on upgrade on cluster with two workers
2008599 - Azure Stack UPI does not have Internal Load Balancer
2008612 - Plugin asset proxy does not pass through browser cache headers
2008712 - VPA webhook timeout prevents all pods from starting
2008733 - kube-scheduler: exposed /debug/pprof port
2008911 - Prometheus repeatedly scaling prometheus-operator replica set
2008926 - [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources [Serial] [Suite:openshift/conformance/serial]
2008987 - OpenShift SDN Hosted Egress IP's are not being scheduled to nodes after upgrade to 4.8.12
2009055 - Instances of OCS to be replaced with ODF on UI
2009078 - NetworkPodsCrashLooping alerts in upgrade CI jobs
2009083 - opm blocks pruning of existing bundles during add
2009111 - [IPI-on-GCP] 'Install a cluster with nested virtualization enabled' failed due to unable to launch compute instances
2009131 - [e2e][automation] add more test about vmi
2009148 - [e2e][automation] test vm nic presets and options
2009233 - ACM policy object generated by PolicyGen conflicting with OLM Operator
2009253 - [BM] [IPI] [DualStack] apiVIP and ingressVIP should be of the same primary IP family
2009298 - Service created for VM SSH access is not owned by the VM and thus is not deleted if the VM is deleted
2009384 - UI changes to support BindableKinds CRD changes
2009404 - ovnkube-node pod enters CrashLoopBackOff after OVN_IMAGE is swapped
2009424 - Deployment upgrade is failing availability check
2009454 - Change web terminal subscription permissions from get to list
2009465 - container-selinux should come from rhel8-appstream
2009514 - Bump OVS to 2.16-15
2009555 - Supermicro X11 system not booting from vMedia with AI
2009623 - Console: Observe > Metrics page: Table pagination menu shows bullet points
2009664 - Git Import: Edit of knative service doesn't work as expected for git import flow
2009699 - Failure to validate flavor RAM
2009754 - Footer is not sticky anymore in import forms
2009785 - CRI-O's version file should be pinned by MCO
2009791 - Installer: ibmcloud ignores install-config values
2009823 - [sig-arch] events should not repeat pathologically - reason/VSphereOlderVersionDetected Marking cluster un-upgradeable because one or more VMs are on hardware version vmx-13
2009840 - cannot build extensions on aarch64 because of unavailability of rhel-8-advanced-virt repo
2009859 - Large number of sessions created by vmware-vsphere-csi-driver-operator during e2e tests
2009873 - Stale Logical Router Policies and Annotations for a given node
2009879 - There should be test-suite coverage to ensure admin-acks work as expected
2009888 - SRO package name collision between official and community version
2010073 - uninstalling and then reinstalling sriov-network-operator is not working
2010174 - 2 PVs get created unexpectedly with different paths that actually refer to the same device on the node.
2010181 - Environment variables not getting reset on reload on deployment edit form
2010310 - [sig-instrumentation][Late] OpenShift alerting rules should have description and summary annotations [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2010341 - OpenShift Alerting Rules Style-Guide Compliance
2010342 - Local console builds can have out of memory errors
2010345 - OpenShift Alerting Rules Style-Guide Compliance
2010348 - Reverts PIE build mode for K8S components
2010352 - OpenShift Alerting Rules Style-Guide Compliance
2010354 - OpenShift Alerting Rules Style-Guide Compliance
2010359 - OpenShift Alerting Rules Style-Guide Compliance
2010368 - OpenShift Alerting Rules Style-Guide Compliance
2010376 - OpenShift Alerting Rules Style-Guide Compliance
2010662 - Cluster is unhealthy after image-registry-operator tests
2010663 - OpenShift Alerting Rules Style-Guide Compliance (ovn-kubernetes subcomponent)
2010665 - Bootkube tries to use oc after cluster bootstrap is done and there is no API
2010698 - [BM] [IPI] [Dual Stack] Installer must ensure ipv6 short forms too if clusterprovisioning IP is specified as ipv6 address
2010719 - etcdHighNumberOfFailedGRPCRequests runbook is missing
2010864 - Failure building EFS operator
2010910 - ptp worker events unable to identify interface for multiple interfaces
2010911 - RenderOperatingSystem() returns wrong OS version on OCP 4.7.24
2010921 - Azure Stack Hub does not handle additionalTrustBundle
2010931 - SRO CSV uses non default category "Drivers and plugins"
2010946 - concurrent CRD from ovirt-csi-driver-operator gets reconciled by CVO after deployment, changing CR as well.
2011038 - optional operator conditions are confusing
2011063 - CVE-2021-39226 grafana: Snapshot authentication bypass
2011171 - diskmaker-manager constantly redeployed by LSO when creating LV's
2011293 - Build pods are not pulling images if we are not explicitly giving the registry name with the image
2011368 - Tooltip in pipeline visualization shows misleading data
2011386 - [sig-arch] Check if alerts are firing during or after upgrade success --- alert KubePodNotReady fired for 60 seconds with labels
2011411 - Managed Service's Cluster overview page contains link to missing Storage dashboards
2011443 - Cypress tests assuming Admin Perspective could fail on shared/reference cluster
2011513 - Kubelet rejects pods that use resources that should be freed by completed pods
2011668 - Machine stuck in deleting phase in VMware "reconciler failed to Delete machine"
2011693 - (release-4.10) "insightsclient_request_recvreport_total" metric is always incremented
2011698 - After upgrading cluster to 4.8 the kube-state-metrics service doesn't export namespace labels anymore
2011733 - Repository README points to broken documentation link
2011753 - Ironic resumes clean before raid configuration job is actually completed
2011809 - The nodes page in the openshift console doesn't work. You just get a blank page
2011822 - Obfuscation doesn't work at clusters with OVN
2011882 - SRO helm charts not synced with templates
2011893 - Validation: BMC driver ipmi is not supported for secure UEFI boot
2011896 - [4.10] ClusterVersion Upgradeable=False MultipleReasons should include all messages
2011903 - vsphere-problem-detector: session leak
2011927 - OLM should allow users to specify a proxy for GRPC connections
2011956 - [tracker] Kubelet rejects pods that use resources that should be freed by completed pods
2011960 - [tracker] Storage operator is not available after reboot cluster instances
2011971 - ICNI2 pods are stuck in ContainerCreating state
2011972 - Ingress operator not creating wildcard route for hypershift clusters
2011977 - SRO bundle references non-existent image
2012069 - Refactoring Status controller
2012177 - [OCP 4.9 + OCS 4.8.3] Overview tab is missing under Storage after successful deployment on UI
2012228 - ibmcloud: credentialsrequests invalid for machine-api-operator: resource-group
2012233 - [IBMCLOUD] IPI: "Exceeded limit of remote rules per security group (the limit is 5 remote rules per security group)"
2012235 - [IBMCLOUD] IPI: IBM cloud provider requires ResourceGroupName in cloudproviderconfig
2012317 - Dynamic Plugins: ListPageCreateDropdown items cut off
2012407 - [e2e][automation] improve vm tab console tests
2012426 - ThanosSidecarBucketOperationsFailed/ThanosSidecarUnhealthy alerts don't have namespace label
2012562 - migration condition is not detected in list view
2012770 - when using expression metric openshift_apps_deploymentconfigs_last_failed_rollout_time namespace label is re-written
2012780 - The port 50936 used by haproxy is occupied by kube-apiserver
2012838 - Setting the default maximum container root partition size for Overlay with CRI-O stops working
2012902 - Neutron Ports assigned to Completed Pods are not reused
2012915 - kube_persistentvolumeclaim_labels and kube_persistentvolume_labels are missing in OCP 4.8 monitoring stack
2012971 - Disable operands deletes
2013034 - Cannot install to openshift-nmstate namespace
2013127 - OperatorHub links could not be opened in a new tabs (sharing and open a deep link works fine)
2013199 - post reboot of node SRIOV policy taking huge time
2013203 - UI breaks when trying to create block pool before storage cluster/system creation
2013222 - Full breakage for nightly payload promotion
2013273 - Nil pointer exception when phc2sys options are missing
2013321 - TuneD: high CPU utilization of the TuneD daemon.
2013416 - Multiple assets emit different content to the same filename
2013431 - Application selector dropdown has incorrect font-size and positioning
2013528 - mapi_current_pending_csr is always set to 1 on OpenShift Container Platform 4.8
2013545 - Service binding created outside topology is not visible
2013599 - Scorecard support storage is not included in ocp4.9
2013632 - Correction/Changes in Quick Start Guides for ODF 4.9 (Install ODF guide)
2013646 - fsync controller will show false positive if gaps in metrics are observed.
2013710 - ZTP Operator subscriptions for 4.9 release branch should point to 4.9 by default
2013751 - Service details page is showing wrong in-cluster hostname
2013787 - There are two titles 'Network Attachment Definition Details' on NAD details page
2013871 - Resource table headings are not aligned with their column data
2013895 - Cannot enable accelerated network via MachineSets on Azure
2013920 - "--collector.filesystem.ignored-mount-points is DEPRECATED and will be removed in 2.0.0, use --collector.filesystem.mount-points-exclude"
2013930 - Create Buttons enabled for Bucket Class, Backingstore and Namespace Store in the absence of Storagesystem(or MCG)
2013969 - oVirt CSI driver fails on creating PVCs on hosted engine storage domain
2013990 - Observe dashboard crashes on reload when perspective has changed (in another tab)
2013996 - Project detail page: Action "Delete Project" does nothing for the default project
2014071 - Payload imagestream new tags not properly updated during cluster upgrade
2014153 - SRIOV exclusive pooling
2014202 - [OCP-4.8.10] OVN-Kubernetes: service IP is not responding when egressIP set to the namespace
2014238 - AWS console test is failing on importing duplicate YAML definitions
2014245 - Several aria-labels, external links, and labels aren't internationalized
2014248 - Several files aren't internationalized
2014352 - Could not filter out machine by using node name on machines page
2014464 - Unexpected spacing/padding below navigation groups in developer perspective
2014471 - Helm Release notes tab is not automatically open after installing a chart for other languages
2014486 - Integration Tests: OLM single namespace operator tests failing
2014488 - Custom operator cannot change orders of condition tables
2014497 - Regex slows down different forms and creates too much recursion errors in the log
2014538 - Kuryr controller crash looping on self._get_vip_port(loadbalancer).id 'NoneType' object has no attribute 'id'
2014614 - Metrics scraping requests should be assigned to exempt priority level
2014710 - TestIngressStatus test is broken on Azure
2014954 - The prometheus-k8s-{0,1} pods are CrashLoopBackoff repeatedly
2014995 - oc adm must-gather cannot gather audit logs with 'None' audit profile
2015115 - [RFE] PCI passthrough
2015133 - [IBMCLOUD] ServiceID API key credentials seems to be insufficient for ccoctl '--resource-group-name' parameter
2015154 - Support ports defined networks and primarySubnet
2015274 - Yarn dev fails after updates to dynamic plugin JSON schema logic
2015337 - 4.9.0 GA MetalLB operator image references need to be adjusted to match production
2015386 - Possibility to add labels to the built-in OCP alerts
2015395 - Table head on Affinity Rules modal is not fully expanded
2015416 - CI implementation for Topology plugin
2015418 - Project Filesystem query returns No datapoints found
2015420 - No vm resource in project view's inventory
2015422 - No conflict checking on snapshot name
2015472 - Form and YAML view switch button should have distinguishable status
2015481 - [4.10] sriov-network-operator daemon pods are failing to start
2015493 - Cloud Controller Manager Operator does not respect 'additionalTrustBundle' setting
2015496 - Storage - PersistentVolumes : Claim column value 'No Claim' in English
2015498 - [UI] Add capacity when not applicable (for MCG only deployment and External mode cluster) fails to pass any info. to user and tries to just load a blank screen on 'Add Capacity' button click
2015506 - Home - Search - Resources - APIRequestCount : hard to select an item from ellipsis menu
2015515 - Kubelet checks all providers even if one is configured: NoCredentialProviders: no valid providers in chain.
2015535 - Administration - ResourceQuotas - ResourceQuota details: Inside Pie chart 'x% used' is in English
2015549 - Observe - Metrics: Column heading and pagination text is in English
2015557 - Workloads - DeploymentConfigs : Error message is in English
2015568 - Compute - Nodes : CPU column's values are in English
2015635 - Storage operator fails causing installation to fail on ASH
2015660 - "Finishing boot source customization" screen should not use term "patched"
2015793 - [hypershift] The collect-profiles job's pods should run on the control-plane node
2015806 - Metrics view in Deployment reports "Forbidden" when not cluster-admin
2015819 - Conmon sandbox processes run on non-reserved CPUs with workload partitioning
2015837 - OS_CLOUD overwrites install-config's platform.openstack.cloud
2015950 - update from 4.7.22 to 4.8.11 is failing due to large amount of secrets to watch
2015952 - RH CodeReady Workspaces Operator in e2e testing will soon fail
2016004 - [RFE] RHCOS: help determining whether a user-provided image was already booted (Ignition provisioning already performed)
2016008 - [4.10] Bootimage bump tracker
2016052 - No e2e CI presubmit configured for release component azure-file-csi-driver
2016053 - No e2e CI presubmit configured for release component azure-file-csi-driver-operator
2016054 - No e2e CI presubmit configured for release component cluster-autoscaler
2016055 - No e2e CI presubmit configured for release component console
2016058 - openshift-sync does not synchronise in "ose-jenkins:v4.8"
2016064 - No e2e CI presubmit configured for release component ibm-cloud-controller-manager
2016065 - No e2e CI presubmit configured for release component ibmcloud-machine-controllers
2016175 - Pods get stuck in ContainerCreating state when attaching volumes fails on SNO clusters.
2016179 - Add Sprint 208 translations
2016228 - Collect Profiles pprof secret is hardcoded to openshift-operator-lifecycle-manager
2016235 - should update to 7.5.11 for grafana resources version label
2016296 - Openshift virtualization : Create Windows Server 2019 VM using template : Fails
2016334 - shiftstack: SRIOV nic reported as not supported
2016352 - Some pods start before CA resources are present
2016367 - Empty task box is getting created for a pipeline without finally task
2016435 - Duplicate AlertmanagerClusterFailedToSendAlerts alerts
2016438 - Feature flag gating is missing in few extensions contributed via knative plugin
2016442 - OCPonRHV: pvc should be in Bound state and without error when choosing default sc
2016446 - [OVN-Kubernetes] Egress Networkpolicy is failing Intermittently for statefulsets
2016453 - Complete i18n for GaugeChart defaults
2016479 - iface-id-ver is not getting updated for existing lsp
2016925 - Dashboards with All filter, change to a specific value and change back to All, data will disappear
2016951 - dynamic actions list is not disabling "open console" for stopped vms
2016955 - m5.large instance type for bootstrap node is hardcoded causing deployments to fail if instance type is not available
2016988 - NTO does not set io_timeout and max_retries for AWS Nitro instances
2017016 - [REF] Virtualization menu
2017036 - [sig-network-edge][Feature:Idling] Unidling should handle many TCP connections fails in periodic-ci-openshift-release-master-ci-4.9-e2e-openstack-ovn
2017050 - Dynamic Plugins: Shared modules loaded multiple times, breaking use of PatternFly
2017130 - t is not a function error navigating to details page
2017141 - Project dropdown has a dynamic inline width added which can cause min-width issue
2017244 - ovirt csi operator static files creation is in the wrong order
2017276 - [4.10] Volume mounts not created with the correct security context
2017327 - Running opm index prune fails with error: removing operator package cic-operator: FOREIGN KEY constraint failed
2017427 - NTO does not restart TuneD daemon when profile application is taking too long
2017535 - Broken Argo CD link image on GitOps Details Page
2017547 - Siteconfig application sync fails with The AgentClusterInstall is invalid: spec.provisionRequirements.controlPlaneAgents: Required value when updating images references
2017564 - On-prem prepender dispatcher script overwrites DNS search settings
2017565 - CCMO does not handle additionalTrustBundle on Azure Stack
2017566 - MetalLB: Web Console -Create Address pool form shows address pool name twice
2017606 - [e2e][automation] add test to verify send key for VNC console
2017650 - [OVN]EgressFirewall cannot be applied correctly if cluster has windows nodes
2017656 - VM IP address is "undefined" under VM details -> ssh field
2017663 - SSH password authentication is disabled when public key is not supplied
2017680 - [gcp] Couldn’t enable support for instances with GPUs on GCP
2017732 - [KMS] Prevent creation of encryption enabled storageclass without KMS connection set
2017752 - (release-4.10) obfuscate identity provider attributes in collected authentication.operator.openshift.io resource
2017756 - overlaySize setting on containerruntimeconfig is ignored due to cri-o defaults
2017761 - [e2e][automation] dummy bug for 4.9 test dependency
2017872 - Add Sprint 209 translations
2017874 - The installer is incorrectly checking the quota for X instances instead of G and VT instances
2017879 - Add Chinese translation for "alternate"
2017882 - multus: add handling of pod UIDs passed from runtime
2017909 - [ICNI 2.0] ovnkube-masters stop processing add/del events for pods
2018042 - HorizontalPodAutoscaler CPU averageValue did not show up in HPA metrics GUI
2018093 - Managed cluster should ensure control plane pods do not run in best-effort QoS
2018094 - the tooltip length is limited
2018152 - CNI pod is not restarted when It cannot start servers due to ports being used
2018208 - e2e-metal-ipi-ovn-ipv6 are failing 75% of the time
2018234 - user settings are saved in local storage instead of on cluster
2018264 - Delete Export button doesn't work in topology sidebar (general issue with unknown CSV?)
2018272 - Deployment managed by link and topology sidebar links to invalid resource page (at least for Exports)
2018275 - Topology graph doesn't show context menu for Export CSV
2018279 - Edit and Delete confirmation modals for managed resource should close when the managed resource is clicked
2018380 - Migrate docs links to access.redhat.com
2018413 - Error: context deadline exceeded, OCP 4.8.9
2018428 - PVC is deleted along with VM even with "Delete Disks" unchecked
2018445 - [e2e][automation] enhance tests for downstream
2018446 - [e2e][automation] move tests to different level
2018449 - [e2e][automation] add test about create/delete network attachment definition
2018490 - [4.10] Image provisioning fails with file name too long
2018495 - Fix typo in internationalization README
2018542 - Kernel upgrade does not reconcile DaemonSet
2018880 - Get 'No datapoints found.' when query metrics about alert rule KubeCPUQuotaOvercommit and KubeMemoryQuotaOvercommit
2018884 - QE - Adapt crw-basic feature file to OCP 4.9/4.10 changes
2018935 - go.sum not updated, that ART extracts version string from, WAS: Missing backport from 4.9 for Kube bump PR#950
2018965 - e2e-metal-ipi-upgrade is permafailing in 4.10
2018985 - The rootdisk size is 15Gi of windows VM in customize wizard
2019001 - AWS: Operator degraded (CredentialsFailing): 1 of 6 credentials requests are failing to sync.
2019096 - Update SRO leader election timeout to support SNO
2019129 - SRO in operator hub points to wrong repo for README
2019181 - Performance profile does not apply
2019198 - ptp offset metrics are not named according to the log output
2019219 - [IBMCLOUD]: cloud-provider-ibm missing IAM permissions in CCCMO CredentialRequest
2019284 - Stop action should not in the action list while VMI is not running
2019346 - zombie processes accumulation and Argument list too long
2019360 - [RFE] Virtualization Overview page
2019452 - Logger object in LSO appends to existing logger recursively
2019591 - Operator install modal body that scrolls has incorrect padding causing shadow position to be incorrect
2019634 - Pause and migration is enabled in action list for a user who has view only permission
2019636 - Actions in VM tabs should be disabled when user has view only permission
2019639 - "Take snapshot" should be disabled while VM image is still being imported
2019645 - Create button is not removed on "Virtual Machines" page for view only user
2019646 - Permission error should pop-up immediately while clicking "Create VM" button on template page for view only user
2019647 - "Remove favorite" and "Create new Template" should be disabled in template action list for view only user
2019717 - can't delete VM with un-owned PVC attached
2019722 - The shared-resource-csi-driver-node pod runs as “BestEffort” qosClass
2019739 - The shared-resource-csi-driver-node uses imagePullPolicy as "Always"
2019744 - [RFE] Suggest users to download newest RHEL 8 version
2019809 - [OVN][Upgrade] After upgrade to 4.7.34 ovnkube-master pods are in CrashLoopBackOff/ContainerCreating and other multiple issues at OVS/OVN level
2019827 - Display issue with top-level menu items running demo plugin
2019832 - 4.10 Nightlies blocked: Failed to upgrade authentication, operator was degraded
2019886 - Kuryr unable to finish ports recovery upon controller restart
2019948 - [RFE] Restructuring Virtualization links
2019972 - The Nodes section doesn't display the csr of the nodes that are trying to join the cluster
2019977 - Installer doesn't validate region causing binary to hang with a 60 minute timeout
2019986 - Dynamic demo plugin fails to build
2019992 - instance:node_memory_utilisation:ratio metric is incorrect
2020001 - Update dockerfile for demo dynamic plugin to reflect dir change
2020003 - MCD does not regard "dangling" symlinks as files, attempts to write through them on next backup, resulting in "not writing through dangling symlink" error and degradation.
2020107 - cluster-version-operator: remove runlevel from CVO namespace
2020153 - Creation of Windows high performance VM fails
2020216 - installer: Azure storage container blob where is stored bootstrap.ign file shouldn't be public
2020250 - Replacing deprecated ioutil
2020257 - Dynamic plugin with multiple webpack compilation passes may fail to build
2020275 - ClusterOperators link in console returns blank page during upgrades
2020377 - permissions error while using tcpdump option with must-gather
2020489 - coredns_dns metrics don't include the custom zone metrics data due to CoreDNS prometheus plugin is not defined
2020498 - "Show PromQL" button is disabled
2020625 - [AUTH-52] User fails to login from web console with keycloak OpenID IDP after enable group membership sync feature
2020638 - [4.7] CI conformance test failures related to CustomResourcePublishOpenAPI
2020664 - DOWN subports are not cleaned up
2020904 - When trying to create a connection from the Developer view between VMs, it fails
2021016 - 'Prometheus Stats' of dashboard 'Prometheus Overview' miss data on console compared with Grafana
2021017 - 404 page not found error on knative eventing page
2021031 - QE - Fix the topology CI scripts
2021048 - [RFE] Added MAC Spoof check
2021053 - Metallb operator presented as community operator
2021067 - Extensive number of requests from storage version operator in cluster
2021081 - Missing PolicyGenTemplate for configuring Local Storage Operator LocalVolumes
2021135 - [azure-file-csi-driver] "make unit-test" returns non-zero code, but tests pass
2021141 - Cluster should allow a fast rollout of kube-apiserver is failing on single node
2021151 - Sometimes the DU node does not get the performance profile configuration applied and MachineConfigPool stays stuck in Updating
2021152 - imagePullPolicy is "Always" for ptp operator images
2021191 - Project admins should be able to list available network attachment definitions
2021205 - Invalid URL in git import form causes validation to not happen on URL change
2021322 - cluster-api-provider-azure should populate purchase plan information
2021337 - Dynamic Plugins: ResourceLink doesn't render when passed a groupVersionKind
2021364 - Installer requires invalid AWS permission s3:GetBucketReplication
2021400 - Bump documentationBaseURL to 4.10
2021405 - [e2e][automation] VM creation wizard Cloud Init editor
2021433 - "[sig-builds][Feature:Builds][pullsearch] docker build where the registry is not specified" test fail permanently on disconnected
2021466 - [e2e][automation] Windows guest tool mount
2021544 - OCP 4.6.44 - Ingress VIP assigned as secondary IP in ovs-if-br-ex and added to resolv.conf as nameserver
2021551 - Build is not recognizing the USER group from an s2i image
2021607 - Unable to run openshift-install with a vcenter hostname that begins with a numeric character
2021629 - api request counts for current hour are incorrect
2021632 - [UI] Clicking on odf-operator breadcrumb from StorageCluster details page displays empty page
2021693 - Modals assigned modal-lg class are no longer the correct width
2021724 - Observe > Dashboards: Graph lines are not visible when obscured by other lines
2021731 - CCO occasionally down, reporting networksecurity.googleapis.com API as disabled
2021936 - Kubelet version in RPMs should be using Dockerfile label instead of git tags
2022050 - [BM][IPI] Failed during bootstrap - unable to read client-key /var/lib/kubelet/pki/kubelet-client-current.pem
2022053 - dpdk application with vhost-net is not able to start
2022114 - Console logging every proxy request
2022144 - 1 of 3 ovnkube-master pods stuck in clbo after ipi bm deployment - dualstack (Intermittent)
2022251 - wait interval in case of a failed upload due to 403 is unnecessarily long
2022399 - MON_DISK_LOW troubleshooting guide link when clicked, gives 404 error.
2022447 - ServiceAccount in manifests conflicts with OLM
2022502 - Patternfly tables with a checkbox column are not displaying correctly because of conflicting CSS rules.
2022509 - getOverrideForManifest does not check manifest.GVK.Group
2022536 - WebScale: duplicate ecmp next hop error caused by multiple of the same gateway IPs in ovnkube cache
2022612 - no namespace field for "Kubernetes / Compute Resources / Namespace (Pods)" admin console dashboard
2022627 - Machine object not picking up external FIP added to an openstack vm
2022646 - configure-ovs.sh failure - Error: unknown connection 'WARN:'
2022707 - Observe / monitoring dashboard shows forbidden errors on Dev Sandbox
2022801 - Add Sprint 210 translations
2022811 - Fix kubelet log rotation file handle leak
2022812 - [SCALE] ovn-kube service controller executes unnecessary load balancer operations
2022824 - Large number of sessions created by vmware-vsphere-csi-driver-operator during e2e tests
2022880 - Pipeline renders with minor visual artifact with certain task dependencies
2022886 - Incorrect URL in operator description
2023042 - CRI-O filters custom runtime allowed annotation when both custom workload and custom runtime sections specified under the config
2023060 - [e2e][automation] Windows VM with CDROM migration
2023077 - [e2e][automation] Home Overview Virtualization status
2023090 - [e2e][automation] Examples of Import URL for VM templates
2023102 - [e2e][automation] Cloudinit disk of VM from custom template
2023216 - ACL for a deleted egressfirewall still present on node join switch
2023228 - Remove Tech preview badge on Trigger components 1.6 OSP on OCP 4.9
2023238 - [sig-devex][Feature:ImageEcosystem][python][Slow] hot deploy for openshift python image Django example should work with hot deploy
2023342 - SCC admission should take ephemeralContainers into account
2023356 - Devfiles can't be loaded in Safari on macOS (403 - Forbidden)
2023434 - Update Azure Machine Spec API to accept Marketplace Images
2023500 - Latency experienced while waiting for volumes to attach to node
2023522 - can't remove package from index: database is locked
2023560 - "Network Attachment Definitions" has no project field on the top in the list view
2023592 - [e2e][automation] add mac spoof check for nad
2023604 - ACL violation when deleting a provisioning-configuration resource
2023607 - console returns blank page when normal user without any projects visit Installed Operators page
2023638 - Downgrade support level for extended control plane integration to Dev Preview
2023657 - inconsistent behaviours of adding ssh key on rhel node between 4.9 and 4.10
2023675 - Changing CNV Namespace
2023779 - Fix Patch 104847 in 4.9
2023781 - initial hardware devices is not loading in wizard
2023832 - CCO updates lastTransitionTime for non-Status changes
2023839 - Bump recommended FCOS to 34.20211031.3.0
2023865 - Console css overrides prevent dynamic plug-in PatternFly tables from displaying correctly
2023950 - make test-e2e-operator on kubernetes-nmstate results in failure to pull image from "registry:5000" repository
2023985 - [4.10] OVN idle service cannot be accessed after upgrade from 4.8
2024055 - External DNS added extra prefix for the TXT record
2024108 - Occasionally node remains in SchedulingDisabled state even after update has been completed successfully
2024190 - e2e-metal UPI is permafailing with inability to find rhcos.json
2024199 - 400 Bad Request error for some queries for the non admin user
2024220 - Cluster monitoring checkbox flickers when installing Operator in all-namespace mode
2024262 - Sample catalog is not displayed when one API call to the backend fails
2024309 - cluster-etcd-operator: defrag controller needs to provide proper observability
2024316 - modal about support displays wrong annotation
2024328 - [oVirt / RHV] PV disks are lost when machine deleted while node is disconnected
2024399 - Extra space is in the translated text of "Add/Remove alternate service" on Create Route page
2024448 - When ssh_authorized_keys is empty in form view it should not appear in yaml view
2024493 - Observe > Alerting > Alerting rules page throws error trying to destructure undefined
2024515 - test-blocker: Ceph-storage-plugin tests failing
2024535 - hotplug disk missing OwnerReference
2024537 - WINDOWS_IMAGE_LINK does not refer to windows cloud image
2024547 - Detail page is breaking for namespace store, backing store, and bucket class.
2024551 - KMS resources not getting created for IBM FlashSystem storage
2024586 - Special Resource Operator(SRO) - Empty image in BuildConfig when using RT kernel
2024613 - pod-identity-webhook starts without tls
2024617 - vSphere CSI tests constantly failing with Rollout of the monitoring stack failed and is degraded
2024665 - Bindable services are not shown on topology
2024731 - linuxptp container: unnecessary checking of interfaces
2024750 - i18n some remaining OLM items
2024804 - gcp-pd-csi-driver does not use trusted-ca-bundle when cluster proxy configured
2024826 - [RHOS/IPI] Masters are not joining a clusters when installing on OpenStack
2024841 - test Keycloak with latest tag
2024859 - Not able to deploy an existing image from private image registry using developer console
2024880 - Egress IP breaks when network policies are applied
2024900 - Operator upgrade kube-apiserver
2024932 - console throws "Unauthorized" error after logging out
2024933 - openshift-sync plugin does not sync existing secrets/configMaps on start up
2025093 - Installer does not honour diskformat specified in storage policy and defaults to zeroedthick
2025230 - ClusterAutoscalerUnschedulablePods should not be a warning
2025266 - CreateResource route has exact prop which need to be removed
2025301 - [e2e][automation] VM actions availability in different VM states
2025304 - overwrite storage section of the DV spec instead of the pvc section
2025431 - [RFE]Provide specific windows source link
2025458 - [IPI-AWS] cluster-baremetal-operator pod in a crashloop state after patching from 4.7.21 to 4.7.36
2025464 - [aws] openshift-install gather bootstrap collects logs for bootstrap and only one master node
2025467 - [OVN-K][ETP=local] Host to service backed by ovn pods doesn't work for ExternalTrafficPolicy=local
2025481 - Update VM Snapshots UI
2025488 - [DOCS] Update the doc for nmstate operator installation
2025592 - ODC 4.9 supports invalid devfiles only
2025765 - It should not try to load from storageProfile after unchecking "Apply optimized StorageProfile settings"
2025767 - VMs orphaned during machineset scaleup
2025770 - [e2e] non-priv seems looking for v2v-vmware configMap in ns "kubevirt-hyperconverged" while using customize wizard
2025788 - [IPI on azure]Pre-check on IPI Azure, should check VM Size’s vCPUsAvailable instead of vCPUs for the sku.
2025821 - Make "Network Attachment Definitions" available to regular user
2025823 - The console nav bar ignores plugin separator in existing sections
2025830 - CentOS capitalization is wrong
2025837 - Warn users that the RHEL URL expires
2025884 - External CCM deploys openstack-cloud-controller-manager from quay.io/openshift/origin-
2025903 - [UI] RoleBindings tab doesn't show correct rolebindings
2026104 - [sig-imageregistry][Feature:ImageAppend] Image append should create images by appending them [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2026178 - OpenShift Alerting Rules Style-Guide Compliance
2026209 - Updating a task fails (tekton hub integration)
2026223 - Internal error occurred: failed calling webhook "ptpconfigvalidationwebhook.openshift.io"
2026321 - [UPI on Azure] Shall we remove allowedValue about VMSize in ARM templates
2026343 - [upgrade from 4.5 to 4.6] .status.connectionState.address of catsrc community-operators is not correct
2026352 - Kube-Scheduler revision-pruner fail during install of new cluster
2026374 - aws-pod-identity-webhook go.mod version out of sync with build environment
2026383 - Error when rendering custom Grafana dashboard through ConfigMap
2026387 - node tuning operator metrics endpoint serving old certificates after certificate rotation
2026396 - Cachito Issues: sriov-network-operator Image build failure
2026488 - openshift-controller-manager - delete event is repeating pathologically
2026489 - ThanosRuleRuleEvaluationLatencyHigh alerts when a large number of alerts is defined.
2026560 - Cluster-version operator does not remove unrecognized volume mounts
2026699 - fixed a bug with missing metadata
2026813 - add Mellanox CX-6 Lx DeviceID 101f NIC support in SR-IOV Operator
2026898 - Description/details are missing for Local Storage Operator
2027132 - Use the specific icon for Fedora and CentOS template
2027238 - "Node Exporter / USE Method / Cluster" CPU utilization graph shows incorrect legend
2027272 - KubeMemoryOvercommit alert should be human readable
2027281 - [Azure] External-DNS cannot find the private DNS zone in the resource group
2027288 - Devfile samples can't be loaded after fixing it on Safari (redirect caching issue)
2027299 - The status of checkbox component is not revealed correctly in code
2027311 - K8s watch hooks do not work when fetching core resources
2027342 - Alert ClusterVersionOperatorDown is firing on OpenShift Container Platform after ca certificate rotation
2027363 - The azure-file-csi-driver and azure-file-csi-driver-operator don't use the downstream images
2027387 - [IBMCLOUD] Terraform ibmcloud-provider buffers entirely the qcow2 image causing spikes of 5GB of RAM during installation
2027498 - [IBMCloud] SG Name character length limitation
2027501 - [4.10] Bootimage bump tracker
2027524 - Delete Application doesn't delete Channels or Brokers
2027563 - e2e/add-flow-ci.feature fix accessibility violations
2027585 - CVO crashes when changing spec.upstream to a cincinnati graph which includes invalid conditional edges
2027629 - Gather ValidatingWebhookConfiguration and MutatingWebhookConfiguration resource definitions
2027685 - openshift-cluster-csi-drivers pods crashing on PSI
2027745 - default samplesRegistry prevents the creation of imagestreams when registrySources.allowedRegistries is enforced
2027824 - ovnkube-master CrashLoopBackoff: panic: Expected slice or struct but got string
2027917 - No settings in hostfirmwaresettings and schema objects for masters
2027927 - sandbox creation fails due to obsolete option in /etc/containers/storage.conf
2027982 - nncp stuck at ConfigurationProgressing
2028019 - Max pending serving CSRs allowed in cluster machine approver is not right for UPI clusters
2028024 - After deleting a SpecialResource, the node is still tagged although the driver is removed
2028030 - Panic detected in cluster-image-registry-operator pod
2028042 - Desktop viewer for Windows VM shows "no Service for the RDP (Remote Desktop Protocol) can be found"
2028054 - Cloud controller manager operator can't get leader lease when upgrading from 4.8 up to 4.9
2028106 - [RFE] Use dynamic plugin actions for kubevirt plugin
2028141 - Console tests doesn't pass on Node.js 15 and 16
2028160 - Remove i18nKey in network-policy-peer-selectors.tsx
2028162 - Add Sprint 210 translations
2028170 - Remove leading and trailing whitespace
2028174 - Add Sprint 210 part 2 translations
2028187 - Console build doesn't pass on Node.js 16 because node-sass doesn't support it
2028217 - Cluster-version operator does not default Deployment replicas to one
2028240 - Multiple CatalogSources causing higher CPU use than necessary
2028268 - Password parameters are listed in FirmwareSchema even though they cannot and shouldn't be set in HostFirmwareSettings
2028325 - disableDrain should be set automatically on SNO
2028484 - AWS EBS CSI driver's livenessprobe does not respect operator's loglevel
2028531 - Missing netFilter to the list of parameters when platform is OpenStack
2028610 - Installer doesn't retry on GCP rate limiting
2028685 - LSO repeatedly reports errors while diskmaker-discovery pod is starting
2028695 - destroy cluster does not prune bootstrap instance profile
2028731 - The containerruntimeconfig controller has wrong assumption regarding the number of containerruntimeconfigs
2028802 - CRI-O panic due to invalid memory address or nil pointer dereference
2028816 - VLAN IDs not released on failures
2028881 - Override not working for the PerformanceProfile template
2028885 - Console should show an error context if it logs an error object
2028949 - Masthead dropdown item hover text color is incorrect
2028963 - Whereabouts should reconcile stranded IP addresses
2029034 - enabling ExternalCloudProvider leads to inoperative cluster
2029178 - Create VM with wizard - page is not displayed
2029181 - Missing CR from PGT
2029273 - wizard cannot be used if project field is "All Projects"
2029369 - Cypress tests github rate limit errors
2029371 - patch pipeline--worker nodes unexpectedly reboot during scale out
2029394 - missing empty text for hardware devices at wizard review
2029414 - Alibaba Disk snapshots with XFS filesystem cannot be used
2029416 - Alibaba Disk CSI driver does not use credentials provided by CCO / ccoctl
2029521 - EFS CSI driver cannot delete volumes under load
2029570 - Azure Stack Hub: CSI Driver does not use user-ca-bundle
2029579 - Clicking on an Application which has a Helm Release in it causes an error
2029644 - New resource FirmwareSchema - reset_required exists for Dell machines and doesn't for HPE
2029645 - Sync upstream 1.15.0 downstream
2029671 - VM action "pause" and "clone" should be disabled while VM disk is still being imported
2029742 - [ovn] Stale lr-policy-list and snat rules left for egressip
2029750 - cvo keep restart due to it fail to get feature gate value during the initial start stage
2029785 - CVO panic when an edge is included in both edges and conditionaledges
2029843 - Downstream ztp-site-generate-rhel8 4.10 container image missing content (/home/ztp)
2030003 - HFS CRD: Attempt to set Integer parameter to not-numeric string value - no error
2030029 - [4.10][goroutine]Namespace stuck terminating: Failed to delete all resource types, 1 remaining: unexpected items still remain in namespace
2030228 - Fix StorageSpec resources field to use correct API
2030229 - Mirroring status card reflect wrong data
2030240 - Hide overview page for non-privileged user
2030305 - Export App job does not complete
2030347 - kube-state-metrics exposes metrics about resource annotations
2030364 - Shared resource CSI driver monitoring is not setup correctly
2030488 - Numerous Azure CI jobs are Failing with Partially Rendered machinesets
2030534 - Node selector/tolerations rules are evaluated too early
2030539 - Prometheus is not highly available
2030556 - Don't display Description or Message fields for alerting rules if those annotations are missing
2030568 - Operator installation fails to parse operatorframework.io/initialization-resource annotation
2030574 - console service uses older "service.alpha.openshift.io" for the service serving certificates.
2030677 - BOND CNI: There is no option to configure MTU on a Bond interface
2030692 - NPE in PipelineJobListener.upsertWorkflowJob
2030801 - CVE-2021-44716 golang: net/http: limit growth of header canonicalization cache
2030806 - CVE-2021-44717 golang: syscall: don't close fd 0 on ForkExec error
2030847 - PerformanceProfile API version should be v2
2030961 - Customizing the OAuth server URL does not apply to upgraded cluster
2031006 - Application name input field is not autofocused when user selects "Create application"
2031012 - Services of type loadbalancer do not work if the traffic reaches the node from an interface different from br-ex
2031040 - Error screen when open topology sidebar for a Serverless / knative service which couldn't be started
2031049 - [vsphere upi] pod machine-config-operator cannot be started due to panic issue
2031057 - Topology sidebar for Knative services shows a small pod ring with "0 undefined" as tooltip
2031060 - Failing CSR Unit test due to expired test certificate
2031085 - ovs-vswitchd running more threads than expected
2031141 - Some pods not able to reach k8s api svc IP 198.223.0.1
2031228 - CVE-2021-43813 grafana: directory traversal vulnerability
2031502 - [RFE] New common templates crash the ui
2031685 - Duplicated forward upstreams should be removed from the dns operator
2031699 - The displayed ipv6 address of a dns upstream should be case sensitive
2031797 - [RFE] Order and text of Boot source type input are wrong
2031826 - CI tests needed to confirm driver-toolkit image contents
2031831 - OCP Console - Global CSS overrides affecting dynamic plugins
2031839 - Starting from Go 1.17 invalid certificates will render a cluster dysfunctional
2031858 - GCP beta-level Role (was: CCO occasionally down, reporting networksecurity.googleapis.com API as disabled)
2031875 - [RFE]: Provide online documentation for the SRO CRD (via oc explain)
2031926 - [ipv6dualstack] After SVC conversion from single stack only to RequireDualStack, cannot curl NodePort from the node itself
2032006 - openshift-gitops-application-controller-0 failed to schedule with sufficient node allocatable resource
2032111 - arm64 cluster, create project and deploy the example deployment, pod is CrashLoopBackOff due to the image is built on linux+amd64
2032141 - open the alertrule link in new tab, got empty page
2032179 - [PROXY] external dns pod cannot reach to cloud API in the cluster behind a proxy
2032296 - Cannot create machine with ephemeral disk on Azure
2032407 - UI will show the default openshift template wizard for HANA template
2032415 - Templates page - remove "support level" badge and add "support level" column which should not be hard coded
2032421 - [RFE] UI integration with automatic updated images
2032516 - Not able to import git repo with .devfile.yaml
2032521 - openshift-installer intermittent failure on AWS with "Error: Provider produced inconsistent result after apply" when creating the aws_vpc_dhcp_options_association resource
2032547 - hardware devices table has filter when table is empty
2032565 - Deploying compressed files with a MachineConfig resource degrades the MachineConfigPool
2032566 - Cluster-ingress-router does not support Azure Stack
2032573 - Adopting enforces deploy_kernel/ramdisk which does not work with deploy_iso
2032589 - DeploymentConfigs ignore resolve-names annotation
2032732 - Fix styling conflicts due to recent console-wide CSS changes
2032831 - Knative Services and Revisions are not shown when Service has no ownerReference
2032851 - Networking is "not available" in Virtualization Overview
2032926 - Machine API components should use K8s 1.23 dependencies
2032994 - AddressPool IP is not allocated to service external IP with aggregationLength 24
2032998 - Can not achieve 250 pods/node with OVNKubernetes in a multiple worker node cluster
2033013 - Project dropdown in user preferences page is broken
2033044 - Unable to change import strategy if devfile is invalid
2033098 - Conjunction in ProgressiveListFooter.tsx is not translatable
2033111 - IBM VPC operator library bump removed global CLI args
2033138 - "No model registered for Templates" shows on customize wizard
2033215 - Flaky CI: crud/other-routes.spec.ts fails sometimes with an cypress ace/a11y AssertionError: 1 accessibility violation was detected
2033239 - [IPI on Alibabacloud] 'openshift-install' gets the wrong region (‘cn-hangzhou’) selected
2033257 - unable to use configmap for helm charts
2033271 - [IPI on Alibabacloud] destroying cluster succeeded, but the resource group deletion wasn’t triggered
2033290 - Product builds for console are failing
2033382 - MAPO is missing machine annotations
2033391 - csi-driver-shared-resource-operator sets unused CVO-manifest annotations
2033403 - Devfile catalog does not show provider information
2033404 - Cloud event schema is missing source type and resource field is using wrong value
2033407 - Secure route data is not pre-filled in edit flow form
2033422 - CNO not allowing LGW conversion from SGW in runtime
2033434 - Offer darwin/arm64 oc in clidownloads
2033489 - CCM operator failing on baremetal platform
2033518 - [aws-efs-csi-driver]Should not accept invalid FSType in sc for AWS EFS driver
2033524 - [IPI on Alibabacloud] interactive installer cannot list existing base domains
2033536 - [IPI on Alibabacloud] bootstrap complains invalid value for alibabaCloud.resourceGroupID when updating "cluster-infrastructure-02-config.yml" status, which leads to bootstrap failed and all master nodes NotReady
2033538 - Gather Cost Management Metrics Custom Resource
2033579 - SRO cannot update the special-resource-lifecycle ConfigMap if the data field is undefined
2033587 - Flaky CI test project-dashboard.scenario.ts: Resource Quotas Card was not found on project detail page
2033634 - list-style-type: disc is applied to the modal dropdowns
2033720 - Update samples in 4.10
2033728 - Bump OVS to 2.16.0-33
2033729 - remove runtime request timeout restriction for azure
2033745 - Cluster-version operator makes upstream update service / Cincinnati requests more frequently than intended
2033749 - Azure Stack Terraform fails without Local Provider
2033750 - Local volume should pull multi-arch image for kube-rbac-proxy
2033751 - Bump kubernetes to 1.23
2033752 - make verify fails due to missing yaml-patch
2033784 - set kube-apiserver degraded=true if webhook matches a virtual resource
2034004 - [e2e][automation] add tests for VM snapshot improvements
2034068 - [e2e][automation] Enhance tests for 4.10 downstream
2034087 - [OVN] EgressIP was assigned to the node which is not egress node anymore
2034097 - [OVN] After edit EgressIP object, the status is not correct
2034102 - [OVN] Recreate the deleted EgressIP object got InvalidEgressIP warning
2034129 - blank page returned when clicking 'Get started' button
2034144 - [OVN AWS] ovn-kube egress IP monitoring cannot detect the failure on ovn-k8s-mp0
2034153 - CNO does not verify MTU migration for OpenShiftSDN
2034155 - [OVN-K] [Multiple External Gateways] Per pod SNAT is disabled
2034170 - Use function.knative.dev for Knative Functions related labels
2034190 - unable to add new VirtIO disks to VMs
2034192 - Prometheus fails to insert reporting metrics when the sample limit is met
2034243 - regular user can't load template list
2034245 - installing a cluster on aws, gcp always fails with "Error: Incompatible provider version"
2034248 - GPU/Host device modal is too small
2034257 - regular user Create VM missing permissions alert
2034285 - [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources [Serial] [Suite:openshift/conformance/serial]
2034287 - do not block upgrades if we can't create storageclass in 4.10 in vsphere
2034300 - Du validator policy is NonCompliant after DU configuration completed
2034319 - Negation constraint is not validating packages
2034322 - CNO doesn't pick up settings required when ExternalControlPlane topology
2034350 - The CNO should implement the Whereabouts IP reconciliation cron job
2034362 - update description of disk interface
2034398 - The Whereabouts IPPools CRD should include the podref field
2034409 - Default CatalogSources should be pointing to 4.10 index images
2034410 - Metallb BGP, BFD: prometheus is not scraping the frr metrics
2034413 - cloud-network-config-controller fails to init with secret "cloud-credentials" not found in manual credential mode
2034460 - Summary: cloud-network-config-controller does not account for different environment
2034474 - Template's boot source is "Unknown source" before and after setting enableCommonBootImageImport to true
2034477 - [OVN] Multiple EgressIP objects configured, EgressIPs weren't working properly
2034493 - Change cluster version operator log level
2034513 - [OVN] After update one EgressIP in EgressIP object, one internal IP lost from lr-policy-list
2034527 - IPI deployment fails 'timeout reached while inspecting the node' when provisioning network ipv6
2034528 - [IBM VPC] volumeBindingMode should be WaitForFirstConsumer
2034534 - Update ose-machine-api-provider-openstack images to be consistent with ART
2034537 - Update team
2034559 - KubeAPIErrorBudgetBurn firing outside recommended latency thresholds
2034563 - [Azure] create machine with wrong ephemeralStorageLocation value success
2034577 - Current OVN gateway mode should be reflected on node annotation as well
2034621 - context menu not popping up for application group
2034622 - Allow volume expansion by default in vsphere CSI storageclass 4.10
2034624 - Warn about unsupported CSI driver in vsphere operator
2034647 - missing volumes list in snapshot modal
2034648 - Rebase openshift-controller-manager to 1.23
2034650 - Rebase openshift/builder to 1.23
2034705 - vSphere: storage e2e tests logging configuration data
2034743 - EgressIP: assigning the same egress IP to a second EgressIP object after a ovnkube-master restart does not fail.
2034766 - Special Resource Operator(SRO) - no cert-manager pod created in dual stack environment
2034785 - ptpconfig with summary_interval cannot be applied
2034823 - RHEL9 should be starred in template list
2034838 - An external router can inject routes if no service is added
2034839 - Jenkins sync plugin does not synchronize ConfigMap having label role=jenkins-agent
2034879 - Lifecycle hook's name and owner shouldn't be allowed to be empty
2034881 - Cloud providers components should use K8s 1.23 dependencies
2034884 - ART cannot build the image because it tries to download controller-gen
2034889 - oc adm prune deployments does not work
2034898 - Regression in recently added Events feature
2034957 - update openshift-apiserver to kube 1.23.1
2035015 - ClusterLogForwarding CR remains stuck remediating forever
2035093 - openshift-cloud-network-config-controller never runs on Hypershift cluster
2035141 - [RFE] Show GPU/Host devices in template's details tab
2035146 - "kubevirt-plugin~PVC cannot be empty" shows on add-disk modal while adding existing PVC
2035167 - [cloud-network-config-controller] unable to deleted cloudprivateipconfig when deleting
2035199 - IPv6 support in mtu-migration-dispatcher.yaml
2035239 - e2e-metal-ipi-virtualmedia tests are permanently failing
2035250 - Peering with ebgp peer over multi-hops doesn't work
2035264 - [RFE] Provide a proper message for nonpriv user who not able to add PCI devices
2035315 - invalid test cases for AWS passthrough mode
2035318 - Upgrade management workflow needs to allow custom upgrade graph path for disconnected env
2035321 - Add Sprint 211 translations
2035326 - [ExternalCloudProvider] installation with additional network on workers fails
2035328 - Ccoctl does not ignore credentials request manifest marked for deletion
2035333 - Kuryr orphans ports on 504 errors from Neutron
2035348 - Fix two grammar issues in kubevirt-plugin.json strings
2035393 - oc set data --dry-run=server makes persistent changes to configmaps and secrets
2035409 - OLM E2E test depends on operator package that's no longer published
2035439 - SDN Automatic assignment EgressIP on GCP returned node IP address not egressIP address
2035453 - [IPI on Alibabacloud] 2 worker machines stuck in Failed phase due to connection to 'ecs-cn-hangzhou.aliyuncs.com' timeout, although the specified region is 'us-east-1'
2035454 - [IPI on Alibabacloud] the OSS bucket created during installation for image registry is not deleted after destroying the cluster
2035467 - UI: Queried metrics can't be ordered on Observe->Metrics page
2035494 - [SDN Migration]ovnkube-node pods CrashLoopBackOff after sdn migrated to ovn for RHEL workers
2035515 - [IBMCLOUD] allowVolumeExpansion should be true in storage class
2035602 - [e2e][automation] add tests for Virtualization Overview page cards
2035703 - Roles -> RoleBindings tab doesn't show RoleBindings correctly
2035704 - RoleBindings list page filter doesn't apply
2035705 - Azure 'Destroy cluster' gets stuck when the cluster resource group no longer exists
2035757 - [IPI on Alibabacloud] one master node turned NotReady which leads to installation failed
2035772 - AccessMode and VolumeMode is not reserved for customize wizard
2035847 - Two dashes in the Cronjob / Job pod name
2035859 - the output of opm render doesn't contain olm.constraint which is defined in dependencies.yaml
2035882 - [BIOS setting values] Create events for all invalid settings in spec
2035903 - One redundant capi-operator credential requests in "oc adm extract --credentials-requests"
2035910 - [UI] Manual approval options are missing after ODF 4.10 installation starts when Manual Update approval is chosen
2035927 - Cannot enable HighNodeUtilization scheduler profile
2035933 - volume mode and access mode are empty in customize wizard review tab
2035969 - "ip a " shows "Error: Peer netns reference is invalid" after create test pods
2035986 - Some pods under kube-scheduler/kube-controller-manager are using the deprecated annotation
2036006 - [BIOS setting values] Attempt to set Integer parameter results in preparation error
2036029 - Newly added cloud-network-config operator doesn't support AWS STS format credentials
2036096 - [azure-file-csi-driver] there are no e2e tests for NFS backend
2036113 - cluster scaling new nodes ovs-configuration fails on all new nodes
2036567 - [csi-driver-nfs] Upstream merge: Bump k8s libraries to 1.23
2036569 - [cloud-provider-openstack] Upstream merge: Bump k8s libraries to 1.23
2036577 - OCP 4.10 nightly builds from 4.10.0-0.nightly-s390x-2021-12-18-034912 to 4.10.0-0.nightly-s390x-2022-01-11-233015 fail to upgrade from OCP 4.9.11 and 4.9.12 for network type OVNKubernetes for zVM hypervisor environments
2036622 - sdn-controller crashes when restarted while a previous egress IP assignment exists
2036717 - Valid AlertmanagerConfig custom resource with valid a mute time interval definition is rejected
2036826 - oc adm prune deployments can prune the RC/RS
2036827 - The ccoctl still accepts CredentialsRequests without ServiceAccounts on GCP platform
2036861 - kube-apiserver is degraded while enable multitenant
2036937 - Command line tools page shows wrong download ODO link
2036940 - oc registry login fails if the file is empty or stdout
2036951 - [cluster-csi-snapshot-controller-operator] proxy settings is being injected in container
2036989 - Route URL copy to clipboard button wraps to a separate line by itself
2036990 - ZTP "DU Done inform policy" never becomes compliant on multi-node clusters
2036993 - Machine API components should use Go lang version 1.17
2037036 - The tuned profile goes into degraded status and ksm.service is displayed in the log.
2037061 - aws and gcp CredentialsRequest manifests missing ServiceAccountNames list for cluster-api
2037073 - Alertmanager container fails to start because of startup probe never being successful
2037075 - Builds do not support CSI volumes
2037167 - Some log level in ibm-vpc-block-csi-controller are hard code
2037168 - IBM-specific Deployment manifest for package-server-manager should be excluded on non-IBM cluster-profiles
2037182 - PingSource badge color is not matched with knativeEventing color
2037203 - "Running VMs" card is too small in Virtualization Overview
2037209 - [IPI on Alibabacloud] worker nodes are put in the default resource group unexpectedly
2037237 - Add "This is a CD-ROM boot source" to customize wizard
2037241 - default TTL for noobaa cache buckets should be 0
2037246 - Cannot customize auto-update boot source
2037276 - [IBMCLOUD] vpc-node-label-updater may fail to label nodes appropriately
2037288 - Remove stale image reference
2037331 - Ensure the ccoctl behaviors are similar between aws and gcp on the existing resources
2037483 - Rbacs for Pods within the CBO should be more restrictive
2037484 - Bump dependencies to k8s 1.23
2037554 - Mismatched wave number error message should include the wave numbers that are in conflict
2037622 - [4.10-Alibaba CSI driver][Restore size for volumesnapshot/volumesnapshotcontent is showing as 0 in Snapshot feature for Alibaba platform]
2037635 - impossible to configure custom certs for default console route in ingress config
2037637 - configure custom certificate for default console route doesn't take effect for OCP >= 4.8
2037638 - Builds do not support CSI volumes as volume sources
2037664 - text formatting issue in Installed Operators list table
2037680 - [IPI on Alibabacloud] sometimes operator 'cloud-controller-manager' tells empty VERSION, due to conflicts on listening tcp :8080
2037689 - [IPI on Alibabacloud] sometimes operator 'cloud-controller-manager' tells empty VERSION, due to conflicts on listening tcp :8080
2037801 - Serverless installation is failing on CI jobs for e2e tests
2037813 - Metal Day 1 Networking - networkConfig Field Only Accepts String Format
2037856 - use lease for leader election
2037891 - 403 Forbidden error shows for all the graphs in each grafana dashboard after upgrade from 4.9 to 4.10
2037903 - Alibaba Cloud: delete-ram-user requires the credentials-requests
2037904 - upgrade operator deployment failed due to memory limit too low for manager container
2038021 - [4.10-Alibaba CSI driver][Default volumesnapshot class is not added/present after successful cluster installation]
2038034 - non-privileged user cannot see auto-update boot source
2038053 - Bump dependencies to k8s 1.23
2038088 - Remove ipa-downloader references
2038160 - The default project missed the annotation : openshift.io/node-selector: ""
2038166 - Starting from Go 1.17 invalid certificates will render a cluster non-functional
2038196 - must-gather is missing collecting some metal3 resources
2038240 - Error when configuring a file using permissions bigger than decimal 511 (octal 0777)
2038253 - Validator Policies are long lived
2038272 - Failures to build a PreprovisioningImage are not reported
2038384 - Azure Default Instance Types are Incorrect
2038389 - Failing test: [sig-arch] events should not repeat pathologically
2038412 - Import page calls the git file list unnecessarily twice from GitHub/GitLab/Bitbucket
2038465 - Upgrade chromedriver to 90.x to support Mac M1 chips
2038481 - kube-controller-manager-guard and openshift-kube-scheduler-guard pods being deleted and restarted on a cordoned node when drained
2038596 - Auto egressIP for OVN cluster on GCP: After egressIP object is deleted, egressIP still takes effect
2038663 - update kubevirt-plugin OWNERS
2038691 - [AUTH-8] Panic on user login when the user belongs to a group in the IdP side and the group already exists via "oc adm groups new"
2038705 - Update ptp reviewers
2038761 - Open Observe->Targets page, wait for a while, page become blank
2038768 - All the filters on the Observe->Targets page can't work
2038772 - Some monitors failed to display on Observe->Targets page
2038793 - [SDN EgressIP] After reboot egress node, the egressip was lost from egress node
2038827 - should add user containers in /etc/subuid and /etc/subgid to support run pods in user namespaces
2038832 - New templates for centos stream8 are missing registry suggestions in create vm wizard
2038840 - [SDN EgressIP]cloud-network-config-controller pod was CrashLoopBackOff after some operation
2038864 - E2E tests fail because multi-hop-net was not created
2038879 - All Builds are getting listed in DeploymentConfig under workloads on OpenShift Console
2038934 - CSI driver operators should use the trusted CA bundle when cluster proxy is configured
2038968 - Move feature gates from a carry patch to openshift/api
2039056 - Layout issue with breadcrumbs on API explorer page
2039057 - Kind column is not wide enough in API explorer page
2039064 - Bulk Import e2e test flaking at a high rate
2039065 - Diagnose and fix Bulk Import e2e test that was previously disabled
2039085 - Cloud credential operator configuration failing to apply in hypershift/ROKS clusters
2039099 - [OVN EgressIP GCP] After reboot egress node, egressip that was previously assigned got lost
2039109 - [FJ OCP4.10 Bug]: startironic.sh failed to pull the image of image-customization container when behind a proxy
2039119 - CVO hotloops on Service openshift-monitoring/cluster-monitoring-operator
2039170 - [upgrade]Error shown on registry operator "missing the cloud-provider-config configmap" after upgrade
2039227 - Improve image customization server parameter passing during installation
2039241 - Improve image customization server parameter passing during installation
2039244 - Helm Release revision history page crashes the UI
2039294 - SDN controller metrics cannot be consumed correctly by prometheus
2039311 - oc Does Not Describe Build CSI Volumes
2039315 - Helm release list page should only fetch secrets for deployed charts
2039321 - SDN controller metrics are not being consumed by prometheus
2039330 - Create NMState button doesn't work in OperatorHub web console
2039339 - cluster-ingress-operator should report Unupgradeable if user has modified the aws resources annotations
2039345 - CNO does not verify the minimum MTU value for IPv6/dual-stack clusters.
2039359 - oc adm prune deployments can't prune the RS where the associated Deployment no longer exists
2039382 - gather_metallb_logs does not have execution permission
2039406 - logout from rest session after vsphere operator sync is finished
2039408 - Add GCP region northamerica-northeast2 to allowed regions
2039414 - Cannot see the weights increased for NodeAffinity, InterPodAffinity, TaintandToleration
2039425 - No need to set KlusterletAddonConfig CR applicationManager->enabled: true in RAN ztp deployment
2039491 - oc - git:// protocol used in unit tests
2039516 - Bump OVN to ovn21.12-21.12.0-25
2039529 - Project Dashboard Resource Quotas Card empty state test flaking at a high rate
2039534 - Diagnose and fix Project Dashboard Resource Quotas Card test that was previously disabled
2039541 - Resolv-prepender script duplicating entries
2039586 - [e2e] update centos8 to centos stream8
2039618 - VM created from SAP HANA template leads to 404 page if leave one network parameter empty
2039619 - [AWS] In tree provisioner storageclass aws disk type should contain 'gp3' and csi provisioner storageclass default aws disk type should be 'gp3'
2039670 - Create PDBs for control plane components
2039678 - Page goes blank when create image pull secret
2039689 - [IPI on Alibabacloud] Pay-by-specification NAT is no longer supported
2039743 - React missing key warning when open operator hub detail page (and maybe others as well)
2039756 - React missing key warning when open KnativeServing details
2039770 - Observe dashboard doesn't react on time-range changes after browser reload when perspective is changed in another tab
2039776 - Observe dashboard shows nothing if the URL links to an non existing dashboard
2039781 - [GSS] OBC is not visible by admin of a Project on Console
2039798 - Contextual binding with Operator backed service creates visual connector instead of Service binding connector
2039868 - Insights Advisor widget is not in the disabled state when the Insights Operator is disabled
2039880 - Log level too low for control plane metrics
2039919 - Add E2E test for router compression feature
2039981 - ZTP for standard clusters installs stalld on master nodes
2040132 - Flag --port has been deprecated, This flag has no effect now and will be removed in v1.24. You can use --secure-port instead
2040136 - external-dns-operator pod keeps restarting and reports error: timed out waiting for cache to be synced
2040143 - [IPI on Alibabacloud] suggest to remove region "cn-nanjing" or provide better error message
2040150 - Update ConfigMap keys for IBM HPCS
2040160 - [IPI on Alibabacloud] installation fails when region does not support pay-by-bandwidth
2040285 - Bump build-machinery-go for console-operator to pickup change in yaml-patch repository
2040357 - bump OVN to ovn-2021-21.12.0-11.el8fdp
2040376 - "unknown instance type" error for supported m6i.xlarge instance
2040394 - Controller: enqueue the failed configmap till services update
2040467 - Cannot build ztp-site-generator container image
2040504 - Change AWS EBS GP3 IOPS in MachineSet doesn't take effect in OpenShift 4
2040521 - RouterCertsDegraded certificate could not validate route hostname v4-0-config-system-custom-router-certs.apps
2040535 - Auto-update boot source is not available in customize wizard
2040540 - ovs hardware offload: ovsargs format error when adding vf netdev name
2040603 - rhel worker scaleup playbook failed because missing some dependency of podman
2040616 - rolebindings page doesn't load for normal users
2040620 - [MAPO] Error pulling MAPO image on installation
2040653 - Topology sidebar warns that another component is updated while rendering
2040655 - User settings update fails when selecting application in topology sidebar
2040661 - Different react warnings about updating state on unmounted components when leaving topology
2040670 - Permafailing CI job: periodic-ci-openshift-release-master-nightly-4.10-e2e-gcp-libvirt-cert-rotation
2040671 - [Feature:IPv6DualStack] most tests are failing in dualstack ipi
2040694 - Three upstream HTTPClientConfig struct fields missing in the operator
2040705 - Du policy for standard cluster runs the PTP daemon on masters and workers
2040710 - cluster-baremetal-operator cannot update BMC subscription CR
2040741 - Add CI test(s) to ensure that metal3 components are deployed in vSphere, OpenStack and None platforms
2040782 - Import YAML page blocks input with more than one generateName attribute
2040783 - The Import from YAML summary page doesn't show the resource name if created via generateName attribute
2040791 - Default PGT policies must be 'inform' to integrate with the Lifecycle Operator
2040793 - Fix snapshot e2e failures
2040880 - do not block upgrades if we can't connect to vcenter
2041087 - MetalLB: MetalLB CR is not upgraded automatically from 4.9 to 4.10
2041093 - autounattend.xml missing
2041204 - link to templates in virtualization-cluster-overview inventory card is to all templates
2041319 - [IPI on Alibabacloud] installation in region "cn-shanghai" failed, due to "Resource alicloud_vswitch CreateVSwitch Failed...InvalidCidrBlock.Overlapped"
2041326 - Should bump cluster-kube-descheduler-operator to kubernetes version V1.23
2041329 - aws and gcp CredentialsRequest manifests missing ServiceAccountNames list for cloud-network-config-controller
2041361 - [IPI on Alibabacloud] Disable session persistence and remove bandwidth peak of listener
2041441 - Provision volume with size 3000Gi even if sizeRange: '[10-2000]GiB' in storageclass on IBM cloud
2041466 - Kubedescheduler version is missing from the operator logs
2041475 - React components should have a (mostly) unique name in react dev tools to simplify code analyses
2041483 - MetallB: quay.io/openshift/origin-kube-rbac-proxy:4.10 deploy Metallb CR is missing (controller and speaker pods)
2041492 - Spacing between resources in inventory card is too small
2041509 - GCP Cloud provider components should use K8s 1.23 dependencies
2041510 - cluster-baremetal-operator doesn't run baremetal-operator's subscription webhook
2041541 - audit: ManagedFields are dropped using API not annotation
2041546 - ovnkube: set election timer at RAFT cluster creation time
2041554 - use lease for leader election
2041581 - KubeDescheduler operator log shows "Use of insecure cipher detected"
2041583 - etcd and api server cpu mask interferes with a guaranteed workload
2041598 - Including CA bundle in Azure Stack cloud config causes MCO failure
2041605 - Dynamic Plugins: discrepancy in proxy alias documentation/implementation
2041620 - bundle CSV alm-examples does not parse
2041641 - Fix inotify leak and kubelet retaining memory
2041671 - Delete templates leads to 404 page
2041694 - [IPI on Alibabacloud] installation fails when region does not support the cloud_essd disk category
2041734 - ovs hwol: VFs are unbind when switchdev mode is enabled
2041750 - [IPI on Alibabacloud] trying "create install-config" with region "cn-wulanchabu (China (Ulanqab))" (or "ap-southeast-6 (Philippines (Manila))", "cn-guangzhou (China (Guangzhou))") failed due to invalid endpoint
2041763 - The Observe > Alerting pages no longer have their default sort order applied
2041830 - CI: ovn-kubernetes-master-e2e-aws-ovn-windows is broken
2041854 - Communities / Local prefs are applied to all the services regardless of the pool, and only one community is applied
2041882 - cloud-network-config operator can't work normal on GCP workload identity cluster
2041888 - Intermittent incorrect build to run correlation, leading to run status updates applied to wrong build, builds stuck in non-terminal phases
2041926 - [IPI on Alibabacloud] Installer ignores public zone when it does not exist
2041971 - [vsphere] Reconciliation of mutating webhooks didn't happen
2041989 - CredentialsRequest manifests being installed for ibm-cloud-managed profile
2041999 - [PROXY] external dns pod cannot recognize custom proxy CA
2042001 - unexpectedly found multiple load balancers
2042029 - kubedescheduler fails to install completely
2042036 - [IBMCLOUD] "openshift-install explain installconfig.platform.ibmcloud" contains not yet supported custom vpc parameters
2042049 - Seeing warning related to unrecognized feature gate in kubescheduler & KCM logs
2042059 - update discovery burst to reflect lots of CRDs on openshift clusters
2042069 - Revert toolbox to rhcos-toolbox
2042169 - Can not delete egressnetworkpolicy in Foreground propagation
2042181 - MetalLB: User should not be allowed add same bgp advertisement twice in BGP address pool
2042265 - [IBM]"--scale-down-utilization-threshold" doesn't work on IBMCloud
2042274 - Storage API should be used when creating a PVC
2042315 - Baremetal IPI deployment with IPv6 control plane and disabled provisioning network fails as the nodes do not pass introspection
2042366 - Lifecycle hooks should be independently managed
2042370 - [IPI on Alibabacloud] installer panics when the zone does not have an enhanced NAT gateway
2042382 - [e2e][automation] CI takes more than 2 hours to run
2042395 - Add prerequisites for active health checks test
2042438 - Missing rpms in openstack-installer image
2042466 - Selection does not happen when switching from Topology Graph to List View
2042493 - No way to verify if IPs with leading zeros are still valid in the apiserver
2042567 - insufficient info on CodeReady Containers configuration
2042600 - Alone, the io.kubernetes.cri-o.Devices option poses a security risk
2042619 - Overview page of the console is broken for hypershift clusters
2042655 - [IPI on Alibabacloud] cluster becomes unusable if there is only one kube-apiserver pod running
2042711 - [IBMCloud] Machine Deletion Hook cannot work on IBMCloud
2042715 - [AliCloud] Machine Deletion Hook cannot work on AliCloud
2042770 - [IPI on Alibabacloud] with vpcID & vswitchIDs specified, the installer would still try creating NAT gateway unexpectedly
2042829 - Topology performance: HPA was fetched for each Deployment (Pod Ring)
2042851 - Create template from SAP HANA template flow - VM is created instead of a new template
2042906 - Edit machineset with same machine deletion hook name succeed
2042960 - azure-file CI fails with "gid(0) in storageClass and pod fsgroup(1000) are not equal"
2043003 - [IPI on Alibabacloud] 'destroy cluster' of a failed installation (bug2041694) stuck after 'stage=Nat gateways'
2043042 - [Serial] [sig-auth][Feature:OAuthServer] [RequestHeaders] [IdP] test RequestHeaders IdP [Suite:openshift/conformance/serial]
2043043 - Cluster Autoscaler should use K8s 1.23 dependencies
2043064 - Topology performance: Unnecessary rerenderings in topology nodes (unchanged mobx props)
2043078 - Favorite system projects not visible in the project selector after toggling "Show default projects".
2043117 - Recommended operators links are erroneously treated as external
2043130 - Update CSI sidecars to the latest release for 4.10
2043234 - Missing validation when creating several BGPPeers with the same peerAddress
2043240 - Sync openshift/descheduler with sigs.k8s.io/descheduler
2043254 - crio does not bind the security profiles directory
2043296 - Ignition fails when reusing existing statically-keyed LUKS volume
2043297 - [4.10] Bootimage bump tracker
2043316 - RHCOS VM fails to boot on Nutanix AOS
2043446 - Rebase aws-efs-utils to the latest upstream version.
2043556 - Add proper ci-operator configuration to ironic and ironic-agent images
2043577 - DPU network operator
2043651 - Fix bug with exp. backoff working correctly when setting nextCheck in vsphere operator
2043675 - Too many machines deleted by cluster autoscaler when scaling down
2043683 - Revert bug 2039344 Ignoring IPv6 addresses against etcd cert validation
2043709 - Logging flags no longer being bound to command line
2043721 - Installer bootstrap hosts using outdated kubelet containing bugs
2043731 - [IBMCloud] terraform outputs missing for ibmcloud bootstrap and worker ips for must-gather
2043759 - Bump cluster-ingress-operator to k8s.io/api 1.23
2043780 - Bump router to k8s.io/api 1.23
2043787 - Bump cluster-dns-operator to k8s.io/api 1.23
2043801 - Bump CoreDNS to k8s.io/api 1.23
2043802 - EgressIP stopped working after single egressIP for a netnamespace is switched to the other node of HA pair after the first egress node is shutdown
2043961 - [OVN-K] If pod creation fails, retry doesn't work as expected.
2044201 - Templates golden image parameters names should be supported
2044244 - Builds are failing after upgrading the cluster with builder image [jboss-webserver-5/jws56-openjdk8-openshift-rhel8]
2044248 - [IBMCloud][vpc.block.csi.ibm.io]Cluster common user use the storageclass without parameter “csi.storage.k8s.io/fstype” create pvc,pod successfully but write data to the pod's volume failed of "Permission denied"
2044303 - [ovn][cloud-network-config-controller] cloudprivateipconfigs ips were left after deleting egressip objects
2044347 - Bump to kubernetes 1.23.3
2044481 - collect sharedresource cluster scoped instances with must-gather
2044496 - Unable to create hardware events subscription - failed to add finalizers
2044628 - CVE-2022-21673 grafana: Forward OAuth Identity Token can allow users to access some data sources
2044680 - Additional libovsdb performance and resource consumption fixes
2044704 - Observe > Alerting pages should not show runbook links in 4.10
2044717 - [e2e] improve tests for upstream test environment
2044724 - Remove namespace column on VM list page when a project is selected
2044745 - Upgrading cluster from 4.9 to 4.10 on Azure (ARO) causes the cloud-network-config-controller pod to CrashLoopBackOff
2044808 - machine-config-daemon-pull.service: use cp instead of cat when extracting MCD in OKD
2045024 - CustomNoUpgrade alerts should be ignored
2045112 - vsphere-problem-detector has missing rbac rules for leases
2045199 - SnapShot with Disk Hot-plug hangs
2045561 - Cluster Autoscaler should use the same default Group value as Cluster API
2045591 - Reconciliation of aws pod identity mutating webhook did not happen
2045849 - Add Sprint 212 translations
2045866 - MCO Operator pod spam "Error creating event" warning messages in 4.10
2045878 - Sync upstream 1.16.0 downstream; includes hybrid helm plugin
2045916 - [IBMCloud] Default machine profile in installer is unreliable
2045927 - [FJ OCP4.10 Bug]: Podman failed to pull the IPA image due to the loss of proxy environment
2046025 - [IPI on Alibabacloud] pre-configured alicloud DNS private zone is deleted after destroying cluster, please clarify
2046137 - oc output for unknown commands is not human readable
2046296 - When creating multiple consecutive egressIPs on GCP not all of them get assigned to the instance
2046297 - Bump DB reconnect timeout
2046517 - In Notification drawer, the "Recommendations" header shows when there isn't any recommendations
2046597 - Observe > Targets page may show the wrong service monitor is multiple monitors have the same namespace & label selectors
2046626 - Allow setting custom metrics for Ansible-based Operators
2046683 - [AliCloud]"--scale-down-utilization-threshold" doesn't work on AliCloud
2047025 - Installation fails because of Alibaba CSI driver operator is degraded
2047190 - Bump Alibaba CSI driver for 4.10
2047238 - When using communities and localpreferences together, only localpreference gets applied
2047255 - alibaba: resourceGroupID not found
2047258 - [aws-usgov] fatal error occurred if AMI is not provided for AWS GovCloud regions
2047317 - Update HELM OWNERS files under Dev Console
2047455 - [IBM Cloud] Update custom image os type
2047496 - Add image digest feature
2047779 - do not degrade cluster if storagepolicy creation fails
2047927 - 'oc get project' caused 'Observed a panic: cannot deep copy core.NamespacePhase' when AllRequestBodies is used
2047929 - use lease for leader election
2047975 - [sig-network][Feature:Router] The HAProxy router should override the route host for overridden domains with a custom value [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2048046 - New route annotation to show another URL or hide topology URL decorator doesn't work for Knative Services
2048048 - Application tab in User Preferences dropdown menus are too wide.
2048050 - Topology list view items are not highlighted on keyboard navigation
2048117 - [IBM]Shouldn't change status.storage.bucket and status.storage.resourceKeyCRN when updating spec.storage.ibmcos with invalid value
2048413 - Bond CNI: Failed to attach Bond NAD to pod
2048443 - Image registry operator panics when finalizes config deletion
2048478 - [alicloud] CCM deploys alibaba-cloud-controller-manager from quay.io/openshift/origin-*
2048484 - SNO: cluster-policy-controller failed to start due to missing serving-cert/tls.crt
2048598 - Web terminal view is broken
2048836 - ovs-configure mis-detecting the ipv6 status on IPv4 only cluster causing Deployment failure
2048891 - Topology page is crashed
2049003 - 4.10: [IBMCloud] ibm-vpc-block-csi-node does not specify an update strategy, only resource requests, or priority class
2049043 - Cannot create VM from template
2049156 - 'oc get project' caused 'Observed a panic: cannot deep copy core.NamespacePhase' when AllRequestBodies is used
2049886 - Placeholder bug for OCP 4.10.0 metadata release
2049890 - Warning annotation for pods with cpu requests or limits on single-node OpenShift cluster without workload partitioning
2050189 - [aws-efs-csi-driver] Merge upstream changes since v1.3.2
2050190 - [aws-ebs-csi-driver] Merge upstream changes since v1.2.0
2050227 - Installation on PSI fails with: 'openstack platform does not have the required standard-attr-tag network extension'
2050247 - Failing test in periodics: [sig-network] Services should respect internalTrafficPolicy=Local Pod and Node, to Pod (hostNetwork: true) [Feature:ServiceInternalTrafficPolicy] [Skipped:Network/OVNKubernetes] [Suite:openshift/conformance/parallel] [Suite:k8s]
2050250 - Install fails to bootstrap, complaining about DefragControllerDegraded and sad members
2050310 - ContainerCreateError when trying to launch large (>500) numbers of pods across nodes
2050370 - alert data for burn budget needs to be updated to prevent regression
2050393 - ZTP missing support for local image registry and custom machine config
2050557 - Can not push images to image-registry when enabling KMS encryption in AlibabaCloud
2050737 - Remove metrics and events for master port offsets
2050801 - Vsphere upi tries to access vsphere during manifests generation phase
2050883 - Logger object in LSO does not log source location accurately
2051692 - co/image-registry is degrade because ImagePrunerDegraded: Job has reached the specified backoff limit
2052062 - Whereabouts should implement client-go 1.22+
2052125 - [4.10] Crio appears to be coredumping in some scenarios
2052210 - [aws-c2s] kube-apiserver crashloops due to missing cloud config
2052339 - Failing webhooks will block an upgrade to 4.10 mid-way through the upgrade.
2052458 - [IBM Cloud] ibm-vpc-block-csi-controller does not specify an update strategy, priority class, or only resource requests
2052598 - kube-scheduler should use configmap lease
2052599 - kube-controller-manger should use configmap lease
2052600 - Failed to scaleup RHEL machine against OVN cluster due to jq tool is required by configure-ovs.sh
2052609 - [vSphere CSI driver Operator] RWX volumes counts metrics vsphere_rwx_volumes_total not valid
2052611 - MetalLB: BGPPeer object does not have ability to set ebgpMultiHop
2052612 - MetalLB: Webhook Validation: Two BGPPeers instances can have different router ID set.
2052644 - Infinite OAuth redirect loop post-upgrade to 4.10.0-rc.1
2052666 - [4.10.z] change gitmodules to rhcos-4.10 branch
2052756 - [4.10] PVs are not being cleaned up after PVC deletion
2053175 - oc adm catalog mirror throws 'missing signature key' error when using file://local/index
2053218 - ImagePull fails with error "unable to pull manifest from example.com/busy.box:v5 invalid reference format"
2053252 - Sidepanel for Connectors/workloads in topology shows invalid tabs
2053268 - inability to detect static lifecycle failure
2053314 - requestheader IDP test doesn't wait for cleanup, causing high failure rates
2053323 - OpenShift-Ansible BYOH Unit Tests are Broken
2053339 - Remove dev preview badge from IBM FlashSystem deployment windows
2053751 - ztp-site-generate container is missing convenience entrypoint
2053945 - [4.10] Failed to apply sriov policy on intel nics
2054109 - Missing "app" label
2054154 - RoleBinding in project without subject is causing "Project access" page to fail
2054244 - Latest pipeline run should be listed on the top of the pipeline run list
2054288 - console-master-e2e-gcp-console is broken
2054562 - DPU network operator 4.10 branch need to sync with master
2054897 - Unable to deploy hw-event-proxy operator
2055193 - e2e-metal-ipi-serial-ovn-ipv6 is failing frequently
2055358 - Summary Interval Hardcoded in PTP Operator if Set in the Global Body Instead of Command Line
2055371 - Remove Check which enforces summary_interval must match logSyncInterval
2055689 - [ibm]Operator storage PROGRESSING and DEGRADED is true during fresh install for ocp4.11
2055894 - CCO mint mode will not work for Azure after sunsetting of Active Directory Graph API
2056441 - AWS EFS CSI driver should use the trusted CA bundle when cluster proxy is configured
2056479 - ovirt-csi-driver-node pods are crashing intermittently
2056572 - reconcilePrecaching error: cannot list resource "clusterserviceversions" in API group "operators.coreos.com" at the cluster scope"
2056629 - [4.10] EFS CSI driver can't unmount volumes with "wait: no child processes"
2056878 - (dummy bug) ovn-kubernetes ExternalTrafficPolicy still SNATs
2056928 - Ingresscontroller LB scope change behaviour differs for different values of aws-load-balancer-internal annotation
2056948 - post 1.23 rebase: regression in service-load balancer reliability
2057438 - Service Level Agreement (SLA) always show 'Unknown'
2057721 - Fix Proxy support in RHACM 2.4.2
2057724 - Image creation fails when NMstateConfig CR is empty
2058641 - [4.10] Pod density test causing problems when using kube-burner
2059761 - 4.9.23-s390x-machine-os-content manifest invalid when mirroring content for disconnected install
2060610 - Broken access to public images: Unable to connect to the server: no basic auth credentials
2060956 - service domain can't be resolved when networkpolicy is used in OCP 4.10-rc
- References:
https://access.redhat.com/security/cve/CVE-2014-3577 https://access.redhat.com/security/cve/CVE-2016-10228 https://access.redhat.com/security/cve/CVE-2017-14502 https://access.redhat.com/security/cve/CVE-2018-20843 https://access.redhat.com/security/cve/CVE-2018-1000858 https://access.redhat.com/security/cve/CVE-2019-8625 https://access.redhat.com/security/cve/CVE-2019-8710 https://access.redhat.com/security/cve/CVE-2019-8720 https://access.redhat.com/security/cve/CVE-2019-8743 https://access.redhat.com/security/cve/CVE-2019-8764 https://access.redhat.com/security/cve/CVE-2019-8766 https://access.redhat.com/security/cve/CVE-2019-8769 https://access.redhat.com/security/cve/CVE-2019-8771 https://access.redhat.com/security/cve/CVE-2019-8782 https://access.redhat.com/security/cve/CVE-2019-8783 https://access.redhat.com/security/cve/CVE-2019-8808 https://access.redhat.com/security/cve/CVE-2019-8811 https://access.redhat.com/security/cve/CVE-2019-8812 https://access.redhat.com/security/cve/CVE-2019-8813 https://access.redhat.com/security/cve/CVE-2019-8814 https://access.redhat.com/security/cve/CVE-2019-8815 https://access.redhat.com/security/cve/CVE-2019-8816 https://access.redhat.com/security/cve/CVE-2019-8819 https://access.redhat.com/security/cve/CVE-2019-8820 https://access.redhat.com/security/cve/CVE-2019-8823 https://access.redhat.com/security/cve/CVE-2019-8835 https://access.redhat.com/security/cve/CVE-2019-8844 https://access.redhat.com/security/cve/CVE-2019-8846 https://access.redhat.com/security/cve/CVE-2019-9169 https://access.redhat.com/security/cve/CVE-2019-13050 https://access.redhat.com/security/cve/CVE-2019-13627 https://access.redhat.com/security/cve/CVE-2019-14889 https://access.redhat.com/security/cve/CVE-2019-15903 https://access.redhat.com/security/cve/CVE-2019-19906 https://access.redhat.com/security/cve/CVE-2019-20454 https://access.redhat.com/security/cve/CVE-2019-20807 https://access.redhat.com/security/cve/CVE-2019-25013 
https://access.redhat.com/security/cve/CVE-2020-1730 https://access.redhat.com/security/cve/CVE-2020-3862 https://access.redhat.com/security/cve/CVE-2020-3864 https://access.redhat.com/security/cve/CVE-2020-3865 https://access.redhat.com/security/cve/CVE-2020-3867 https://access.redhat.com/security/cve/CVE-2020-3868 https://access.redhat.com/security/cve/CVE-2020-3885 https://access.redhat.com/security/cve/CVE-2020-3894 https://access.redhat.com/security/cve/CVE-2020-3895 https://access.redhat.com/security/cve/CVE-2020-3897 https://access.redhat.com/security/cve/CVE-2020-3899 https://access.redhat.com/security/cve/CVE-2020-3900 https://access.redhat.com/security/cve/CVE-2020-3901 https://access.redhat.com/security/cve/CVE-2020-3902 https://access.redhat.com/security/cve/CVE-2020-8927 https://access.redhat.com/security/cve/CVE-2020-9802 https://access.redhat.com/security/cve/CVE-2020-9803 https://access.redhat.com/security/cve/CVE-2020-9805 https://access.redhat.com/security/cve/CVE-2020-9806 https://access.redhat.com/security/cve/CVE-2020-9807 https://access.redhat.com/security/cve/CVE-2020-9843 https://access.redhat.com/security/cve/CVE-2020-9850 https://access.redhat.com/security/cve/CVE-2020-9862 https://access.redhat.com/security/cve/CVE-2020-9893 https://access.redhat.com/security/cve/CVE-2020-9894 https://access.redhat.com/security/cve/CVE-2020-9895 https://access.redhat.com/security/cve/CVE-2020-9915 https://access.redhat.com/security/cve/CVE-2020-9925 https://access.redhat.com/security/cve/CVE-2020-9952 https://access.redhat.com/security/cve/CVE-2020-10018 https://access.redhat.com/security/cve/CVE-2020-11793 https://access.redhat.com/security/cve/CVE-2020-13434 https://access.redhat.com/security/cve/CVE-2020-14391 https://access.redhat.com/security/cve/CVE-2020-15358 https://access.redhat.com/security/cve/CVE-2020-15503 https://access.redhat.com/security/cve/CVE-2020-25660 https://access.redhat.com/security/cve/CVE-2020-25677 
https://access.redhat.com/security/cve/CVE-2020-27618 https://access.redhat.com/security/cve/CVE-2020-27781 https://access.redhat.com/security/cve/CVE-2020-29361 https://access.redhat.com/security/cve/CVE-2020-29362 https://access.redhat.com/security/cve/CVE-2020-29363 https://access.redhat.com/security/cve/CVE-2021-3121 https://access.redhat.com/security/cve/CVE-2021-3326 https://access.redhat.com/security/cve/CVE-2021-3449 https://access.redhat.com/security/cve/CVE-2021-3450 https://access.redhat.com/security/cve/CVE-2021-3516 https://access.redhat.com/security/cve/CVE-2021-3517 https://access.redhat.com/security/cve/CVE-2021-3518 https://access.redhat.com/security/cve/CVE-2021-3520 https://access.redhat.com/security/cve/CVE-2021-3521 https://access.redhat.com/security/cve/CVE-2021-3537 https://access.redhat.com/security/cve/CVE-2021-3541 https://access.redhat.com/security/cve/CVE-2021-3733 https://access.redhat.com/security/cve/CVE-2021-3749 https://access.redhat.com/security/cve/CVE-2021-20305 https://access.redhat.com/security/cve/CVE-2021-21684 https://access.redhat.com/security/cve/CVE-2021-22946 https://access.redhat.com/security/cve/CVE-2021-22947 https://access.redhat.com/security/cve/CVE-2021-25215 https://access.redhat.com/security/cve/CVE-2021-27218 https://access.redhat.com/security/cve/CVE-2021-30666 https://access.redhat.com/security/cve/CVE-2021-30761 https://access.redhat.com/security/cve/CVE-2021-30762 https://access.redhat.com/security/cve/CVE-2021-33928 https://access.redhat.com/security/cve/CVE-2021-33929 https://access.redhat.com/security/cve/CVE-2021-33930 https://access.redhat.com/security/cve/CVE-2021-33938 https://access.redhat.com/security/cve/CVE-2021-36222 https://access.redhat.com/security/cve/CVE-2021-37750 https://access.redhat.com/security/cve/CVE-2021-39226 https://access.redhat.com/security/cve/CVE-2021-41190 https://access.redhat.com/security/cve/CVE-2021-43813 https://access.redhat.com/security/cve/CVE-2021-44716 
https://access.redhat.com/security/cve/CVE-2021-44717 https://access.redhat.com/security/cve/CVE-2022-0532 https://access.redhat.com/security/cve/CVE-2022-21673 https://access.redhat.com/security/cve/CVE-2022-24407 https://access.redhat.com/security/updates/classification/#moderate
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2022 Red Hat, Inc.

-- RHSA-announce mailing list RHSA-announce@redhat.com https://listman.redhat.com/mailman/listinfo/rhsa-announce

Bugs fixed (https://bugzilla.redhat.com/):
2004944 - CVE-2021-23440 nodejs-set-value: type confusion allows bypass of CVE-2019-10747
2006009 - CVE-2021-3795 semver-regex: inefficient regular expression complexity
2013652 - RHACM 2.2.10 images
- Description:
Red Hat Advanced Cluster Management for Kubernetes 2.4.0 images
Red Hat Advanced Cluster Management for Kubernetes provides the capabilities to address common challenges that administrators and site reliability engineers face as they work across a range of public and private cloud environments. Clusters and applications are all visible and managed from a single console—with security policy built in. See the following Release Notes documentation, which will be updated shortly for this release, for additional details about this release:
https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_mana gement_for_kubernetes/2.4/html/release_notes/
Security fixes:
- CVE-2021-33623: nodejs-trim-newlines: ReDoS in .end() method
- CVE-2021-32626: redis: Lua scripts can overflow the heap-based Lua stack
- CVE-2021-32627: redis: Integer overflow issue with Streams
- CVE-2021-32628: redis: Integer overflow bug in the ziplist data structure
- CVE-2021-32672: redis: Out of bounds read in lua debugger protocol parser
- CVE-2021-32675: redis: Denial of service via Redis Standard Protocol (RESP) request
- CVE-2021-32687: redis: Integer overflow issue with intsets
- CVE-2021-32690: helm: information disclosure vulnerability
- CVE-2021-32803: nodejs-tar: Insufficient symlink protection allowing arbitrary file creation and overwrite
- CVE-2021-32804: nodejs-tar: Insufficient absolute path sanitization allowing arbitrary file creation and overwrite
- CVE-2021-23017: nginx: Off-by-one in ngx_resolver_copy() when labels are followed by a pointer to a root domain name
- CVE-2021-3711: openssl: SM2 Decryption Buffer Overflow
- CVE-2021-3712: openssl: Read buffer overruns processing ASN.1 strings
- CVE-2021-3749: nodejs-axios: Regular expression denial of service in trim function
- CVE-2021-41099: redis: Integer overflow issue with strings
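The two nodejs-tar issues above (CVE-2021-32803/32804) are classic archive path-traversal bugs: archive members carrying absolute paths or ".." components can write outside the extraction directory. As a minimal sketch of the defensive check, using Python's tarfile module purely for illustration (the affected package is nodejs-tar, not Python):

```python
import io
import tarfile

def unsafe_members(tar):
    """Yield archive members whose path could escape the extraction dir."""
    for member in tar.getmembers():
        # Flag absolute paths and any ".." path component.
        if member.name.startswith("/") or ".." in member.name.split("/"):
            yield member

# Build a malicious archive in memory: one member with an absolute path.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    info = tarfile.TarInfo(name="/etc/evil")
    payload = b"owned"
    info.size = len(payload)
    tar.addfile(info, io.BytesIO(payload))
buf.seek(0)

with tarfile.open(fileobj=buf, mode="r") as tar:
    bad = [m.name for m in unsafe_members(tar)]

print(bad)  # ['/etc/evil']
```

The fixed nodejs-tar releases apply the same idea: member paths are sanitized before any file or symlink is created on disk.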
Bug fixes:
- RFE ACM Application management UI doesn't reflect object status (Bugzilla #1965321)
- RHACM 2.4 files (Bugzilla #1983663)
- Hive Operator CrashLoopBackOff when deploying ACM with latest downstream 2.4 (Bugzilla #1993366)
- submariner-addon pod failing in RHACM 2.4 latest ds snapshot (Bugzilla #1994668)
- ACM 2.4 install on OCP 4.9 ipv6 disconnected hub fails due to multicluster pod in clb (Bugzilla #2000274)
- pre-network-manager-config failed due to timeout when static config is used (Bugzilla #2003915)
- InfraEnv condition does not reflect the actual error message (Bugzilla #2009204, #2010030)
- Flaky test point to a nil pointer conditions list (Bugzilla #2010175)
- InfraEnv status shows 'Failed to create image: internal error (Bugzilla #2010272)
- subctl diagnose firewall intra-cluster - failed VXLAN checks (Bugzilla #2013157)
- pre-network-manager-config failed due to timeout when static config is used (Bugzilla #2014084)
Bugs fixed (https://bugzilla.redhat.com/):
1963121 - CVE-2021-23017 nginx: Off-by-one in ngx_resolver_copy() when labels are followed by a pointer to a root domain name
1965321 - RFE ACM Application management UI doesn't reflect object status
1966615 - CVE-2021-33623 nodejs-trim-newlines: ReDoS in .end() method
1978144 - CVE-2021-32690 helm: information disclosure vulnerability
1983663 - RHACM 2.4.0 images
1990409 - CVE-2021-32804 nodejs-tar: Insufficient absolute path sanitization allowing arbitrary file creation and overwrite
1990415 - CVE-2021-32803 nodejs-tar: Insufficient symlink protection allowing arbitrary file creation and overwrite
1993366 - Hive Operator CrashLoopBackOff when deploying ACM with latest downstream 2.4
1994668 - submariner-addon pod failing in RHACM 2.4 latest ds snapshot
1995623 - CVE-2021-3711 openssl: SM2 Decryption Buffer Overflow
1995634 - CVE-2021-3712 openssl: Read buffer overruns processing ASN.1 strings
1999784 - CVE-2021-3749 nodejs-axios: Regular expression denial of service in trim function
2000274 - ACM 2.4 install on OCP 4.9 ipv6 disconnected hub fails due to multicluster pod in clb
2003915 - pre-network-manager-config failed due to timeout when static config is used
2009204 - InfraEnv condition does not reflect the actual error message
2010030 - InfraEnv condition does not reflect the actual error message
2010175 - Flaky test point to a nil pointer conditions list
2010272 - InfraEnv status shows 'Failed to create image: internal error
2010991 - CVE-2021-32687 redis: Integer overflow issue with intsets
2011000 - CVE-2021-32675 redis: Denial of service via Redis Standard Protocol (RESP) request
2011001 - CVE-2021-32672 redis: Out of bounds read in lua debugger protocol parser
2011004 - CVE-2021-32628 redis: Integer overflow bug in the ziplist data structure
2011010 - CVE-2021-32627 redis: Integer overflow issue with Streams
2011017 - CVE-2021-32626 redis: Lua scripts can overflow the heap-based Lua stack
2011020 - CVE-2021-41099 redis: Integer overflow issue with strings
2013157 - subctl diagnose firewall intra-cluster - failed VXLAN checks
2014084 - pre-network-manager-config failed due to timeout when static config is used
- Relevant releases/architectures:
Red Hat CodeReady Linux Builder (v. 8) - aarch64, noarch, ppc64le, s390x, x86_64
- Description:
Python is an interpreted, interactive, object-oriented programming language, which includes modules, classes, exceptions, very high level dynamic data types and dynamic typing. Python supports interfaces to many system calls and libraries, as well as to various windowing systems.
The following packages have been upgraded to a later upstream version: python38 (3.8), python38-devel (3.8). (BZ#1997680, BZ#1997860)
Security Fix(es):
- python: urllib: Regular expression DoS in AbstractBasicAuthHandler (CVE-2021-3733)
- python-lxml: HTML Cleaner allows crafted and SVG embedded scripts to pass through (CVE-2021-43818)
- python: urllib.parse does not sanitize URLs containing ASCII newline and tabs (CVE-2022-0391)
- python: urllib: HTTP client possible infinite loop on a 100 Continue response (CVE-2021-3737)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
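CVE-2022-0391 above concerns urllib.parse accepting ASCII tab and newline characters embedded in URLs, which could smuggle a different host or path past downstream validators that inspect the raw string. On patched interpreters (e.g. CPython 3.8.11 and later), urlsplit strips these characters before parsing; a quick check of that behavior:

```python
from urllib.parse import urlsplit

# A URL with embedded tab/CR/LF characters.  Before the fix these survived
# parsing; patched interpreters strip them, so the result is unambiguous.
url = "http://example.com/a\tb\r\nc"
parts = urlsplit(url)
print(parts.netloc, parts.path)  # example.com /abc  (on patched interpreters)
```

Code that must also run on unpatched interpreters should reject URLs containing "\t", "\r", or "\n" explicitly rather than rely on this stripping.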
Additional Changes:
For detailed information on changes in this release, see the Red Hat Enterprise Linux 8.6 Release Notes linked from the References section. Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/11258
- Bugs fixed (https://bugzilla.redhat.com/):
1995162 - CVE-2021-3737 python: urllib: HTTP client possible infinite loop on a 100 Continue response
1995234 - CVE-2021-3733 python: urllib: Regular expression DoS in AbstractBasicAuthHandler
2004587 - Update the python interpreter to the latest security release 3.8.12
2006789 - RHEL 8 Python 3.8: pip contains bundled pre-built exe files in site-packages/pip/_vendor/distlib/
2032569 - CVE-2021-43818 python-lxml: HTML Cleaner allows crafted and SVG embedded scripts to pass through
2047376 - CVE-2022-0391 python: urllib.parse does not sanitize URLs containing ASCII newline and tabs
- Package List:
Red Hat Enterprise Linux AppStream (v. 8):
Source: Cython-0.29.14-4.module+el8.4.0+8888+89bc7e79.src.rpm PyYAML-5.4.1-1.module+el8.5.0+10721+14d8e0d5.src.rpm babel-2.7.0-11.module+el8.5.0+11015+9c1c7c42.src.rpm mod_wsgi-4.6.8-3.module+el8.4.0+8888+89bc7e79.src.rpm numpy-1.17.3-6.module+el8.5.0+12205+a865257a.src.rpm python-PyMySQL-0.10.1-1.module+el8.4.0+9692+8e86ab84.src.rpm python-asn1crypto-1.2.0-3.module+el8.4.0+8888+89bc7e79.src.rpm python-cffi-1.13.2-3.module+el8.4.0+8888+89bc7e79.src.rpm python-chardet-3.0.4-19.module+el8.4.0+8888+89bc7e79.src.rpm python-cryptography-2.8-3.module+el8.4.0+8888+89bc7e79.src.rpm python-idna-2.8-6.module+el8.4.0+8888+89bc7e79.src.rpm python-jinja2-2.10.3-5.module+el8.5.0+10542+ba057329.src.rpm python-lxml-4.4.1-7.module+el8.6.0+13958+214a5473.src.rpm python-markupsafe-1.1.1-6.module+el8.4.0+8888+89bc7e79.src.rpm python-ply-3.11-10.module+el8.4.0+9579+e9717e18.src.rpm python-psutil-5.6.4-4.module+el8.5.0+12031+10ce4870.src.rpm python-psycopg2-2.8.4-4.module+el8.4.0+8888+89bc7e79.src.rpm python-pycparser-2.19-3.module+el8.4.0+8888+89bc7e79.src.rpm python-pysocks-1.7.1-4.module+el8.4.0+8888+89bc7e79.src.rpm python-requests-2.22.0-9.module+el8.4.0+8888+89bc7e79.src.rpm python-urllib3-1.25.7-5.module+el8.5.0+11639+ea5b349d.src.rpm python-wheel-0.33.6-6.module+el8.5.0+12205+a865257a.src.rpm python38-3.8.12-1.module+el8.6.0+12642+c3710b74.src.rpm python3x-pip-19.3.1-5.module+el8.6.0+13002+70cfc74a.src.rpm python3x-setuptools-41.6.0-5.module+el8.5.0+12205+a865257a.src.rpm python3x-six-1.12.0-10.module+el8.4.0+8888+89bc7e79.src.rpm pytz-2019.3-3.module+el8.4.0+8888+89bc7e79.src.rpm scipy-1.3.1-4.module+el8.4.0+8888+89bc7e79.src.rpm
aarch64: Cython-debugsource-0.29.14-4.module+el8.4.0+8888+89bc7e79.aarch64.rpm PyYAML-debugsource-5.4.1-1.module+el8.5.0+10721+14d8e0d5.aarch64.rpm numpy-debugsource-1.17.3-6.module+el8.5.0+12205+a865257a.aarch64.rpm python-cffi-debugsource-1.13.2-3.module+el8.4.0+8888+89bc7e79.aarch64.rpm python-cryptography-debugsource-2.8-3.module+el8.4.0+8888+89bc7e79.aarch64.rpm python-lxml-debugsource-4.4.1-7.module+el8.6.0+13958+214a5473.aarch64.rpm python-markupsafe-debugsource-1.1.1-6.module+el8.4.0+8888+89bc7e79.aarch64.rpm python-psutil-debugsource-5.6.4-4.module+el8.5.0+12031+10ce4870.aarch64.rpm python-psycopg2-debugsource-2.8.4-4.module+el8.4.0+8888+89bc7e79.aarch64.rpm python38-3.8.12-1.module+el8.6.0+12642+c3710b74.aarch64.rpm python38-Cython-0.29.14-4.module+el8.4.0+8888+89bc7e79.aarch64.rpm python38-Cython-debuginfo-0.29.14-4.module+el8.4.0+8888+89bc7e79.aarch64.rpm python38-cffi-1.13.2-3.module+el8.4.0+8888+89bc7e79.aarch64.rpm python38-cffi-debuginfo-1.13.2-3.module+el8.4.0+8888+89bc7e79.aarch64.rpm python38-cryptography-2.8-3.module+el8.4.0+8888+89bc7e79.aarch64.rpm python38-cryptography-debuginfo-2.8-3.module+el8.4.0+8888+89bc7e79.aarch64.rpm python38-debug-3.8.12-1.module+el8.6.0+12642+c3710b74.aarch64.rpm python38-debuginfo-3.8.12-1.module+el8.6.0+12642+c3710b74.aarch64.rpm python38-debugsource-3.8.12-1.module+el8.6.0+12642+c3710b74.aarch64.rpm python38-devel-3.8.12-1.module+el8.6.0+12642+c3710b74.aarch64.rpm python38-idle-3.8.12-1.module+el8.6.0+12642+c3710b74.aarch64.rpm python38-libs-3.8.12-1.module+el8.6.0+12642+c3710b74.aarch64.rpm python38-lxml-4.4.1-7.module+el8.6.0+13958+214a5473.aarch64.rpm python38-lxml-debuginfo-4.4.1-7.module+el8.6.0+13958+214a5473.aarch64.rpm python38-markupsafe-1.1.1-6.module+el8.4.0+8888+89bc7e79.aarch64.rpm python38-markupsafe-debuginfo-1.1.1-6.module+el8.4.0+8888+89bc7e79.aarch64.rpm python38-mod_wsgi-4.6.8-3.module+el8.4.0+8888+89bc7e79.aarch64.rpm python38-numpy-1.17.3-6.module+el8.5.0+12205+a865257a.aarch64.rpm 
python38-numpy-debuginfo-1.17.3-6.module+el8.5.0+12205+a865257a.aarch64.rpm python38-numpy-f2py-1.17.3-6.module+el8.5.0+12205+a865257a.aarch64.rpm python38-psutil-5.6.4-4.module+el8.5.0+12031+10ce4870.aarch64.rpm python38-psutil-debuginfo-5.6.4-4.module+el8.5.0+12031+10ce4870.aarch64.rpm python38-psycopg2-2.8.4-4.module+el8.4.0+8888+89bc7e79.aarch64.rpm python38-psycopg2-debuginfo-2.8.4-4.module+el8.4.0+8888+89bc7e79.aarch64.rpm python38-psycopg2-doc-2.8.4-4.module+el8.4.0+8888+89bc7e79.aarch64.rpm python38-psycopg2-tests-2.8.4-4.module+el8.4.0+8888+89bc7e79.aarch64.rpm python38-pyyaml-5.4.1-1.module+el8.5.0+10721+14d8e0d5.aarch64.rpm python38-pyyaml-debuginfo-5.4.1-1.module+el8.5.0+10721+14d8e0d5.aarch64.rpm python38-scipy-1.3.1-4.module+el8.4.0+8888+89bc7e79.aarch64.rpm python38-scipy-debuginfo-1.3.1-4.module+el8.4.0+8888+89bc7e79.aarch64.rpm python38-test-3.8.12-1.module+el8.6.0+12642+c3710b74.aarch64.rpm python38-tkinter-3.8.12-1.module+el8.6.0+12642+c3710b74.aarch64.rpm scipy-debugsource-1.3.1-4.module+el8.4.0+8888+89bc7e79.aarch64.rpm
noarch: python38-PyMySQL-0.10.1-1.module+el8.4.0+9692+8e86ab84.noarch.rpm python38-asn1crypto-1.2.0-3.module+el8.4.0+8888+89bc7e79.noarch.rpm python38-babel-2.7.0-11.module+el8.5.0+11015+9c1c7c42.noarch.rpm python38-chardet-3.0.4-19.module+el8.4.0+8888+89bc7e79.noarch.rpm python38-idna-2.8-6.module+el8.4.0+8888+89bc7e79.noarch.rpm python38-jinja2-2.10.3-5.module+el8.5.0+10542+ba057329.noarch.rpm python38-numpy-doc-1.17.3-6.module+el8.5.0+12205+a865257a.noarch.rpm python38-pip-19.3.1-5.module+el8.6.0+13002+70cfc74a.noarch.rpm python38-pip-wheel-19.3.1-5.module+el8.6.0+13002+70cfc74a.noarch.rpm python38-ply-3.11-10.module+el8.4.0+9579+e9717e18.noarch.rpm python38-pycparser-2.19-3.module+el8.4.0+8888+89bc7e79.noarch.rpm python38-pysocks-1.7.1-4.module+el8.4.0+8888+89bc7e79.noarch.rpm python38-pytz-2019.3-3.module+el8.4.0+8888+89bc7e79.noarch.rpm python38-requests-2.22.0-9.module+el8.4.0+8888+89bc7e79.noarch.rpm python38-rpm-macros-3.8.12-1.module+el8.6.0+12642+c3710b74.noarch.rpm python38-setuptools-41.6.0-5.module+el8.5.0+12205+a865257a.noarch.rpm python38-setuptools-wheel-41.6.0-5.module+el8.5.0+12205+a865257a.noarch.rpm python38-six-1.12.0-10.module+el8.4.0+8888+89bc7e79.noarch.rpm python38-urllib3-1.25.7-5.module+el8.5.0+11639+ea5b349d.noarch.rpm python38-wheel-0.33.6-6.module+el8.5.0+12205+a865257a.noarch.rpm python38-wheel-wheel-0.33.6-6.module+el8.5.0+12205+a865257a.noarch.rpm
ppc64le: Cython-debugsource-0.29.14-4.module+el8.4.0+8888+89bc7e79.ppc64le.rpm PyYAML-debugsource-5.4.1-1.module+el8.5.0+10721+14d8e0d5.ppc64le.rpm numpy-debugsource-1.17.3-6.module+el8.5.0+12205+a865257a.ppc64le.rpm python-cffi-debugsource-1.13.2-3.module+el8.4.0+8888+89bc7e79.ppc64le.rpm python-cryptography-debugsource-2.8-3.module+el8.4.0+8888+89bc7e79.ppc64le.rpm python-lxml-debugsource-4.4.1-7.module+el8.6.0+13958+214a5473.ppc64le.rpm python-markupsafe-debugsource-1.1.1-6.module+el8.4.0+8888+89bc7e79.ppc64le.rpm python-psutil-debugsource-5.6.4-4.module+el8.5.0+12031+10ce4870.ppc64le.rpm python-psycopg2-debugsource-2.8.4-4.module+el8.4.0+8888+89bc7e79.ppc64le.rpm python38-3.8.12-1.module+el8.6.0+12642+c3710b74.ppc64le.rpm python38-Cython-0.29.14-4.module+el8.4.0+8888+89bc7e79.ppc64le.rpm python38-Cython-debuginfo-0.29.14-4.module+el8.4.0+8888+89bc7e79.ppc64le.rpm python38-cffi-1.13.2-3.module+el8.4.0+8888+89bc7e79.ppc64le.rpm python38-cffi-debuginfo-1.13.2-3.module+el8.4.0+8888+89bc7e79.ppc64le.rpm python38-cryptography-2.8-3.module+el8.4.0+8888+89bc7e79.ppc64le.rpm python38-cryptography-debuginfo-2.8-3.module+el8.4.0+8888+89bc7e79.ppc64le.rpm python38-debug-3.8.12-1.module+el8.6.0+12642+c3710b74.ppc64le.rpm python38-debuginfo-3.8.12-1.module+el8.6.0+12642+c3710b74.ppc64le.rpm python38-debugsource-3.8.12-1.module+el8.6.0+12642+c3710b74.ppc64le.rpm python38-devel-3.8.12-1.module+el8.6.0+12642+c3710b74.ppc64le.rpm python38-idle-3.8.12-1.module+el8.6.0+12642+c3710b74.ppc64le.rpm python38-libs-3.8.12-1.module+el8.6.0+12642+c3710b74.ppc64le.rpm python38-lxml-4.4.1-7.module+el8.6.0+13958+214a5473.ppc64le.rpm python38-lxml-debuginfo-4.4.1-7.module+el8.6.0+13958+214a5473.ppc64le.rpm python38-markupsafe-1.1.1-6.module+el8.4.0+8888+89bc7e79.ppc64le.rpm python38-markupsafe-debuginfo-1.1.1-6.module+el8.4.0+8888+89bc7e79.ppc64le.rpm python38-mod_wsgi-4.6.8-3.module+el8.4.0+8888+89bc7e79.ppc64le.rpm python38-numpy-1.17.3-6.module+el8.5.0+12205+a865257a.ppc64le.rpm 
python38-numpy-debuginfo-1.17.3-6.module+el8.5.0+12205+a865257a.ppc64le.rpm python38-numpy-f2py-1.17.3-6.module+el8.5.0+12205+a865257a.ppc64le.rpm python38-psutil-5.6.4-4.module+el8.5.0+12031+10ce4870.ppc64le.rpm python38-psutil-debuginfo-5.6.4-4.module+el8.5.0+12031+10ce4870.ppc64le.rpm python38-psycopg2-2.8.4-4.module+el8.4.0+8888+89bc7e79.ppc64le.rpm python38-psycopg2-debuginfo-2.8.4-4.module+el8.4.0+8888+89bc7e79.ppc64le.rpm python38-psycopg2-doc-2.8.4-4.module+el8.4.0+8888+89bc7e79.ppc64le.rpm python38-psycopg2-tests-2.8.4-4.module+el8.4.0+8888+89bc7e79.ppc64le.rpm python38-pyyaml-5.4.1-1.module+el8.5.0+10721+14d8e0d5.ppc64le.rpm python38-pyyaml-debuginfo-5.4.1-1.module+el8.5.0+10721+14d8e0d5.ppc64le.rpm python38-scipy-1.3.1-4.module+el8.4.0+8888+89bc7e79.ppc64le.rpm python38-scipy-debuginfo-1.3.1-4.module+el8.4.0+8888+89bc7e79.ppc64le.rpm python38-test-3.8.12-1.module+el8.6.0+12642+c3710b74.ppc64le.rpm python38-tkinter-3.8.12-1.module+el8.6.0+12642+c3710b74.ppc64le.rpm scipy-debugsource-1.3.1-4.module+el8.4.0+8888+89bc7e79.ppc64le.rpm
s390x: Cython-debugsource-0.29.14-4.module+el8.4.0+8888+89bc7e79.s390x.rpm PyYAML-debugsource-5.4.1-1.module+el8.5.0+10721+14d8e0d5.s390x.rpm numpy-debugsource-1.17.3-6.module+el8.5.0+12205+a865257a.s390x.rpm python-cffi-debugsource-1.13.2-3.module+el8.4.0+8888+89bc7e79.s390x.rpm python-cryptography-debugsource-2.8-3.module+el8.4.0+8888+89bc7e79.s390x.rpm python-lxml-debugsource-4.4.1-7.module+el8.6.0+13958+214a5473.s390x.rpm python-markupsafe-debugsource-1.1.1-6.module+el8.4.0+8888+89bc7e79.s390x.rpm python-psutil-debugsource-5.6.4-4.module+el8.5.0+12031+10ce4870.s390x.rpm python-psycopg2-debugsource-2.8.4-4.module+el8.4.0+8888+89bc7e79.s390x.rpm python38-3.8.12-1.module+el8.6.0+12642+c3710b74.s390x.rpm python38-Cython-0.29.14-4.module+el8.4.0+8888+89bc7e79.s390x.rpm python38-Cython-debuginfo-0.29.14-4.module+el8.4.0+8888+89bc7e79.s390x.rpm python38-cffi-1.13.2-3.module+el8.4.0+8888+89bc7e79.s390x.rpm python38-cffi-debuginfo-1.13.2-3.module+el8.4.0+8888+89bc7e79.s390x.rpm python38-cryptography-2.8-3.module+el8.4.0+8888+89bc7e79.s390x.rpm python38-cryptography-debuginfo-2.8-3.module+el8.4.0+8888+89bc7e79.s390x.rpm python38-debug-3.8.12-1.module+el8.6.0+12642+c3710b74.s390x.rpm python38-debuginfo-3.8.12-1.module+el8.6.0+12642+c3710b74.s390x.rpm python38-debugsource-3.8.12-1.module+el8.6.0+12642+c3710b74.s390x.rpm python38-devel-3.8.12-1.module+el8.6.0+12642+c3710b74.s390x.rpm python38-idle-3.8.12-1.module+el8.6.0+12642+c3710b74.s390x.rpm python38-libs-3.8.12-1.module+el8.6.0+12642+c3710b74.s390x.rpm python38-lxml-4.4.1-7.module+el8.6.0+13958+214a5473.s390x.rpm python38-lxml-debuginfo-4.4.1-7.module+el8.6.0+13958+214a5473.s390x.rpm python38-markupsafe-1.1.1-6.module+el8.4.0+8888+89bc7e79.s390x.rpm python38-markupsafe-debuginfo-1.1.1-6.module+el8.4.0+8888+89bc7e79.s390x.rpm python38-mod_wsgi-4.6.8-3.module+el8.4.0+8888+89bc7e79.s390x.rpm python38-numpy-1.17.3-6.module+el8.5.0+12205+a865257a.s390x.rpm 
python38-numpy-debuginfo-1.17.3-6.module+el8.5.0+12205+a865257a.s390x.rpm python38-numpy-f2py-1.17.3-6.module+el8.5.0+12205+a865257a.s390x.rpm python38-psutil-5.6.4-4.module+el8.5.0+12031+10ce4870.s390x.rpm python38-psutil-debuginfo-5.6.4-4.module+el8.5.0+12031+10ce4870.s390x.rpm python38-psycopg2-2.8.4-4.module+el8.4.0+8888+89bc7e79.s390x.rpm python38-psycopg2-debuginfo-2.8.4-4.module+el8.4.0+8888+89bc7e79.s390x.rpm python38-psycopg2-doc-2.8.4-4.module+el8.4.0+8888+89bc7e79.s390x.rpm python38-psycopg2-tests-2.8.4-4.module+el8.4.0+8888+89bc7e79.s390x.rpm python38-pyyaml-5.4.1-1.module+el8.5.0+10721+14d8e0d5.s390x.rpm python38-pyyaml-debuginfo-5.4.1-1.module+el8.5.0+10721+14d8e0d5.s390x.rpm python38-scipy-1.3.1-4.module+el8.4.0+8888+89bc7e79.s390x.rpm python38-scipy-debuginfo-1.3.1-4.module+el8.4.0+8888+89bc7e79.s390x.rpm python38-test-3.8.12-1.module+el8.6.0+12642+c3710b74.s390x.rpm python38-tkinter-3.8.12-1.module+el8.6.0+12642+c3710b74.s390x.rpm scipy-debugsource-1.3.1-4.module+el8.4.0+8888+89bc7e79.s390x.rpm
x86_64: Cython-debugsource-0.29.14-4.module+el8.4.0+8888+89bc7e79.x86_64.rpm PyYAML-debugsource-5.4.1-1.module+el8.5.0+10721+14d8e0d5.x86_64.rpm numpy-debugsource-1.17.3-6.module+el8.5.0+12205+a865257a.x86_64.rpm python-cffi-debugsource-1.13.2-3.module+el8.4.0+8888+89bc7e79.x86_64.rpm python-cryptography-debugsource-2.8-3.module+el8.4.0+8888+89bc7e79.x86_64.rpm python-lxml-debugsource-4.4.1-7.module+el8.6.0+13958+214a5473.x86_64.rpm python-markupsafe-debugsource-1.1.1-6.module+el8.4.0+8888+89bc7e79.x86_64.rpm python-psutil-debugsource-5.6.4-4.module+el8.5.0+12031+10ce4870.x86_64.rpm python-psycopg2-debugsource-2.8.4-4.module+el8.4.0+8888+89bc7e79.x86_64.rpm python38-3.8.12-1.module+el8.6.0+12642+c3710b74.x86_64.rpm python38-Cython-0.29.14-4.module+el8.4.0+8888+89bc7e79.x86_64.rpm python38-Cython-debuginfo-0.29.14-4.module+el8.4.0+8888+89bc7e79.x86_64.rpm python38-cffi-1.13.2-3.module+el8.4.0+8888+89bc7e79.x86_64.rpm python38-cffi-debuginfo-1.13.2-3.module+el8.4.0+8888+89bc7e79.x86_64.rpm python38-cryptography-2.8-3.module+el8.4.0+8888+89bc7e79.x86_64.rpm python38-cryptography-debuginfo-2.8-3.module+el8.4.0+8888+89bc7e79.x86_64.rpm python38-debug-3.8.12-1.module+el8.6.0+12642+c3710b74.x86_64.rpm python38-debuginfo-3.8.12-1.module+el8.6.0+12642+c3710b74.x86_64.rpm python38-debugsource-3.8.12-1.module+el8.6.0+12642+c3710b74.x86_64.rpm python38-devel-3.8.12-1.module+el8.6.0+12642+c3710b74.x86_64.rpm python38-idle-3.8.12-1.module+el8.6.0+12642+c3710b74.x86_64.rpm python38-libs-3.8.12-1.module+el8.6.0+12642+c3710b74.x86_64.rpm python38-lxml-4.4.1-7.module+el8.6.0+13958+214a5473.x86_64.rpm python38-lxml-debuginfo-4.4.1-7.module+el8.6.0+13958+214a5473.x86_64.rpm python38-markupsafe-1.1.1-6.module+el8.4.0+8888+89bc7e79.x86_64.rpm python38-markupsafe-debuginfo-1.1.1-6.module+el8.4.0+8888+89bc7e79.x86_64.rpm python38-mod_wsgi-4.6.8-3.module+el8.4.0+8888+89bc7e79.x86_64.rpm python38-numpy-1.17.3-6.module+el8.5.0+12205+a865257a.x86_64.rpm 
python38-numpy-debuginfo-1.17.3-6.module+el8.5.0+12205+a865257a.x86_64.rpm python38-numpy-f2py-1.17.3-6.module+el8.5.0+12205+a865257a.x86_64.rpm python38-psutil-5.6.4-4.module+el8.5.0+12031+10ce4870.x86_64.rpm python38-psutil-debuginfo-5.6.4-4.module+el8.5.0+12031+10ce4870.x86_64.rpm python38-psycopg2-2.8.4-4.module+el8.4.0+8888+89bc7e79.x86_64.rpm python38-psycopg2-debuginfo-2.8.4-4.module+el8.4.0+8888+89bc7e79.x86_64.rpm python38-psycopg2-doc-2.8.4-4.module+el8.4.0+8888+89bc7e79.x86_64.rpm python38-psycopg2-tests-2.8.4-4.module+el8.4.0+8888+89bc7e79.x86_64.rpm python38-pyyaml-5.4.1-1.module+el8.5.0+10721+14d8e0d5.x86_64.rpm python38-pyyaml-debuginfo-5.4.1-1.module+el8.5.0+10721+14d8e0d5.x86_64.rpm python38-scipy-1.3.1-4.module+el8.4.0+8888+89bc7e79.x86_64.rpm python38-scipy-debuginfo-1.3.1-4.module+el8.4.0+8888+89bc7e79.x86_64.rpm python38-test-3.8.12-1.module+el8.6.0+12642+c3710b74.x86_64.rpm python38-tkinter-3.8.12-1.module+el8.6.0+12642+c3710b74.x86_64.rpm scipy-debugsource-1.3.1-4.module+el8.4.0+8888+89bc7e79.x86_64.rpm
Red Hat CodeReady Linux Builder (v.

Our key and details on how to verify the signature are available from
https://access.redhat.com/security/team/key/
-
7) - noarch, x86_64
-
The python27 packages provide a stable release of Python 2.7 with a number of additional utilities and database connectors for MySQL and PostgreSQL.
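The CVE-2021-3733 record below also lists CVE-2022-0391, in which `urllib.parse.urlparse` does not sanitize URLs containing characters such as CR and LF, enabling injection attacks. A minimal defensive sketch of input rejection (the `safe_parse` helper name is hypothetical, not part of any library; patched Python versions strip these characters silently, while this check fails loudly and does not depend on the interpreter being up to date):

```python
from urllib.parse import urlparse


def safe_parse(url: str):
    # Reject ASCII control characters outright instead of relying on a
    # patched interpreter: the CVE-2022-0391 fix makes urlparse strip
    # CR, LF, and TAB silently, whereas this helper raises an error so
    # crafted URLs cannot slip into downstream requests or headers.
    if any(ch in url for ch in "\r\n\t"):
        raise ValueError("URL contains CR/LF/TAB control characters")
    return urlparse(url)


print(safe_parse("https://example.com/path?q=1").path)  # prints "/path"
```

Rejecting rather than stripping is a design choice: silently sanitized input can still cache-poison or confuse intermediaries that saw the raw bytes.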
{ "@context": { "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#", "affected_products": { "@id": "https://www.variotdbs.pl/ref/affected_products" }, "configurations": { "@id": "https://www.variotdbs.pl/ref/configurations" }, "credits": { "@id": "https://www.variotdbs.pl/ref/credits" }, "cvss": { "@id": "https://www.variotdbs.pl/ref/cvss/" }, "description": { "@id": "https://www.variotdbs.pl/ref/description/" }, "exploit_availability": { "@id": "https://www.variotdbs.pl/ref/exploit_availability/" }, "external_ids": { "@id": "https://www.variotdbs.pl/ref/external_ids/" }, "iot": { "@id": "https://www.variotdbs.pl/ref/iot/" }, "iot_taxonomy": { "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/" }, "patch": { "@id": "https://www.variotdbs.pl/ref/patch/" }, "problemtype_data": { "@id": "https://www.variotdbs.pl/ref/problemtype_data/" }, "references": { "@id": "https://www.variotdbs.pl/ref/references/" }, "sources": { "@id": "https://www.variotdbs.pl/ref/sources/" }, "sources_release_date": { "@id": "https://www.variotdbs.pl/ref/sources_release_date/" }, "sources_update_date": { "@id": "https://www.variotdbs.pl/ref/sources_update_date/" }, "threat_type": { "@id": "https://www.variotdbs.pl/ref/threat_type/" }, "title": { "@id": "https://www.variotdbs.pl/ref/title/" }, "type": { "@id": "https://www.variotdbs.pl/ref/type/" } }, "@id": "https://www.variotdbs.pl/vuln/VAR-202109-1966", "affected_products": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/affected_products#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "model": "python", "scope": "gte", "trust": 1.0, "vendor": "python", "version": "3.8.0" }, { "model": "codeready linux builder for power little endian", "scope": "eq", "trust": 1.0, "vendor": "redhat", "version": "8.0" }, { "model": "enterprise linux server tus", "scope": "eq", "trust": 1.0, 
"vendor": "redhat", "version": "8.4" }, { "model": "extra packages for enterprise linux", "scope": "eq", "trust": 1.0, "vendor": "fedoraproject", "version": "7.0" }, { "model": "hci compute node", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "management services for element software and netapp hci", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "enterprise linux server update services for sap solutions", "scope": "eq", "trust": 1.0, "vendor": "redhat", "version": "8.4" }, { "model": "python", "scope": "eq", "trust": 1.0, "vendor": "python", "version": "3.10.0" }, { "model": "python", "scope": "lt", "trust": 1.0, "vendor": "python", "version": "3.8.10" }, { "model": "python", "scope": "lt", "trust": 1.0, "vendor": "python", "version": "3.7.11" }, { "model": "enterprise linux", "scope": "eq", "trust": 1.0, "vendor": "redhat", "version": "8.0" }, { "model": "enterprise linux for ibm z systems", "scope": "eq", "trust": 1.0, "vendor": "redhat", "version": "8.0" }, { "model": "fedora", "scope": "eq", "trust": 1.0, "vendor": "fedoraproject", "version": "34" }, { "model": "enterprise linux server aus", "scope": "eq", "trust": 1.0, "vendor": "redhat", "version": "8.4" }, { "model": "codeready linux builder for ibm z systems", "scope": "eq", "trust": 1.0, "vendor": "redhat", "version": "8.0" }, { "model": "enterprise linux for power little endian eus", "scope": "eq", "trust": 1.0, "vendor": "redhat", "version": "8.4" }, { "model": "enterprise linux for ibm z systems eus", "scope": "eq", "trust": 1.0, "vendor": "redhat", "version": "8.4" }, { "model": "codeready linux builder", "scope": "eq", "trust": 1.0, "vendor": "redhat", "version": "8.0" }, { "model": "enterprise linux server for power little endian update services for sap solutions", "scope": "eq", "trust": 1.0, "vendor": "redhat", "version": "8.4" }, { "model": "enterprise linux eus", "scope": "eq", "trust": 1.0, "vendor": "redhat", "version": "8.4" }, 
{ "model": "ontap select deploy administration utility", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "solidfire\\, enterprise sds \\\u0026 hci storage node", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "python", "scope": "lt", "trust": 1.0, "vendor": "python", "version": "3.9.5" }, { "model": "python", "scope": "lt", "trust": 1.0, "vendor": "python", "version": "3.6.14" }, { "model": "fedora", "scope": "eq", "trust": 1.0, "vendor": "fedoraproject", "version": "33" }, { "model": "enterprise linux for power little endian", "scope": "eq", "trust": 1.0, "vendor": "redhat", "version": "8.0" }, { "model": "fedora", "scope": "eq", "trust": 1.0, "vendor": "fedoraproject", "version": "36" }, { "model": "python", "scope": "gte", "trust": 1.0, "vendor": "python", "version": "3.7.0" }, { "model": "fedora", "scope": "eq", "trust": 1.0, "vendor": "fedoraproject", "version": "35" }, { "model": "python", "scope": "gte", "trust": 1.0, "vendor": "python", "version": "3.9.0" } ], "sources": [ { "db": "NVD", "id": "CVE-2021-3733" } ] }, "configurations": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/configurations#", "children": { "@container": "@list" }, "cpe_match": { "@container": "@list" }, "data": { "@container": "@list" }, "nodes": { "@container": "@list" } }, "data": [ { "CVE_data_version": "4.0", "nodes": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:python:python:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "3.9.5", "versionStartIncluding": "3.9.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:python:python:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "3.7.11", "versionStartIncluding": "3.7.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:python:python:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "3.6.14", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:python:python:3.10.0:-:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": 
"cpe:2.3:a:python:python:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "3.8.10", "versionStartIncluding": "3.8.0", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux:8.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_server_tus:8.4:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_eus:8.4:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_server_aus:8.4:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_server_update_services_for_sap_solutions:8.4:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_for_power_little_endian:8.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_for_ibm_z_systems_eus:8.4:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_for_ibm_z_systems:8.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_for_power_little_endian_eus:8.4:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_server_for_power_little_endian_update_services_for_sap_solutions:8.4:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:redhat:codeready_linux_builder_for_ibm_z_systems:8.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:redhat:codeready_linux_builder_for_power_little_endian:8.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:redhat:codeready_linux_builder:8.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": 
"cpe:2.3:o:fedoraproject:fedora:33:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:34:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:35:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:36:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:fedoraproject:extra_packages_for_enterprise_linux:7.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:netapp:ontap_select_deploy_administration_utility:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:netapp:hci_compute_node_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:management_services_for_element_software_and_netapp_hci:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:solidfire\\,_enterprise_sds_\\\u0026_hci_storage_node:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" } ] } ], "sources": [ { "db": "NVD", "id": "CVE-2021-3733" } ] }, "credits": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/credits#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Red Hat", "sources": [ { "db": "PACKETSTORM", "id": "165631" }, { "db": "PACKETSTORM", "id": "166279" }, { "db": "PACKETSTORM", "id": "165209" }, { "db": "PACKETSTORM", "id": "164948" }, { "db": "PACKETSTORM", "id": "167023" }, { "db": "PACKETSTORM", "id": "166913" }, { "db": "PACKETSTORM", "id": "167043" } ], "trust": 0.7 }, "cve": "CVE-2021-3733", "cvss": { "@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2" }, "cvssV3": { "@container": "@list", "@context": { "@vocab": 
"https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, "severity": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": "https://www.variotdbs.pl/ref/cvss/severity" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "cvssV2": [ { "acInsufInfo": false, "accessComplexity": "LOW", "accessVector": "NETWORK", "authentication": "SINGLE", "author": "NVD", "availabilityImpact": "PARTIAL", "baseScore": 4.0, "confidentialityImpact": "NONE", "exploitabilityScore": 8.0, "impactScore": 2.9, "integrityImpact": "NONE", "obtainAllPrivilege": false, "obtainOtherPrivilege": false, "obtainUserPrivilege": false, "severity": "MEDIUM", "trust": 1.0, "userInteractionRequired": false, "vectorString": "AV:N/AC:L/Au:S/C:N/I:N/A:P", "version": "2.0" }, { "accessComplexity": "LOW", "accessVector": "NETWORK", "authentication": "SINGLE", "author": "VULHUB", "availabilityImpact": "PARTIAL", "baseScore": 4.0, "confidentialityImpact": "NONE", "exploitabilityScore": 8.0, "id": "VHN-397442", "impactScore": 2.9, "integrityImpact": "NONE", "severity": "MEDIUM", "trust": 0.1, "vectorString": "AV:N/AC:L/AU:S/C:N/I:N/A:P", "version": "2.0" }, { "acInsufInfo": null, "accessComplexity": "LOW", "accessVector": "NETWORK", "authentication": "SINGLE", "author": "VULMON", "availabilityImpact": "PARTIAL", "baseScore": 4.0, "confidentialityImpact": "NONE", "exploitabilityScore": 8.0, "id": "CVE-2021-3733", "impactScore": 2.9, "integrityImpact": "NONE", "obtainAllPrivilege": null, "obtainOtherPrivilege": null, "obtainUserPrivilege": null, "severity": "MEDIUM", "trust": 0.1, "userInteractionRequired": null, "vectorString": "AV:N/AC:L/Au:S/C:N/I:N/A:P", "version": "2.0" } ], "cvssV3": [ { "attackComplexity": "LOW", "attackVector": "NETWORK", "author": "NVD", "availabilityImpact": "HIGH", "baseScore": 6.5, 
"baseSeverity": "MEDIUM", "confidentialityImpact": "NONE", "exploitabilityScore": 2.8, "impactScore": 3.6, "integrityImpact": "NONE", "privilegesRequired": "LOW", "scope": "UNCHANGED", "trust": 1.0, "userInteraction": "NONE", "vectorString": "CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H", "version": "3.1" } ], "severity": [ { "author": "NVD", "id": "CVE-2021-3733", "trust": 1.0, "value": "MEDIUM" }, { "author": "VULHUB", "id": "VHN-397442", "trust": 0.1, "value": "MEDIUM" }, { "author": "VULMON", "id": "CVE-2021-3733", "trust": 0.1, "value": "MEDIUM" } ] } ], "sources": [ { "db": "VULHUB", "id": "VHN-397442" }, { "db": "VULMON", "id": "CVE-2021-3733" }, { "db": "NVD", "id": "CVE-2021-3733" } ] }, "description": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "There\u0027s a flaw in urllib\u0027s AbstractBasicAuthHandler class. An attacker who controls a malicious HTTP server that an HTTP client (such as web browser) connects to, could trigger a Regular Expression Denial of Service (ReDOS) during an authentication request with a specially crafted payload that is sent by the server to the client. The greatest threat that this flaw poses is to application availability. Python is an open source, object-oriented programming language developed by the Python Foundation. The language is scalable, supports modules and packages, and supports multiple platforms. A code issue vulnerability exists in Python due to a failure in the product to properly handle RCFS. In Python3\u0027s Lib/test/multibytecodec_support.py CJK codec tests call eval() on content retrieved via HTTP. (CVE-2020-27619)\nThe package python/cpython is vulnerable to Web Cache Poisoning via urllib.parse.parse_qsl and urllib.parse.parse_qs by using a vector called parameter cloaking. 
When the attacker can separate query parameters using a semicolon (;), they can cause a difference in the interpretation of the request between the proxy (running with default configuration) and the server. This can result in malicious requests being cached as completely safe ones, as the proxy would usually not see the semicolon as a separator, and therefore would not include it in a cache key of an unkeyed parameter. An improperly handled HTTP response in the HTTP client code of python may allow a remote attacker, who controls the HTTP server, to make the client script enter an infinite loop, consuming CPU time. (CVE-2021-3737)\nftplib should not use the host from the PASV response (CVE-2021-4189)\nA flaw was found in Python, specifically within the urllib.parse module. This module helps break Uniform Resource Locator (URL) strings into components. The issue involves how the urlparse method does not sanitize input and allows characters like r and n in the URL path. This flaw allows a malicious user to input a crafted URL, leading to injection attacks. (CVE-2022-0391). Summary:\n\nThe Migration Toolkit for Containers (MTC) 1.6.3 is now available. Description:\n\nThe Migration Toolkit for Containers (MTC) enables you to migrate\nKubernetes resources, persistent volume data, and internal container images\nbetween OpenShift Container Platform clusters, using the MTC web console or\nthe Kubernetes API. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n2019088 - \"MigrationController\" CR displays syntax error when unquiescing applications\n2021666 - Route name longer than 63 characters causes direct volume migration to fail\n2021668 - \"MigrationController\" CR ignores the \"cluster_subdomain\" value for direct volume migration routes\n2022017 - CVE-2021-3948 mig-controller: incorrect namespaces handling may lead to not authorized usage of Migration Toolkit for Containers (MTC)\n2024966 - Manifests not used by Operator Lifecycle Manager must be removed from the MTC 1.6 Operator image\n2027196 - \"migration-controller\" pod goes into \"CrashLoopBackoff\" state if an invalid registry route is entered on the \"Clusters\" page of the web console\n2027382 - \"Copy oc describe/oc logs\" window does not close automatically after timeout\n2028841 - \"rsync-client\" container fails during direct volume migration with \"Address family not supported by protocol\" error\n2031793 - \"migration-controller\" pod goes into \"CrashLoopBackOff\" state if \"MigPlan\" CR contains an invalid \"includedResources\" resource\n2039852 - \"migration-controller\" pod goes into \"CrashLoopBackOff\" state if \"MigPlan\" CR contains an invalid \"destMigClusterRef\" or \"srcMigClusterRef\"\n\n5. ==========================================================================\nUbuntu Security Notice USN-5083-1\nSeptember 16, 2021\n\npython3.4, python3.5 vulnerabilities\n==========================================================================\n\nA security issue affects these releases of Ubuntu and its derivatives:\n\n- Ubuntu 16.04 ESM\n- Ubuntu 14.04 ESM\n\nSummary:\n\nSeveral security issues were fixed in Python. \nAn attacker could possibly use this issue to cause a denial of service. \nThis issue only affected Ubuntu 16.04 ESM. (CVE-2021-3733)\n\nIt was discovered that Python incorrectly handled certain\nserver responses. An attacker could possibly use this issue to\ncause a denial of service. 
(CVE-2021-3737)\n\nUpdate instructions:\n\nThe problem can be corrected by updating your system to the following\npackage versions:\n\nUbuntu 16.04 ESM:\n python3.5 3.5.2-2ubuntu0~16.04.13+esm1\n python3.5-minimal 3.5.2-2ubuntu0~16.04.13+esm1\n\nUbuntu 14.04 ESM:\n python3.4 3.4.3-1ubuntu1~14.04.7+esm11\n python3.4-minimal 3.4.3-1ubuntu1~14.04.7+esm11\n\nIn general, a standard system update will make all the necessary changes. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n=====================================================================\n Red Hat Security Advisory\n\nSynopsis: Moderate: OpenShift Container Platform 4.10.3 security update\nAdvisory ID: RHSA-2022:0056-01\nProduct: Red Hat OpenShift Enterprise\nAdvisory URL: https://access.redhat.com/errata/RHSA-2022:0056\nIssue date: 2022-03-10\nCVE Names: CVE-2014-3577 CVE-2016-10228 CVE-2017-14502 \n CVE-2018-20843 CVE-2018-1000858 CVE-2019-8625 \n CVE-2019-8710 CVE-2019-8720 CVE-2019-8743 \n CVE-2019-8764 CVE-2019-8766 CVE-2019-8769 \n CVE-2019-8771 CVE-2019-8782 CVE-2019-8783 \n CVE-2019-8808 CVE-2019-8811 CVE-2019-8812 \n CVE-2019-8813 CVE-2019-8814 CVE-2019-8815 \n CVE-2019-8816 CVE-2019-8819 CVE-2019-8820 \n CVE-2019-8823 CVE-2019-8835 CVE-2019-8844 \n CVE-2019-8846 CVE-2019-9169 CVE-2019-13050 \n CVE-2019-13627 CVE-2019-14889 CVE-2019-15903 \n CVE-2019-19906 CVE-2019-20454 CVE-2019-20807 \n CVE-2019-25013 CVE-2020-1730 CVE-2020-3862 \n CVE-2020-3864 CVE-2020-3865 CVE-2020-3867 \n CVE-2020-3868 CVE-2020-3885 CVE-2020-3894 \n CVE-2020-3895 CVE-2020-3897 CVE-2020-3899 \n CVE-2020-3900 CVE-2020-3901 CVE-2020-3902 \n CVE-2020-8927 CVE-2020-9802 CVE-2020-9803 \n CVE-2020-9805 CVE-2020-9806 CVE-2020-9807 \n CVE-2020-9843 CVE-2020-9850 CVE-2020-9862 \n CVE-2020-9893 CVE-2020-9894 CVE-2020-9895 \n CVE-2020-9915 CVE-2020-9925 CVE-2020-9952 \n CVE-2020-10018 CVE-2020-11793 CVE-2020-13434 \n CVE-2020-14391 CVE-2020-15358 CVE-2020-15503 \n CVE-2020-25660 CVE-2020-25677 CVE-2020-27618 \n CVE-2020-27781 
CVE-2020-29361 CVE-2020-29362 \n CVE-2020-29363 CVE-2021-3121 CVE-2021-3326 \n CVE-2021-3449 CVE-2021-3450 CVE-2021-3516 \n CVE-2021-3517 CVE-2021-3518 CVE-2021-3520 \n CVE-2021-3521 CVE-2021-3537 CVE-2021-3541 \n CVE-2021-3733 CVE-2021-3749 CVE-2021-20305 \n CVE-2021-21684 CVE-2021-22946 CVE-2021-22947 \n CVE-2021-25215 CVE-2021-27218 CVE-2021-30666 \n CVE-2021-30761 CVE-2021-30762 CVE-2021-33928 \n CVE-2021-33929 CVE-2021-33930 CVE-2021-33938 \n CVE-2021-36222 CVE-2021-37750 CVE-2021-39226 \n CVE-2021-41190 CVE-2021-43813 CVE-2021-44716 \n CVE-2021-44717 CVE-2022-0532 CVE-2022-21673 \n CVE-2022-24407 \n=====================================================================\n\n1. Summary:\n\nRed Hat OpenShift Container Platform release 4.10.3 is now available with\nupdates to packages and images that fix several bugs and add enhancements. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Description:\n\nRed Hat OpenShift Container Platform is Red Hat\u0027s cloud computing\nKubernetes application platform solution designed for on-premise or private\ncloud deployments. \n\nThis advisory contains the container images for Red Hat OpenShift Container\nPlatform 4.10.3. See the following advisory for the RPM packages for this\nrelease:\n\nhttps://access.redhat.com/errata/RHSA-2022:0055\n\nSpace precludes documenting all of the container images in this advisory. 
\nSee the following Release Notes documentation, which will be updated\nshortly for this release, for details about these changes:\n\nhttps://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html\n\nSecurity Fix(es):\n\n* gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index\nvalidation (CVE-2021-3121)\n* grafana: Snapshot authentication bypass (CVE-2021-39226)\n* golang: net/http: limit growth of header canonicalization cache\n(CVE-2021-44716)\n* nodejs-axios: Regular expression denial of service in trim function\n(CVE-2021-3749)\n* golang: syscall: don\u0027t close fd 0 on ForkExec error (CVE-2021-44717)\n* grafana: Forward OAuth Identity Token can allow users to access some data\nsources (CVE-2022-21673)\n* grafana: directory traversal vulnerability (CVE-2021-43813)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\nYou may download the oc tool and use it to inspect release image metadata\nas follows:\n\n(For x86_64 architecture)\n\n$ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.10.3-x86_64\n\nThe image digest is\nsha256:7ffe4cd612be27e355a640e5eec5cd8f923c1400d969fd590f806cffdaabcc56\n\n(For s390x architecture)\n\n $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.10.3-s390x\n\nThe image digest is\nsha256:4cf21a9399da1ce8427246f251ae5dedacfc8c746d2345f9cfe039ed9eda3e69\n\n(For ppc64le architecture)\n\n $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.10.3-ppc64le\n\nThe image digest is\nsha256:4ee571da1edf59dfee4473aa4604aba63c224bf8e6bcf57d048305babbbde93c\n\nAll OpenShift Container Platform 4.10 users are advised to upgrade to these\nupdated packages and images when they are available in the appropriate\nrelease channel. To check for available updates, use the OpenShift Console\nor the CLI oc command. 
Instructions for upgrading a cluster are available\nat\nhttps://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html\n\n3. Solution:\n\nFor OpenShift Container Platform 4.10 see the following documentation,\nwhich will be updated shortly for this release, for moderate instructions\non how to upgrade your cluster and fully apply this asynchronous errata\nupdate:\n\nhttps://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html\n\nDetails on how to access this content are available at\nhttps://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n1808240 - Always return metrics value for pods under the user\u0027s namespace\n1815189 - feature flagged UI does not always become available after operator installation\n1825034 - e2e: Mock CSI tests fail on IBM ROKS clusters\n1826225 - edge terminated h2 (gRPC) connections need a haproxy template change to work correctly\n1860774 - csr for vSphere egress nodes were not approved automatically during cert renewal\n1878106 - token inactivity timeout is not shortened after oauthclient/oauth config values are lowered\n1878925 - \u0027oc adm upgrade --to ...\u0027 rejects versions which occur only in history, while the cluster-version operator supports history fallback\n1880738 - origin e2e test deletes original worker\n1882983 - oVirt csi driver should refuse to provision RWX and ROX PV\n1886450 - Keepalived router id check not documented for RHV/VMware IPI\n1889488 - The metrics endpoint for the Scheduler is not protected by RBAC\n1894431 - Router pods fail to boot if the SSL certificate applied is missing an empty line at the bottom\n1896474 - Path based routing is broken for some combinations\n1897431 - CIDR support for additional network attachment with the bridge CNI plug-in\n1903408 - NodePort externalTrafficPolicy does not work for ovn-kubernetes\n1907433 - Excessive logging in image 
operator\n1909906 - The router fails with PANIC error when stats port already in use\n1911173 - [MSTR-998] Many charts\u0027 legend names show {{}} instead of words\n1914053 - pods assigned with Multus whereabouts IP get stuck in ContainerCreating state after node rebooting. \n1916169 - a reboot while MCO is applying changes leaves the node in undesirable state and MCP looks fine (UPDATED=true)\n1917893 - [ovirt] install fails: due to terraform error \"Cannot attach Virtual Disk: Disk is locked\" on vm resource\n1921627 - GCP UPI installation failed due to exceeding gcp limitation of instance group name\n1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation\n1926522 - oc adm catalog does not clean temporary files\n1927478 - Default CatalogSources deployed by marketplace do not have toleration for tainted nodes. \n1928141 - kube-storage-version-migrator constantly reporting type \"Upgradeable\" status Unknown\n1928285 - [LSO][OCS][arbiter] OCP Console shows no results while in fact underlying setup of LSO localvolumeset and it\u0027s storageclass is not yet finished, confusing users\n1931594 - [sig-cli] oc --request-timeout works as expected fails frequently on s390x\n1933847 - Prometheus goes unavailable (both instances down) during 4.8 upgrade\n1937085 - RHV UPI inventory playbook missing guarantee_memory\n1937196 - [aws ebs csi driver] events for block volume expansion may cause confusion\n1938236 - vsphere-problem-detector does not support overriding log levels via storage CR\n1939401 - missed labels for CMO/openshift-state-metric/telemeter-client/thanos-querier pods\n1939435 - Setting an IPv6 address in noProxy field causes error in openshift installer\n1939552 - [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]\n1942913 - ThanosSidecarUnhealthy isn\u0027t 
resilient to WAL replays. \n1943363 - [ovn] CNO should gracefully terminate ovn-northd\n1945274 - ostree-finalize-staged.service failed while upgrading a rhcos node to 4.6.17\n1948080 - authentication should not set Available=False APIServices_Error with 503s\n1949262 - Prometheus Statefulsets should have 2 replicas and hard affinity set\n1949672 - [GCP] Update 4.8 UPI template to match ignition version: 3.2.0\n1950827 - [LSO] localvolumediscoveryresult name is not friendly to customer\n1952576 - csv_succeeded metric not present in olm-operator for all successful CSVs\n1953264 - \"remote error: tls: bad certificate\" logs in prometheus-operator container\n1955300 - Machine config operator reports unavailable for 23m during upgrade\n1955489 - Alertmanager Statefulsets should have 2 replicas and hard affinity set\n1955490 - Thanos ruler Statefulsets should have 2 replicas and hard affinity set\n1955544 - [IPI][OSP] densed master-only installation with 0 workers fails due to missing worker security group on masters\n1956496 - Needs SR-IOV Docs Upstream\n1956739 - Permission for authorized_keys for core user changes from core user to root when changed the pull secret\n1956776 - [vSphere] Installer should do pre-check to ensure user-provided network name is valid\n1956964 - upload a boot-source to OpenShift virtualization using the console\n1957547 - [RFE]VM name is not auto filled in dev console\n1958349 - ovn-controller doesn\u0027t release the memory after cluster-density run\n1959352 - [scale] failed to get pod annotation: timed out waiting for annotations\n1960378 - icsp allows mirroring of registry root - install-config imageContentSources does not\n1960674 - Broken test: [sig-imageregistry][Serial][Suite:openshift/registry/serial] Image signature workflow can push a signed image to openshift registry and verify it [Suite:openshift/conformance/serial]\n1961317 - storage ClusterOperator does not declare ClusterRoleBindings in relatedObjects\n1961391 - String 
updates
1961509 - DHCP daemon pod should have CPU and memory requests set but not limits
1962066 - Edit machine/machineset specs not working
1962206 - openshift-multus/dhcp-daemon set should meet platform requirements for update strategy that have maxUnavailable update of 10 or 33 percent
1963053 - `oc whoami --show-console` should show the web console URL, not the server api URL
1964112 - route SimpleAllocationPlugin: host name validation errors: spec.host: Invalid value: ... must be no more than 63 characters
1964327 - Support containers with name:tag@digest
1964789 - Send keys and disconnect does not work for VNC console
1965368 - ClusterQuotaAdmission received non-meta object - message constantly reported in OpenShift Container Platform 4.7
1966445 - Unmasking a service doesn't work if it masked using MCO
1966477 - Use GA version in KAS/OAS/OauthAS to avoid: "audit.k8s.io/v1beta1" is deprecated and will be removed in a future release, use "audit.k8s.io/v1" instead
1966521 - kube-proxy's userspace implementation consumes excessive CPU
1968364 - [Azure] when using ssh type ed25519 bootstrap fails to come up
1970021 - nmstate does not persist its configuration due to overlay systemd-connections-merged mount
1970218 - MCO writes incorrect file contents if compression field is specified
1970331 - [sig-auth][Feature:SCC][Early] should not have pod creation failures during install [Suite:openshift/conformance/parallel]
1970805 - Cannot create build when docker image url contains dir structure
1972033 - [azure] PV region node affinity is failure-domain.beta.kubernetes.io instead of topology.kubernetes.io
1972827 - image registry does not remain available during upgrade
1972962 - Should set the minimum value for the `--max-icsp-size` flag of `oc adm catalog mirror`
1973447 - ovn-dbchecker peak memory spikes to ~500MiB during cluster-density run
1975826 - ovn-kubernetes host directed traffic cannot be offloaded as CT zone 64000 is not established
1976301 - [ci] e2e-azure-upi is permafailing
1976399 - During the upgrade from OpenShift 4.5 to OpenShift 4.6 the election timers for the OVN north and south databases did not change.
1976674 - CCO didn't set Upgradeable to False when cco mode is configured to Manual on azure platform
1976894 - Unidling a StatefulSet does not work as expected
1977319 - [Hive] Remove stale cruft installed by CVO in earlier releases
1977414 - Build Config timed out waiting for condition 400: Bad Request
1977929 - [RFE] Display Network Attachment Definitions from openshift-multus namespace during OCS deployment via UI using Multus
1978528 - systemd-coredump started and failed intermittently for unknown reasons
1978581 - machine-config-operator: remove runlevel from mco namespace
1979562 - Cluster operators: don't show messages when neither progressing, degraded or unavailable
1979962 - AWS SDN Network Stress tests have not passed in 4.9 release-openshift-origin-installer-e2e-aws-sdn-network-stress-4.9
1979966 - OCP builds always fail when run on RHEL7 nodes
1981396 - Deleting pool inside pool page the pool stays in Ready phase in the heading
1981549 - Machine-config daemon does not recover from broken Proxy configuration
1981867 - [sig-cli] oc explain should contain proper fields description for special types [Suite:openshift/conformance/parallel]
1981941 - Terraform upgrade required in openshift-installer to resolve multiple issues
1982063 - 'Control Plane' is not translated in Simplified Chinese language in Home->Overview page
1982498 - Default registry credential path should be adjusted to use containers/auth.json for oc commands
1982662 - Workloads - DaemonSets - Add storage: i18n misses
1982726 - kube-apiserver audit logs show a lot of 404 errors for DELETE "*/secrets/encryption-config" on single node clusters
1983758 - upgrades are failing on disruptive tests
1983964 - Need Device plugin configuration for the NIC "needVhostNet" & "isRdma"
1984592 - global pull secret not working in OCP4.7.4+ for additional private registries
1985073 - new-in-4.8 ExtremelyHighIndividualControlPlaneCPU fires on some GCP update jobs
1985486 - Cluster Proxy not used during installation on OSP with Kuryr
1985724 - VM Details Page missing translations
1985838 - [OVN] CNO exportNetworkFlows does not clear collectors when deleted
1985933 - Downstream image registry recommendation
1985965 - oVirt CSI driver does not report volume stats
1986216 - [scale] SNO: Slow Pod recovery due to "timed out waiting for OVS port binding"
1986237 - "MachineNotYetDeleted" in Pending state , alert not fired
1986239 - crictl create fails with "PID namespace requested, but sandbox infra container invalid"
1986302 - console continues to fetch prometheus alert and silences for normal user
1986314 - Current MTV installation for KubeVirt import flow creates unusable Forklift UI
1986338 - error creating list of resources in Import YAML
1986502 - yaml multi file dnd duplicates previous dragged files
1986819 - fix string typos for hot-plug disks
1987044 - [OCPV48] Shutoff VM is being shown as "Starting" in WebUI when using spec.runStrategy Manual/RerunOnFailure
1987136 - Declare operatorframework.io/arch.* labels for all operators
1987257 - Go-http-client user-agent being used for oc adm mirror requests
1987263 - fsSpaceFillingUpWarningThreshold not aligned to Kubernetes Garbage Collection Threshold
1987445 - MetalLB integration: All gateway routers in the cluster answer ARP requests for LoadBalancer services IP
1988406 - SSH key dropped when selecting "Customize virtual machine" in UI
1988440 - Network operator changes ovnkube-config too early causing ovnkube-master pods to crashloop during cluster upgrade
1988483 - Azure drop ICMP need to frag FRAG when using OVN: openshift-apiserver becomes False after env runs some time due to communication between one master to pods on another master fails with "Unable to connect to the server"
1988879 - Virtual media based deployment fails on Dell servers due to pending Lifecycle Controller jobs
1989438 - expected replicas is wrong
1989502 - Developer Catalog is disappearing after short time
1989843 - 'More' and 'Show Less' functions are not translated on several page
1990014 - oc debug <pod-name> does not work for Windows pods
1990190 - e2e testing failed with basic manifest: reason/ExternalProvisioning waiting for a volume to be created
1990193 - 'more' and 'Show Less' is not being translated on Home -> Search page
1990255 - Partial or all of the Nodes/StorageClasses don't appear back on UI after text is removed from search bar
1990489 - etcdHighNumberOfFailedGRPCRequests fires only on metal env in CI
1990506 - Missing udev rules in initramfs for /dev/disk/by-id/scsi-* symlinks
1990556 - get-resources.sh doesn't honor the no_proxy settings even with no_proxy var
1990625 - Ironic agent registers with SLAAC address with privacy-stable
1990635 - CVO does not recognize the channel change if desired version and channel changed at the same time
1991067 - github.com can not be resolved inside pods where cluster is running on openstack.
1991573 - Enable typescript strictNullCheck on network-policies files
1991641 - Baremetal Cluster Operator still Available After Delete Provisioning
1991770 - The logLevel and operatorLogLevel values do not work with Cloud Credential Operator
1991819 - Misspelled word "ocurred" in oc inspect cmd
1991942 - Alignment and spacing fixes
1992414 - Two rootdisks show on storage step if 'This is a CD-ROM boot source' is checked
1992453 - The configMap failed to save on VM environment tab
1992466 - The button 'Save' and 'Reload' are not translated on vm environment tab
1992475 - The button 'Open console in New Window' and 'Disconnect' are not translated on vm console tab
1992509 - Could not customize boot source due to source PVC not found
1992541 - all the alert rules' annotations "summary" and "description" should comply with the OpenShift alerting guidelines
1992580 - storageProfile should stay with the same value by check/uncheck the apply button
1992592 - list-type missing in oauth.config.openshift.io for identityProviders breaking Server Side Apply
1992777 - [IBMCLOUD] Default "ibm_iam_authorization_policy" is not working as expected in all scenarios
1993364 - cluster destruction fails to remove router in BYON with Kuryr as primary network (even after BZ 1940159 got fixed)
1993376 - periodic-ci-openshift-release-master-ci-4.6-upgrade-from-stable-4.5-e2e-azure-upgrade is permfailing
1994094 - Some hardcodes are detected at the code level in OpenShift console components
1994142 - Missing required cloud config fields for IBM Cloud
1994733 - MetalLB: IP address is not assigned to service if there is duplicate IP address in two address pools
1995021 - resolv.conf and corefile sync slows down/stops after keepalived container restart
1995335 - [SCALE] ovnkube CNI: remove ovs flows check
1995493 - Add Secret to workload button and Actions button are not aligned on secret details page
1995531 - Create RDO-based Ironic image to be promoted to OKD
1995545 - Project drop-down amalgamates inside main screen while creating storage system for odf-operator
1995887 - [OVN]After reboot egress node, lr-policy-list was not correct, some duplicate records or missed internal IPs
1995924 - CMO should report `Upgradeable: false` when HA workload is incorrectly spread
1996023 - kubernetes.io/hostname values are larger than filter when create localvolumeset from webconsole
1996108 - Allow backwards compatibility of shared gateway mode to inject host-based routes into OVN
1996624 - 100% of the cco-metrics/cco-metrics targets in openshift-cloud-credential-operator namespace are down
1996630 - Fail to delete the first Authorized SSH Key input box on Advanced page
1996647 - Provide more useful degraded message in auth operator on DNS errors
1996736 - Large number of 501 lr-policies in INCI2 env
1996886 - timedout waiting for flows during pod creation and ovn-controller pegged on worker nodes
1996916 - Special Resource Operator(SRO) - Fail to deploy simple-kmod on GCP
1996928 - Enable default operator indexes on ARM
1997028 - prometheus-operator update removes env var support for thanos-sidecar
1997059 - Failed to create cluster in AWS us-east-1 region due to a local zone is used
1997226 - Ingresscontroller reconcilations failing but not shown in operator logs or status of ingresscontroller.
1997245 - "Subscription already exists in openshift-storage namespace" error message is seen while installing odf-operator via UI
1997269 - Have to refresh console to install kube-descheduler
1997478 - Storage operator is not available after reboot cluster instances
1997509 - flake: [sig-cli] oc builds new-build [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
1997967 - storageClass is not reserved from default wizard to customize wizard
1998035 - openstack IPI CI: custom var-lib-etcd.mount (ramdisk) unit is racing due to incomplete After/Before order
1998038 - [e2e][automation] add tests for UI for VM disk hot-plug
1998087 - Fix CephHealthCheck wrapping contents and add data-tests for HealthItem and SecondaryStatus
1998174 - Create storageclass gp3-csi after install ocp cluster on aws
1998183 - "r: Bad Gateway" info is improper
1998235 - Firefox warning: Cookie "csrf-token" will be soon rejected
1998377 - Filesystem table head is not full displayed in disk tab
1998378 - Virtual Machine is 'Not available' in Home -> Overview -> Cluster inventory
1998519 - Add fstype when create localvolumeset instance on web console
1998951 - Keepalived conf ingress peer on in Dual stack cluster contains both IPv6 and IPv4 addresses
1999076 - [UI] Page Not Found error when clicking on Storage link provided in Overview page
1999079 - creating pods before sriovnetworknodepolicy sync up succeed will cause node unschedulable
1999091 - Console update toast notification can appear multiple times
1999133 - removing and recreating static pod manifest leaves pod in error state
1999246 - .indexignore is not ingore when oc command load dc configuration
1999250 - ArgoCD in GitOps operator can't manage namespaces
1999255 - ovnkube-node always crashes out the first time it starts
1999261 - ovnkube-node log spam (and security token leak?)
1999309 - While installing odf-operator via UI, web console update pop-up navigates to OperatorHub -> Operator Installation page
1999314 - console-operator is slow to mark Degraded as False once console starts working
1999425 - kube-apiserver with "[SHOULD NOT HAPPEN] failed to update managedFields" err="failed to convert new object (machine.openshift.io/v1beta1, Kind=MachineHealthCheck)
1999556 - "master" pool should be updated before the CVO reports available at the new version occurred
1999578 - AWS EFS CSI tests are constantly failing
1999603 - Memory Manager allows Guaranteed QoS Pod with hugepages requested is exactly equal to the left over Hugepages
1999619 - cloudinit is malformatted if a user sets a password during VM creation flow
1999621 - Empty ssh_authorized_keys entry is added to VM's cloudinit if created from a customize flow
1999649 - MetalLB: Only one type of IP address can be assigned to service on dual stack cluster from a address pool that have both IPv4 and IPv6 addresses defined
1999668 - openshift-install destroy cluster panic's when given invalid credentials to cloud provider (Azure Stack Hub)
1999734 - IBM Cloud CIS Instance CRN missing in infrastructure manifest/resource
1999771 - revert "force cert rotation every couple days for development" in 4.10
1999784 - CVE-2021-3749 nodejs-axios: Regular expression denial of service in trim function
1999796 - Openshift Console `Helm` tab is not showing helm releases in a namespace when there is high number of deployments in the same namespace.
1999836 - Admin web-console inconsistent status summary of sparse ClusterOperator conditions
1999903 - Click "This is a CD-ROM boot source" ticking "Use template size PVC" on pvc upload form
1999983 - No way to clear upload error from template boot source
2000081 - [IPI baremetal] The metal3 pod failed to restart when switching from Disabled to Managed provisioning without specifying provisioningInterface parameter
2000096 - Git URL is not re-validated on edit build-config form reload
2000216 - Successfully imported ImageStreams are not resolved in DeploymentConfig
2000236 - Confusing usage message from dynkeepalived CLI
2000268 - Mark cluster unupgradable if vcenter, esxi versions or HW versions are unsupported
2000430 - bump cluster-api-provider-ovirt version in installer
2000450 - 4.10: Enable static PV multi-az test
2000490 - All critical alerts shipped by CMO should have links to a runbook
2000521 - Kube-apiserver CO degraded due to failed conditional check (ConfigObservationDegraded)
2000573 - Incorrect StorageCluster CR created and ODF cluster getting installed with 2 Zone OCP cluster
2000628 - ibm-flashsystem-storage-storagesystem got created without any warning even when the attempt was cancelled
2000651 - ImageStreamTag alias results in wrong tag and invalid link in Web Console
2000754 - IPerf2 tests should be lower
2000846 - Structure logs in the entire codebase of Local Storage Operator
2000872 - [tracker] container is not able to list on some directories within the nfs after upgrade to 4.7.24
2000877 - OCP ignores STOPSIGNAL in Dockerfile and sends SIGTERM
2000938 - CVO does not respect changes to a Deployment strategy
2000963 - 'Inline-volume (default fs)] volumes should store data' tests are failing on OKD with updated selinux-policy
2001008 - [MachineSets] CloneMode defaults to linkedClone, but I don't have snapshot and should be fullClone
2001240 - Remove response headers for downloads of binaries from OpenShift WebConsole
2001295 - Remove openshift:kubevirt-machine-controllers decleration from machine-api
2001317 - OCP Platform Quota Check - Inaccurate MissingQuota error
2001337 - Details Card in ODF Dashboard mentions OCS
2001339 - fix text content hotplug
2001413 - [e2e][automation] add/delete nic and disk to template
2001441 - Test: oc adm must-gather runs successfully for audit logs - fail due to startup log
2001442 - Empty termination.log file for the kube-apiserver has too permissive mode
2001479 - IBM Cloud DNS unable to create/update records
2001566 - Enable alerts for prometheus operator in UWM
2001575 - Clicking on the perspective switcher shows a white page with loader
2001577 - Quick search placeholder is not displayed properly when the search string is removed
2001578 - [e2e][automation] add tests for vm dashboard tab
2001605 - PVs remain in Released state for a long time after the claim is deleted
2001617 - BucketClass Creation is restricted on 1st page but enabled using side navigation options
2001620 - Cluster becomes degraded if it can't talk to Manila
2001760 - While creating 'Backing Store', 'Bucket Class', 'Namespace Store' user is navigated to 'Installed Operators' page after clicking on ODF
2001761 - Unable to apply cluster operator storage for SNO on GCP platform.
2001765 - Some error message in the log of diskmaker-manager caused confusion
2001784 - show loading page before final results instead of showing a transient message No log files exist
2001804 - Reload feature on Environment section in Build Config form does not work properly
2001810 - cluster admin unable to view BuildConfigs in all namespaces
2001817 - Failed to load RoleBindings list that will lead to 'Role name' is not able to be selected on Create RoleBinding page as well
2001823 - OCM controller must update operator status
2001825 - [SNO]ingress/authentication clusteroperator degraded when enable ccm from start
2001835 - Could not select image tag version when create app from dev console
2001855 - Add capacity is disabled for ocs-storagecluster
2001856 - Repeating event: MissingVersion no image found for operand pod
2001959 - Side nav list borders don't extend to edges of container
2002007 - Layout issue on "Something went wrong" page
2002010 - ovn-kube may never attempt to retry a pod creation
2002012 - Cannot change volume mode when cloning a VM from a template
2002027 - Two instances of Dotnet helm chart show as one in topology
2002075 - opm render does not automatically pulling in the image(s) used in the deployments
2002121 - [OVN] upgrades failed for IPI OSP16 OVN IPSec cluster
2002125 - Network policy details page heading should be updated to Network Policy details
2002133 - [e2e][automation] add support/virtualization and improve deleteResource
2002134 - [e2e][automation] add test to verify vm details tab
2002215 - Multipath day1 not working on s390x
2002238 - Image stream tag is not persisted when switching from yaml to form editor
2002262 - [vSphere] Incorrect user agent in vCenter sessions list
2002266 - SinkBinding create form doesn't allow to use subject name, instead of label selector
2002276 - OLM fails to upgrade operators immediately
2002300 - Altering the Schedule Profile configurations doesn't affect the placement of the pods
2002354 - Missing DU configuration "Done" status reporting during ZTP flow
2002362 - Dynamic Plugin - ConsoleRemotePlugin for webpack doesn't use commonjs
2002368 - samples should not go degraded when image allowedRegistries blocks imagestream creation
2002372 - Pod creation failed due to mismatched pod IP address in CNI and OVN
2002397 - Resources search is inconsistent
2002434 - CRI-O leaks some children PIDs
2002443 - Getting undefined error on create local volume set page
2002461 - DNS operator performs spurious updates in response to API's defaulting of service's internalTrafficPolicy
2002504 - When the openshift-cluster-storage-operator is degraded because of "VSphereProblemDetectorController_SyncError", the insights operator is not sending the logs from all pods.
2002559 - User preference for topology list view does not follow when a new namespace is created
2002567 - Upstream SR-IOV worker doc has broken links
2002588 - Change text to be sentence case to align with PF
2002657 - ovn-kube egress IP monitoring is using a random port over the node network
2002713 - CNO: OVN logs should have millisecond resolution
2002748 - [ICNI2] 'ErrorAddingLogicalPort' failed to handle external GW check: timeout waiting for namespace event
2002759 - Custom profile should not allow not including at least one required HTTP2 ciphersuite
2002763 - Two storage systems getting created with external mode RHCS
2002808 - KCM does not use web identity credentials
2002834 - Cluster-version operator does not remove unrecognized volume mounts
2002896 - Incorrect result return when user filter data by name on search page
2002950 - Why spec.containers.command is not created with "oc create deploymentconfig <dc-name> --image=<image> -- <command>"
2003096 - [e2e][automation] check bootsource URL is displaying on review step
2003113 - OpenShift Baremetal IPI installer uses first three defined nodes under hosts in install-config for master nodes instead of filtering the hosts with the master role
2003120 - CI: Uncaught error with ResizeObserver on operand details page
2003145 - Duplicate operand tab titles causes "two children with the same key" warning
2003164 - OLM, fatal error: concurrent map writes
2003178 - [FLAKE][knative] The UI doesn't show updated traffic distribution after accepting the form
2003193 - Kubelet/crio leaks netns and veth ports in the host
2003195 - OVN CNI should ensure host veths are removed
2003204 - Jenkins all new container images (openshift4/ose-jenkins) not supporting '-e JENKINS_PASSWORD=password' ENV which was working for old container images
2003206 - Namespace stuck terminating: Failed to delete all resource types, 1 remaining: unexpected items still remain in namespace
2003239 - "[sig-builds][Feature:Builds][Slow] can use private repositories as build input" tests fail outside of CI
2003244 - Revert libovsdb client code
2003251 - Patternfly components with list element has list item bullet when they should not.
2003252 - "[sig-builds][Feature:Builds][Slow] starting a build using CLI start-build test context override environment BUILD_LOGLEVEL in buildconfig" tests do not work as expected outside of CI
2003269 - Rejected pods should be filtered from admission regression
2003357 - QE- Removing the epic tags for gherkin tags related to 4.9 Release
2003426 - [e2e][automation] add test for vm details bootorder
2003496 - [e2e][automation] add test for vm resources requirment settings
2003641 - All metal ipi jobs are failing in 4.10
2003651 - ODF4.9+LSO4.8 installation via UI, StorageCluster move to error state
2003655 - [IPI ON-PREM] Keepalived chk_default_ingress track script failed even though default router pod runs on node
2003683 - Samples operator is panicking in CI
2003711 - [UI] Empty file ceph-external-cluster-details-exporter.py downloaded from external cluster "Connection Details" page
2003715 - Error on creating local volume set after selection of the volume mode
2003743 - Remove workaround keeping /boot RW for kdump support
2003775 - etcd pod on CrashLoopBackOff after master replacement procedure
2003788 - CSR reconciler report error constantly when BYOH CSR approved by other Approver
2003792 - Monitoring metrics query graph flyover panel is useless
2003808 - Add Sprint 207 translations
2003845 - Project admin cannot access image vulnerabilities view
2003859 - sdn emits events with garbage messages
2003896 - (release-4.10) ApiRequestCounts conditional gatherer
2004009 - 4.10: Fix multi-az zone scheduling e2e for 5 control plane replicas
2004051 - CMO can report as being Degraded while node-exporter is deployed on all nodes
2004059 - [e2e][automation] fix current tests for downstream
2004060 - Trying to use basic spring boot sample causes crash on Firefox
2004101 - [UI] When creating storageSystem deployment type dropdown under advanced setting doesn't close after selection
2004127 - [flake] openshift-controller-manager event reason/SuccessfulDelete occurs too frequently
2004203 - build config's created prior to 4.8 with image change triggers can result in trigger storm in OCM/openshift-apiserver
2004313 - [RHOCP 4.9.0-rc.0] Failing to deploy Azure cluster from the macOS installer - ignition_bootstrap.ign: no such file or directory
2004449 - Boot option recovery menu prevents image boot
2004451 - The backup filename displayed in the RecentBackup message is incorrect
2004459 - QE - Modified the AddFlow gherkin scripts and automation scripts
2004508 - TuneD issues with the recent ConfigParser changes.
2004510 - openshift-gitops operator hooks gets unauthorized (401) errors during jobs executions
2004542 - [osp][octavia lb] cannot create LoadBalancer type svcs
2004578 - Monitoring and node labels missing for an external storage platform
2004585 - prometheus-k8s-0 cpu usage keeps increasing for the first 3 days
2004596 - [4.10] Bootimage bump tracker
2004597 - Duplicate ramdisk log containers running
2004600 - Duplicate ramdisk log containers running
2004609 - output of "crictl inspectp" is not complete
2004625 - BMC credentials could be logged if they change
2004632 - When LE takes a large amount of time, multiple whereabouts are seen
2004721 - ptp/worker custom threshold doesn't change ptp events threshold
2004736 - [knative] Create button on new Broker form is inactive despite form being filled
2004796 - [e2e][automation] add test for vm scheduling policy
2004814 - (release-4.10) OCM controller - change type of the etc-pki-entitlement secret to opaque
2004870 - [External Mode] Insufficient spacing along y-axis in RGW Latency Performance Card
2004901 - [e2e][automation] improve kubevirt devconsole tests
2004962 - Console frontend job consuming too much CPU in CI
2005014 - state of ODF StorageSystem is misreported during installation or uninstallation
2005052 - Adding a MachineSet selector matchLabel causes orphaned Machines
2005179 - pods status filter is not taking effect
2005182 - sync list of deprecated apis about to be removed
2005282 - Storage cluster name is given as title in StorageSystem details page
2005355 - setuptools 58 makes Kuryr CI fail
2005407 - ClusterNotUpgradeable Alert should be set to Severity Info
2005415 - PTP operator with sidecar api configured throws bind: address already in use
2005507 - SNO spoke cluster failing to reach coreos.live.rootfs_url is missing url in console
2005554 - The switch status of the button "Show default project" is not revealed correctly in code
2005581 - 4.8.12 to 4.9 upgrade hung due to cluster-version-operator pod CrashLoopBackOff: error creating clients: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
2005761 - QE - Implementing crw-basic feature file
2005783 - Fix accessibility issues in the "Internal" and "Internal - Attached Mode" Installation Flow
2005811 - vSphere Problem Detector operator - ServerFaultCode: InvalidProperty
2005854 - SSH NodePort service is created for each VM
2005901 - KS, KCM and KA going Degraded during master nodes upgrade
2005902 - Current UI flow for MCG only deployment is confusing and doesn't reciprocate any message to the end-user
2005926 - PTP operator NodeOutOfPTPSync rule is using max offset from the master instead of openshift_ptp_clock_state metrics
2005971 - Change telemeter to report the Application Services product usage metrics
2005997 - SELinux domain container_logreader_t does not have a policy to follow sym links for log files
2006025 - Description to use an existing StorageClass while creating StorageSystem needs to be re-phrased
2006060 - ocs-storagecluster-storagesystem details are missing on UI for MCG Only and MCG only in LSO mode deployment types
2006101 - Power off fails for drivers that don't support Soft power off
2006243 - Metal IPI upgrade jobs are running out of disk space
2006291 - bootstrapProvisioningIP set incorrectly when provisioningNetworkCIDR doesn't use the 0th address
2006308 - Backing Store YAML tab on click displays a blank screen on UI
2006325 - Multicast is broken across nodes
2006329 - Console only allows Web Terminal Operator to be installed in OpenShift Operators
2006364 - IBM Cloud: Set resourceGroupId for resourceGroups, not simply resource
2006561 - [sig-instrumentation] Prometheus when installed on the cluster shouldn't have failing rules evaluation [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2006690 - OS boot failure "x64 Exception Type 06 - Invalid Opcode Exception"
2006714 - add retry for etcd errors in kube-apiserver
2006767 - KubePodCrashLooping may not fire
2006803 - Set CoreDNS cache entries for forwarded zones
2006861 - Add Sprint 207 part 2 translations
2006945 - race condition can cause crashlooping bootstrap kube-apiserver in cluster-bootstrap
2006947 - e2e-aws-proxy for 4.10 is permafailing with samples operator errors
2006975 - clusteroperator/etcd status condition should not change reasons frequently due to EtcdEndpointsDegraded
2007085 - Intermittent failure mounting /run/media/iso when booting live ISO from USB stick
2007136 - Creation of BackingStore, BucketClass, NamespaceStore fails
2007271 - CI Integration for Knative test cases
2007289 - kubevirt tests are failing in CI
2007322 - Devfile/Dockerfile import does not work for unsupported git host
2007328 - Updated patternfly to v4.125.3 and pf.quickstarts to v1.2.3.
2007379 - Events are not generated for master offset for ordinary clock
2007443 - [ICNI 2.0] Loadbalancer pods do not establish BFD sessions with all workers that host pods for the routed namespace
2007455 - cluster-etcd-operator: render command should fail if machineCidr contains reserved address
2007495 - Large label value for the metric kubelet_started_pods_errors_total with label message when there is a error
2007522 - No new local-storage-operator-metadata-container is build for 4.10
2007551 - No new ose-aws-efs-csi-driver-operator-bundle-container is build for 4.10
2007580 - Azure cilium installs are failing e2e tests
2007581 - Too many haproxy processes in default-router pod causing high load average after upgrade from v4.8.3 to v4.8.10
2007677 - Regression: core container io performance metrics are missing for pod, qos, and system slices on nodes
2007692 - 4.9 "old-rhcos" jobs are permafailing with storage test failures
2007710 - ci/prow/e2e-agnostic-cmd job is failing on prow
2007757 - must-gather extracts imagestreams in the "openshift" namespace, but not Templates
2007802 - AWS machine actuator get stuck if machine is completely missing
2008096 - TestAWSFinalizerDeleteS3Bucket sometimes fails to teardown operator
2008119 - The serviceAccountIssuer field on Authentication CR is reseted to "" when installation process
2008151 - Topology breaks on clicking in empty state
2008185 - Console operator go.mod should use go 1.16.version
2008201 - openstack-az job is failing on haproxy idle test
2008207 - vsphere CSI driver doesn't set resource limits
2008223 - gather_audit_logs: fix oc command line to get the current audit profile
2008235 - The Save button in the Edit DC form remains disabled
2008256 - Update Internationalization README with scope info
2008321 - Add correct documentation link for MON_DISK_LOW
2008462 - Disable PodSecurity feature gate for 4.10
2008490 - Backing store details page does not contain all the kebab actions.
2008521 - gcp-hostname service should correct invalid search entries in resolv.conf
2008532 - CreateContainerConfigError:: failed to prepare subPath for volumeMount
2008539 - Registry doesn't fall back to secondary ImageContentSourcePolicy Mirror
2008540 - HighlyAvailableWorkloadIncorrectlySpread always fires on upgrade on cluster with two workers
2008599 - Azure Stack UPI does not have Internal Load Balancer
2008612 - Plugin asset proxy does not pass through browser cache headers
2008712 - VPA webhook timeout prevents all pods from starting
2008733 - kube-scheduler: exposed /debug/pprof port
2008911 - Prometheus repeatedly scaling prometheus-operator replica set
2008926 - [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources [Serial] [Suite:openshift/conformance/serial]
2008987 - OpenShift SDN Hosted Egress IP's are not being scheduled to nodes after upgrade to 4.8.12
2009055 - Instances of OCS to be replaced with ODF on UI
2009078 - NetworkPodsCrashLooping alerts in upgrade CI jobs
2009083 - opm blocks pruning of existing bundles during add
2009111 - [IPI-on-GCP] 'Install a cluster with nested virtualization enabled' failed due to unable to launch compute instances
2009131 - [e2e][automation] add more test about vmi
2009148 - [e2e][automation] test vm nic presets and options
2009233 - ACM policy object generated by PolicyGen conflicting with OLM Operator
2009253 - [BM] [IPI] [DualStack] apiVIP and ingressVIP should be of the same primary IP family
2009298 - Service created for VM SSH access is not owned by the VM and thus is not deleted if the VM is deleted
2009384 - UI changes to support BindableKinds CRD changes
2009404 - ovnkube-node pod enters CrashLoopBackOff after OVN_IMAGE is swapped
2009424 - Deployment upgrade is failing availability check
2009454 - Change web terminal subscription permissions from get to list
2009465 - container-selinux should come from rhel8-appstream
2009514 - Bump OVS to 2.16-15
2009555 - Supermicro X11 system not booting from vMedia with AI
2009623 - Console: Observe > Metrics page: Table pagination menu shows bullet points
2009664 - Git Import: Edit of knative service doesn't work as expected for git import flow
2009699 - Failure to validate flavor RAM
2009754 - Footer is not sticky anymore in import forms
2009785 - CRI-O's version file should be pinned by MCO
2009791 - Installer: ibmcloud ignores install-config values
2009823 - [sig-arch] events should not repeat pathologically - reason/VSphereOlderVersionDetected Marking cluster un-upgradeable because one or more VMs are on hardware version vmx-13
2009840 - cannot build extensions on aarch64 because of unavailability of rhel-8-advanced-virt repo
2009859 - Large number of sessions created by vmware-vsphere-csi-driver-operator during e2e tests
2009873 - Stale Logical Router Policies and Annotations for a given node
2009879 - There should be test-suite coverage to ensure admin-acks work as expected
2009888 - SRO package name collision between official and community version
2010073 - uninstalling and then reinstalling sriov-network-operator is not working
2010174 - 2 PVs get created unexpectedly with different paths that actually refer to the same device on the node.
2010181 - Environment variables not getting reset on reload on deployment edit form
2010310 - [sig-instrumentation][Late] OpenShift alerting rules should have description and summary annotations [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2010341 - OpenShift Alerting Rules Style-Guide Compliance
2010342 - Local console builds can have out of memory errors
2010345 - OpenShift Alerting Rules Style-Guide Compliance
2010348 - Reverts PIE build mode for K8S components
2010352 - OpenShift Alerting Rules Style-Guide Compliance
2010354 - OpenShift Alerting Rules Style-Guide Compliance
2010359 - OpenShift Alerting Rules Style-Guide Compliance
2010368 - OpenShift Alerting Rules Style-Guide Compliance
2010376 - OpenShift Alerting Rules Style-Guide Compliance
2010662 - Cluster is unhealthy after image-registry-operator tests
2010663 - OpenShift Alerting Rules Style-Guide Compliance (ovn-kubernetes subcomponent)
2010665 - Bootkube tries to use oc after cluster bootstrap is done and there is no API
2010698 - [BM] [IPI] [Dual Stack] Installer must ensure ipv6 short forms too if clusterprovisioning IP is specified as ipv6 address
2010719 - etcdHighNumberOfFailedGRPCRequests runbook is missing
2010864 - Failure building EFS operator
2010910 - ptp worker events unable to identify interface for multiple interfaces
2010911 - RenderOperatingSystem() returns wrong OS version on OCP 4.7.24
2010921 - Azure Stack Hub does not handle additionalTrustBundle
2010931 - SRO CSV uses non default category "Drivers and plugins"
2010946 - concurrent CRD from ovirt-csi-driver-operator gets reconciled by CVO after deployment, changing CR as well.
2011038 - optional operator conditions are confusing
2011063 - CVE-2021-39226 grafana: Snapshot authentication bypass
2011171 - diskmaker-manager constantly redeployed by LSO when creating LV's
2011293 - Build pod are not pulling images if we are not explicitly giving the registry name with the image
2011368 - Tooltip in pipeline visualization shows misleading data
2011386 - [sig-arch] Check if alerts are firing during or after upgrade success --- alert KubePodNotReady fired for 60 seconds with labels
2011411 - Managed Service's Cluster overview page contains link to missing Storage dashboards
2011443 - Cypress tests assuming Admin Perspective could fail on shared/reference cluster
2011513 - Kubelet rejects pods that use resources that should be freed by completed pods
2011668 - Machine stuck in deleting phase in VMware "reconciler failed to Delete machine"
2011693 - (release-4.10) "insightsclient_request_recvreport_total" metric is always incremented
2011698 - After upgrading cluster to 4.8 the kube-state-metrics service doesn't export namespace labels anymore
2011733 - Repository README points to broken documentarion link
2011753 - Ironic resumes clean before raid configuration job is actually completed
2011809 - The nodes page in the openshift console doesn't work. You just get a blank page
2011822 - Obfuscation doesn't work at clusters with OVN
2011882 - SRO helm charts not synced with templates
2011893 - Validation: BMC driver ipmi is not supported for secure UEFI boot
2011896 - [4.10] ClusterVersion Upgradeable=False MultipleReasons should include all messages
2011903 - vsphere-problem-detector: session leak
2011927 - OLM should allow users to specify a proxy for GRPC connections
2011956 - [tracker] Kubelet rejects pods that use resources that should be freed by completed pods
2011960 - [tracker] Storage operator is not available after reboot cluster instances
2011971 - ICNI2 pods are stuck in ContainerCreating state
2011972 - Ingress operator not creating wildcard route for hypershift clusters
2011977 - SRO bundle references non-existent image
2012069 - Refactoring Status controller
2012177 - [OCP 4.9 + OCS 4.8.3] Overview tab is missing under Storage after successful deployment on UI
2012228 - ibmcloud: credentialsrequests invalid for machine-api-operator: resource-group
2012233 - [IBMCLOUD] IPI: "Exceeded limit of remote rules per security group (the limit is 5 remote rules per security group)"
2012235 - [IBMCLOUD] IPI: IBM cloud provider requires ResourceGroupName in cloudproviderconfig
2012317 - Dynamic Plugins: ListPageCreateDropdown items cut off
2012407 - [e2e][automation] improve vm tab console tests
2012426 - ThanosSidecarBucketOperationsFailed/ThanosSidecarUnhealthy alerts don't have namespace label
2012562 - migration condition is not detected in list view
2012770 - when using expression metric openshift_apps_deploymentconfigs_last_failed_rollout_time namespace label is re-written
2012780 - The port 50936 used by haproxy is occupied by kube-apiserver
2012838 - Setting the default maximum container root partition size for Overlay with CRI-O stop working
2012902 - Neutron Ports assigned to Completed Pods are not reused Edit
2012915 - kube_persistentvolumeclaim_labels and
kube_persistentvolume_labels are missing in OCP 4.8 monitoring stack\n2012971 - Disable operands deletes\n2013034 - Cannot install to openshift-nmstate namespace\n2013127 - OperatorHub links could not be opened in a new tabs (sharing and open a deep link works fine)\n2013199 - post reboot of node SRIOV policy taking huge time\n2013203 - UI breaks when trying to create block pool before storage cluster/system creation\n2013222 - Full breakage for nightly payload promotion\n2013273 - Nil pointer exception when phc2sys options are missing\n2013321 - TuneD: high CPU utilization of the TuneD daemon. \n2013416 - Multiple assets emit different content to the same filename\n2013431 - Application selector dropdown has incorrect font-size and positioning\n2013528 - mapi_current_pending_csr is always set to 1 on OpenShift Container Platform 4.8\n2013545 - Service binding created outside topology is not visible\n2013599 - Scorecard support storage is not included in ocp4.9\n2013632 - Correction/Changes in Quick Start Guides for ODF 4.9 (Install ODF guide)\n2013646 - fsync controller will show false positive if gaps in metrics are observed. 
\n2013710 - ZTP Operator subscriptions for 4.9 release branch should point to 4.9 by default\n2013751 - Service details page is showing wrong in-cluster hostname\n2013787 - There are two tittle \u0027Network Attachment Definition Details\u0027 on NAD details page\n2013871 - Resource table headings are not aligned with their column data\n2013895 - Cannot enable accelerated network via MachineSets on Azure\n2013920 - \"--collector.filesystem.ignored-mount-points is DEPRECATED and will be removed in 2.0.0, use --collector.filesystem.mount-points-exclude\"\n2013930 - Create Buttons enabled for Bucket Class, Backingstore and Namespace Store in the absence of Storagesystem(or MCG)\n2013969 - oVIrt CSI driver fails on creating PVCs on hosted engine storage domain\n2013990 - Observe dashboard crashs on reload when perspective has changed (in another tab)\n2013996 - Project detail page: Action \"Delete Project\" does nothing for the default project\n2014071 - Payload imagestream new tags not properly updated during cluster upgrade\n2014153 - SRIOV exclusive pooling\n2014202 - [OCP-4.8.10] OVN-Kubernetes: service IP is not responding when egressIP set to the namespace\n2014238 - AWS console test is failing on importing duplicate YAML definitions\n2014245 - Several aria-labels, external links, and labels aren\u0027t internationalized\n2014248 - Several files aren\u0027t internationalized\n2014352 - Could not filter out machine by using node name on machines page\n2014464 - Unexpected spacing/padding below navigation groups in developer perspective\n2014471 - Helm Release notes tab is not automatically open after installing a chart for other languages\n2014486 - Integration Tests: OLM single namespace operator tests failing\n2014488 - Custom operator cannot change orders of condition tables\n2014497 - Regex slows down different forms and creates too much recursion errors in the log\n2014538 - Kuryr controller crash looping on self._get_vip_port(loadbalancer).id 
\u0027NoneType\u0027 object has no attribute \u0027id\u0027\n2014614 - Metrics scraping requests should be assigned to exempt priority level\n2014710 - TestIngressStatus test is broken on Azure\n2014954 - The prometheus-k8s-{0,1} pods are CrashLoopBackoff repeatedly\n2014995 - oc adm must-gather cannot gather audit logs with \u0027None\u0027 audit profile\n2015115 - [RFE] PCI passthrough\n2015133 - [IBMCLOUD] ServiceID API key credentials seems to be insufficient for ccoctl \u0027--resource-group-name\u0027 parameter\n2015154 - Support ports defined networks and primarySubnet\n2015274 - Yarn dev fails after updates to dynamic plugin JSON schema logic\n2015337 - 4.9.0 GA MetalLB operator image references need to be adjusted to match production\n2015386 - Possibility to add labels to the built-in OCP alerts\n2015395 - Table head on Affinity Rules modal is not fully expanded\n2015416 - CI implementation for Topology plugin\n2015418 - Project Filesystem query returns No datapoints found\n2015420 - No vm resource in project view\u0027s inventory\n2015422 - No conflict checking on snapshot name\n2015472 - Form and YAML view switch button should have distinguishable status\n2015481 - [4.10] sriov-network-operator daemon pods are failing to start\n2015493 - Cloud Controller Manager Operator does not respect \u0027additionalTrustBundle\u0027 setting\n2015496 - Storage - PersistentVolumes : Claim colum value \u0027No Claim\u0027 in English\n2015498 - [UI] Add capacity when not applicable (for MCG only deployment and External mode cluster) fails to pass any info. to user and tries to just load a blank screen on \u0027Add Capacity\u0027 button click\n2015506 - Home - Search - Resources - APIRequestCount : hard to select an item from ellipsis menu\n2015515 - Kubelet checks all providers even if one is configured: NoCredentialProviders: no valid providers in chain. 
\n2015535 - Administration - ResourceQuotas - ResourceQuota details: Inside Pie chart \u0027x% used\u0027 is in English\n2015549 - Observe - Metrics: Column heading and pagination text is in English\n2015557 - Workloads - DeploymentConfigs : Error message is in English\n2015568 - Compute - Nodes : CPU column\u0027s values are in English\n2015635 - Storage operator fails causing installation to fail on ASH\n2015660 - \"Finishing boot source customization\" screen should not use term \"patched\"\n2015793 - [hypershift] The collect-profiles job\u0027s pods should run on the control-plane node\n2015806 - Metrics view in Deployment reports \"Forbidden\" when not cluster-admin\n2015819 - Conmon sandbox processes run on non-reserved CPUs with workload partitioning\n2015837 - OS_CLOUD overwrites install-config\u0027s platform.openstack.cloud\n2015950 - update from 4.7.22 to 4.8.11 is failing due to large amount of secrets to watch\n2015952 - RH CodeReady Workspaces Operator in e2e testing will soon fail\n2016004 - [RFE] RHCOS: help determining whether a user-provided image was already booted (Ignition provisioning already performed)\n2016008 - [4.10] Bootimage bump tracker\n2016052 - No e2e CI presubmit configured for release component azure-file-csi-driver\n2016053 - No e2e CI presubmit configured for release component azure-file-csi-driver-operator\n2016054 - No e2e CI presubmit configured for release component cluster-autoscaler\n2016055 - No e2e CI presubmit configured for release component console\n2016058 - openshift-sync does not synchronise in \"ose-jenkins:v4.8\"\n2016064 - No e2e CI presubmit configured for release component ibm-cloud-controller-manager\n2016065 - No e2e CI presubmit configured for release component ibmcloud-machine-controllers\n2016175 - Pods get stuck in ContainerCreating state when attaching volumes fails on SNO clusters. 
\n2016179 - Add Sprint 208 translations\n2016228 - Collect Profiles pprof secret is hardcoded to openshift-operator-lifecycle-manager\n2016235 - should update to 7.5.11 for grafana resources version label\n2016296 - Openshift virtualization : Create Windows Server 2019 VM using template : Fails\n2016334 - shiftstack: SRIOV nic reported as not supported\n2016352 - Some pods start before CA resources are present\n2016367 - Empty task box is getting created for a pipeline without finally task\n2016435 - Duplicate AlertmanagerClusterFailedToSendAlerts alerts\n2016438 - Feature flag gating is missing in few extensions contributed via knative plugin\n2016442 - OCPonRHV: pvc should be in Bound state and without error when choosing default sc\n2016446 - [OVN-Kubernetes] Egress Networkpolicy is failing Intermittently for statefulsets\n2016453 - Complete i18n for GaugeChart defaults\n2016479 - iface-id-ver is not getting updated for existing lsp\n2016925 - Dashboards with All filter, change to a specific value and change back to All, data will disappear\n2016951 - dynamic actions list is not disabling \"open console\" for stopped vms\n2016955 - m5.large instance type for bootstrap node is hardcoded causing deployments to fail if instance type is not available\n2016988 - NTO does not set io_timeout and max_retries for AWS Nitro instances\n2017016 - [REF] Virtualization menu\n2017036 - [sig-network-edge][Feature:Idling] Unidling should handle many TCP connections fails in periodic-ci-openshift-release-master-ci-4.9-e2e-openstack-ovn\n2017050 - Dynamic Plugins: Shared modules loaded multiple times, breaking use of PatternFly\n2017130 - t is not a function error navigating to details page\n2017141 - Project dropdown has a dynamic inline width added which can cause min-width issue\n2017244 - ovirt csi operator static files creation is in the wrong order\n2017276 - [4.10] Volume mounts not created with the correct security context\n2017327 - When run opm index prune failed with 
error removing operator package cic-operator FOREIGN KEY constraint failed. \n2017427 - NTO does not restart TuneD daemon when profile application is taking too long\n2017535 - Broken Argo CD link image on GitOps Details Page\n2017547 - Siteconfig application sync fails with The AgentClusterInstall is invalid: spec.provisionRequirements.controlPlaneAgents: Required value when updating images references\n2017564 - On-prem prepender dispatcher script overwrites DNS search settings\n2017565 - CCMO does not handle additionalTrustBundle on Azure Stack\n2017566 - MetalLB: Web Console -Create Address pool form shows address pool name twice\n2017606 - [e2e][automation] add test to verify send key for VNC console\n2017650 - [OVN]EgressFirewall cannot be applied correctly if cluster has windows nodes\n2017656 - VM IP address is \"undefined\" under VM details -\u003e ssh field\n2017663 - SSH password authentication is disabled when public key is not supplied\n2017680 - [gcp] Couldn\u2019t enable support for instances with GPUs on GCP\n2017732 - [KMS] Prevent creation of encryption enabled storageclass without KMS connection set\n2017752 - (release-4.10) obfuscate identity provider attributes in collected authentication.operator.openshift.io resource\n2017756 - overlaySize setting on containerruntimeconfig is ignored due to cri-o defaults\n2017761 - [e2e][automation] dummy bug for 4.9 test dependency\n2017872 - Add Sprint 209 translations\n2017874 - The installer is incorrectly checking the quota for X instances instead of G and VT instances\n2017879 - Add Chinese translation for \"alternate\"\n2017882 - multus: add handling of pod UIDs passed from runtime\n2017909 - [ICNI 2.0] ovnkube-masters stop processing add/del events for pods\n2018042 - HorizontalPodAutoscaler CPU averageValue did not show up in HPA metrics GUI\n2018093 - Managed cluster should ensure control plane pods do not run in best-effort QoS\n2018094 - the tooltip length is limited\n2018152 - CNI pod is not 
restarted when It cannot start servers due to ports being used\n2018208 - e2e-metal-ipi-ovn-ipv6 are failing 75% of the time\n2018234 - user settings are saved in local storage instead of on cluster\n2018264 - Delete Export button doesn\u0027t work in topology sidebar (general issue with unknown CSV?)\n2018272 - Deployment managed by link and topology sidebar links to invalid resource page (at least for Exports)\n2018275 - Topology graph doesn\u0027t show context menu for Export CSV\n2018279 - Edit and Delete confirmation modals for managed resource should close when the managed resource is clicked\n2018380 - Migrate docs links to access.redhat.com\n2018413 - Error: context deadline exceeded, OCP 4.8.9\n2018428 - PVC is deleted along with VM even with \"Delete Disks\" unchecked\n2018445 - [e2e][automation] enhance tests for downstream\n2018446 - [e2e][automation] move tests to different level\n2018449 - [e2e][automation] add test about create/delete network attachment definition\n2018490 - [4.10] Image provisioning fails with file name too long\n2018495 - Fix typo in internationalization README\n2018542 - Kernel upgrade does not reconcile DaemonSet\n2018880 - Get \u0027No datapoints found.\u0027 when query metrics about alert rule KubeCPUQuotaOvercommit and KubeMemoryQuotaOvercommit\n2018884 - QE - Adapt crw-basic feature file to OCP 4.9/4.10 changes\n2018935 - go.sum not updated, that ART extracts version string from, WAS: Missing backport from 4.9 for Kube bump PR#950\n2018965 - e2e-metal-ipi-upgrade is permafailing in 4.10\n2018985 - The rootdisk size is 15Gi of windows VM in customize wizard\n2019001 - AWS: Operator degraded (CredentialsFailing): 1 of 6 credentials requests are failing to sync. 
\n2019096 - Update SRO leader election timeout to support SNO\n2019129 - SRO in operator hub points to wrong repo for README\n2019181 - Performance profile does not apply\n2019198 - ptp offset metrics are not named according to the log output\n2019219 - [IBMCLOUD]: cloud-provider-ibm missing IAM permissions in CCCMO CredentialRequest\n2019284 - Stop action should not in the action list while VMI is not running\n2019346 - zombie processes accumulation and Argument list too long\n2019360 - [RFE] Virtualization Overview page\n2019452 - Logger object in LSO appends to existing logger recursively\n2019591 - Operator install modal body that scrolls has incorrect padding causing shadow position to be incorrect\n2019634 - Pause and migration is enabled in action list for a user who has view only permission\n2019636 - Actions in VM tabs should be disabled when user has view only permission\n2019639 - \"Take snapshot\" should be disabled while VM image is still been importing\n2019645 - Create button is not removed on \"Virtual Machines\" page for view only user\n2019646 - Permission error should pop-up immediately while clicking \"Create VM\" button on template page for view only user\n2019647 - \"Remove favorite\" and \"Create new Template\" should be disabled in template action list for view only user\n2019717 - cant delete VM with un-owned pvc attached\n2019722 - The shared-resource-csi-driver-node pod runs as \u201cBestEffort\u201d qosClass\n2019739 - The shared-resource-csi-driver-node uses imagePullPolicy as \"Always\"\n2019744 - [RFE] Suggest users to download newest RHEL 8 version\n2019809 - [OVN][Upgrade] After upgrade to 4.7.34 ovnkube-master pods are in CrashLoopBackOff/ContainerCreating and other multiple issues at OVS/OVN level\n2019827 - Display issue with top-level menu items running demo plugin\n2019832 - 4.10 Nightlies blocked: Failed to upgrade authentication, operator was degraded\n2019886 - Kuryr unable to finish ports recovery upon controller 
restart\n2019948 - [RFE] Restructring Virtualization links\n2019972 - The Nodes section doesn\u0027t display the csr of the nodes that are trying to join the cluster\n2019977 - Installer doesn\u0027t validate region causing binary to hang with a 60 minute timeout\n2019986 - Dynamic demo plugin fails to build\n2019992 - instance:node_memory_utilisation:ratio metric is incorrect\n2020001 - Update dockerfile for demo dynamic plugin to reflect dir change\n2020003 - MCD does not regard \"dangling\" symlinks as a files, attempts to write through them on next backup, resulting in \"not writing through dangling symlink\" error and degradation. \n2020107 - cluster-version-operator: remove runlevel from CVO namespace\n2020153 - Creation of Windows high performance VM fails\n2020216 - installer: Azure storage container blob where is stored bootstrap.ign file shouldn\u0027t be public\n2020250 - Replacing deprecated ioutil\n2020257 - Dynamic plugin with multiple webpack compilation passes may fail to build\n2020275 - ClusterOperators link in console returns blank page during upgrades\n2020377 - permissions error while using tcpdump option with must-gather\n2020489 - coredns_dns metrics don\u0027t include the custom zone metrics data due to CoreDNS prometheus plugin is not defined\n2020498 - \"Show PromQL\" button is disabled\n2020625 - [AUTH-52] User fails to login from web console with keycloak OpenID IDP after enable group membership sync feature\n2020638 - [4.7] CI conformance test failures related to CustomResourcePublishOpenAPI\n2020664 - DOWN subports are not cleaned up\n2020904 - When trying to create a connection from the Developer view between VMs, it fails\n2021016 - \u0027Prometheus Stats\u0027 of dashboard \u0027Prometheus Overview\u0027 miss data on console compared with Grafana\n2021017 - 404 page not found error on knative eventing page\n2021031 - QE - Fix the topology CI scripts\n2021048 - [RFE] Added MAC Spoof check\n2021053 - Metallb operator presented as 
community operator\n2021067 - Extensive number of requests from storage version operator in cluster\n2021081 - Missing PolicyGenTemplate for configuring Local Storage Operator LocalVolumes\n2021135 - [azure-file-csi-driver] \"make unit-test\" returns non-zero code, but tests pass\n2021141 - Cluster should allow a fast rollout of kube-apiserver is failing on single node\n2021151 - Sometimes the DU node does not get the performance profile configuration applied and MachineConfigPool stays stuck in Updating\n2021152 - imagePullPolicy is \"Always\" for ptp operator images\n2021191 - Project admins should be able to list available network attachment defintions\n2021205 - Invalid URL in git import form causes validation to not happen on URL change\n2021322 - cluster-api-provider-azure should populate purchase plan information\n2021337 - Dynamic Plugins: ResourceLink doesn\u0027t render when passed a groupVersionKind\n2021364 - Installer requires invalid AWS permission s3:GetBucketReplication\n2021400 - Bump documentationBaseURL to 4.10\n2021405 - [e2e][automation] VM creation wizard Cloud Init editor\n2021433 - \"[sig-builds][Feature:Builds][pullsearch] docker build where the registry is not specified\" test fail permanently on disconnected\n2021466 - [e2e][automation] Windows guest tool mount\n2021544 - OCP 4.6.44 - Ingress VIP assigned as secondary IP in ovs-if-br-ex and added to resolv.conf as nameserver\n2021551 - Build is not recognizing the USER group from an s2i image\n2021607 - Unable to run openshift-install with a vcenter hostname that begins with a numeric character\n2021629 - api request counts for current hour are incorrect\n2021632 - [UI] Clicking on odf-operator breadcrumb from StorageCluster details page displays empty page\n2021693 - Modals assigned modal-lg class are no longer the correct width\n2021724 - Observe \u003e Dashboards: Graph lines are not visible when obscured by other lines\n2021731 - CCO occasionally down, reporting 
networksecurity.googleapis.com API as disabled\n2021936 - Kubelet version in RPMs should be using Dockerfile label instead of git tags\n2022050 - [BM][IPI] Failed during bootstrap - unable to read client-key /var/lib/kubelet/pki/kubelet-client-current.pem\n2022053 - dpdk application with vhost-net is not able to start\n2022114 - Console logging every proxy request\n2022144 - 1 of 3 ovnkube-master pods stuck in clbo after ipi bm deployment - dualstack (Intermittent)\n2022251 - wait interval in case of a failed upload due to 403 is unnecessarily long\n2022399 - MON_DISK_LOW troubleshooting guide link when clicked, gives 404 error . \n2022447 - ServiceAccount in manifests conflicts with OLM\n2022502 - Patternfly tables with a checkbox column are not displaying correctly because of conflicting css rules. \n2022509 - getOverrideForManifest does not check manifest.GVK.Group\n2022536 - WebScale: duplicate ecmp next hop error caused by multiple of the same gateway IPs in ovnkube cache\n2022612 - no namespace field for \"Kubernetes / Compute Resources / Namespace (Pods)\" admin console dashboard\n2022627 - Machine object not picking up external FIP added to an openstack vm\n2022646 - configure-ovs.sh failure - Error: unknown connection \u0027WARN:\u0027\n2022707 - Observe / monitoring dashboard shows forbidden errors on Dev Sandbox\n2022801 - Add Sprint 210 translations\n2022811 - Fix kubelet log rotation file handle leak\n2022812 - [SCALE] ovn-kube service controller executes unnecessary load balancer operations\n2022824 - Large number of sessions created by vmware-vsphere-csi-driver-operator during e2e tests\n2022880 - Pipeline renders with minor visual artifact with certain task dependencies\n2022886 - Incorrect URL in operator description\n2023042 - CRI-O filters custom runtime allowed annotation when both custom workload and custom runtime sections specified under the config\n2023060 - [e2e][automation] Windows VM with CDROM migration\n2023077 - [e2e][automation] Home 
Overview Virtualization status\n2023090 - [e2e][automation] Examples of Import URL for VM templates\n2023102 - [e2e][automation] Cloudinit disk of VM from custom template\n2023216 - ACL for a deleted egressfirewall still present on node join switch\n2023228 - Remove Tech preview badge on Trigger components 1.6 OSP on OCP 4.9\n2023238 - [sig-devex][Feature:ImageEcosystem][python][Slow] hot deploy for openshift python image Django example should work with hot deploy\n2023342 - SCC admission should take ephemeralContainers into account\n2023356 - Devfiles can\u0027t be loaded in Safari on macOS (403 - Forbidden)\n2023434 - Update Azure Machine Spec API to accept Marketplace Images\n2023500 - Latency experienced while waiting for volumes to attach to node\n2023522 - can\u0027t remove package from index: database is locked\n2023560 - \"Network Attachment Definitions\" has no project field on the top in the list view\n2023592 - [e2e][automation] add mac spoof check for nad\n2023604 - ACL violation when deleting a provisioning-configuration resource\n2023607 - console returns blank page when normal user without any projects visit Installed Operators page\n2023638 - Downgrade support level for extended control plane integration to Dev Preview\n2023657 - inconsistent behaviours of adding ssh key on rhel node between 4.9 and 4.10\n2023675 - Changing CNV Namespace\n2023779 - Fix Patch 104847 in 4.9\n2023781 - initial hardware devices is not loading in wizard\n2023832 - CCO updates lastTransitionTime for non-Status changes\n2023839 - Bump recommended FCOS to 34.20211031.3.0\n2023865 - Console css overrides prevent dynamic plug-in PatternFly tables from displaying correctly\n2023950 - make test-e2e-operator on kubernetes-nmstate results in failure to pull image from \"registry:5000\" repository\n2023985 - [4.10] OVN idle service cannot be accessed after upgrade from 4.8\n2024055 - External DNS added extra prefix for the TXT record\n2024108 - Occasionally node remains in 
SchedulingDisabled state even after update has been completed sucessfully\n2024190 - e2e-metal UPI is permafailing with inability to find rhcos.json\n2024199 - 400 Bad Request error for some queries for the non admin user\n2024220 - Cluster monitoring checkbox flickers when installing Operator in all-namespace mode\n2024262 - Sample catalog is not displayed when one API call to the backend fails\n2024309 - cluster-etcd-operator: defrag controller needs to provide proper observability\n2024316 - modal about support displays wrong annotation\n2024328 - [oVirt / RHV] PV disks are lost when machine deleted while node is disconnected\n2024399 - Extra space is in the translated text of \"Add/Remove alternate service\" on Create Route page\n2024448 - When ssh_authorized_keys is empty in form view it should not appear in yaml view\n2024493 - Observe \u003e Alerting \u003e Alerting rules page throws error trying to destructure undefined\n2024515 - test-blocker: Ceph-storage-plugin tests failing\n2024535 - hotplug disk missing OwnerReference\n2024537 - WINDOWS_IMAGE_LINK does not refer to windows cloud image\n2024547 - Detail page is breaking for namespace store , backing store and bucket class. 
\n2024551 - KMS resources not getting created for IBM FlashSystem storage\n2024586 - Special Resource Operator(SRO) - Empty image in BuildConfig when using RT kernel\n2024613 - pod-identity-webhook starts without tls\n2024617 - vSphere CSI tests constantly failing with Rollout of the monitoring stack failed and is degraded\n2024665 - Bindable services are not shown on topology\n2024731 - linuxptp container: unnecessary checking of interfaces\n2024750 - i18n some remaining OLM items\n2024804 - gcp-pd-csi-driver does not use trusted-ca-bundle when cluster proxy configured\n2024826 - [RHOS/IPI] Masters are not joining a clusters when installing on OpenStack\n2024841 - test Keycloak with latest tag\n2024859 - Not able to deploy an existing image from private image registry using developer console\n2024880 - Egress IP breaks when network policies are applied\n2024900 - Operator upgrade kube-apiserver\n2024932 - console throws \"Unauthorized\" error after logging out\n2024933 - openshift-sync plugin does not sync existing secrets/configMaps on start up\n2025093 - Installer does not honour diskformat specified in storage policy and defaults to zeroedthick\n2025230 - ClusterAutoscalerUnschedulablePods should not be a warning\n2025266 - CreateResource route has exact prop which need to be removed\n2025301 - [e2e][automation] VM actions availability in different VM states\n2025304 - overwrite storage section of the DV spec instead of the pvc section\n2025431 - [RFE]Provide specific windows source link\n2025458 - [IPI-AWS] cluster-baremetal-operator pod in a crashloop state after patching from 4.7.21 to 4.7.36\n2025464 - [aws] openshift-install gather bootstrap collects logs for bootstrap and only one master node\n2025467 - [OVN-K][ETP=local] Host to service backed by ovn pods doesn\u0027t work for ExternalTrafficPolicy=local\n2025481 - Update VM Snapshots UI\n2025488 - [DOCS] Update the doc for nmstate operator installation\n2025592 - ODC 4.9 supports invalid devfiles 
only
2025765 - It should not try to load from storageProfile after unchecking"Apply optimized StorageProfile settings"
2025767 - VMs orphaned during machineset scaleup
2025770 - [e2e] non-priv seems looking for v2v-vmware configMap in ns "kubevirt-hyperconverged" while using customize wizard
2025788 - [IPI on azure]Pre-check on IPI Azure, should check VM Size’s vCPUsAvailable instead of vCPUs for the sku.
2025821 - Make "Network Attachment Definitions" available to regular user
2025823 - The console nav bar ignores plugin separator in existing sections
2025830 - CentOS capitalizaion is wrong
2025837 - Warn users that the RHEL URL expire
2025884 - External CCM deploys openstack-cloud-controller-manager from quay.io/openshift/origin-*
2025903 - [UI] RoleBindings tab doesn't show correct rolebindings
2026104 - [sig-imageregistry][Feature:ImageAppend] Image append should create images by appending them [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2026178 - OpenShift Alerting Rules Style-Guide Compliance
2026209 - Updation of task is getting failed (tekton hub integration)
2026223 - Internal error occurred: failed calling webhook "ptpconfigvalidationwebhook.openshift.io"
2026321 - [UPI on Azure] Shall we remove allowedValue about VMSize in ARM templates
2026343 - [upgrade from 4.5 to 4.6] .status.connectionState.address of catsrc community-operators is not correct
2026352 - Kube-Scheduler revision-pruner fail during install of new cluster
2026374 - aws-pod-identity-webhook go.mod version out of sync with build environment
2026383 - Error when rendering custom Grafana dashboard through ConfigMap
2026387 - node tuning operator metrics endpoint serving old certificates after certificate rotation
2026396 - Cachito Issues: sriov-network-operator Image build failure
2026488 - openshift-controller-manager - delete event is repeating pathologically
2026489 - ThanosRuleRuleEvaluationLatencyHigh alerts when a big quantity of alerts defined.
2026560 - Cluster-version operator does not remove unrecognized volume mounts
2026699 - fixed a bug with missing metadata
2026813 - add Mellanox CX-6 Lx DeviceID 101f NIC support in SR-IOV Operator
2026898 - Description/details are missing for Local Storage Operator
2027132 - Use the specific icon for Fedora and CentOS template
2027238 - "Node Exporter / USE Method / Cluster" CPU utilization graph shows incorrect legend
2027272 - KubeMemoryOvercommit alert should be human readable
2027281 - [Azure] External-DNS cannot find the private DNS zone in the resource group
2027288 - Devfile samples can't be loaded after fixing it on Safari (redirect caching issue)
2027299 - The status of checkbox component is not revealed correctly in code
2027311 - K8s watch hooks do not work when fetching core resources
2027342 - Alert ClusterVersionOperatorDown is firing on OpenShift Container Platform after ca certificate rotation
2027363 - The azure-file-csi-driver and azure-file-csi-driver-operator don't use the downstream images
2027387 - [IBMCLOUD] Terraform ibmcloud-provider buffers entirely the qcow2 image causing spikes of 5GB of RAM during installation
2027498 - [IBMCloud] SG Name character length limitation
2027501 - [4.10] Bootimage bump tracker
2027524 - Delete Application doesn't delete Channels or Brokers
2027563 - e2e/add-flow-ci.feature fix accessibility violations
2027585 - CVO crashes when changing spec.upstream to a cincinnati graph which includes invalid conditional edges
2027629 - Gather ValidatingWebhookConfiguration and MutatingWebhookConfiguration resource definitions
2027685 - openshift-cluster-csi-drivers pods crashing on PSI
2027745 - default samplesRegistry prevents the creation of imagestreams when registrySources.allowedRegistries is enforced
2027824 - ovnkube-master CrashLoopBackoff: panic: Expected slice or struct but got string
2027917 - No settings in hostfirmwaresettings and schema objects for masters
2027927 - sandbox creation fails due to obsolete option in /etc/containers/storage.conf
2027982 - nncp stucked at ConfigurationProgressing
2028019 - Max pending serving CSRs allowed in cluster machine approver is not right for UPI clusters
2028024 - After deleting a SpecialResource, the node is still tagged although the driver is removed
2028030 - Panic detected in cluster-image-registry-operator pod
2028042 - Desktop viewer for Windows VM shows "no Service for the RDP (Remote Desktop Protocol) can be found"
2028054 - Cloud controller manager operator can't get leader lease when upgrading from 4.8 up to 4.9
2028106 - [RFE] Use dynamic plugin actions for kubevirt plugin
2028141 - Console tests doesn't pass on Node.js 15 and 16
2028160 - Remove i18nKey in network-policy-peer-selectors.tsx
2028162 - Add Sprint 210 translations
2028170 - Remove leading and trailing whitespace
2028174 - Add Sprint 210 part 2 translations
2028187 - Console build doesn't pass on Node.js 16 because node-sass doesn't support it
2028217 - Cluster-version operator does not default Deployment replicas to one
2028240 - Multiple CatalogSources causing higher CPU use than necessary
2028268 - Password parameters are listed in FirmwareSchema in spite that cannot and shouldn't be set in HostFirmwareSettings
2028325 - disableDrain should be set automatically on SNO
2028484 - AWS EBS CSI driver's livenessprobe does not respect operator's loglevel
2028531 - Missing netFilter to the list of parameters when platform is OpenStack
2028610 - Installer doesn't retry on GCP rate limiting
2028685 - LSO repeatedly reports errors while diskmaker-discovery pod is starting
2028695 - destroy cluster does not prune bootstrap instance profile
2028731 - The containerruntimeconfig controller has wrong assumption regarding the number of containerruntimeconfigs
2028802 - CRI-O panic due to invalid memory address or nil pointer dereference
2028816 - VLAN IDs not released on failures
2028881 - Override not working for the PerformanceProfile template
2028885 - Console should show an error context if it logs an error object
2028949 - Masthead dropdown item hover text color is incorrect
2028963 - Whereabouts should reconcile stranded IP addresses
2029034 - enabling ExternalCloudProvider leads to inoperative cluster
2029178 - Create VM with wizard - page is not displayed
2029181 - Missing CR from PGT
2029273 - wizard is not able to use if project field is "All Projects"
2029369 - Cypress tests github rate limit errors
2029371 - patch pipeline--worker nodes unexpectedly reboot during scale out
2029394 - missing empty text for hardware devices at wizard review
2029414 - Alibaba Disk snapshots with XFS filesystem cannot be used
2029416 - Alibaba Disk CSI driver does not use credentials provided by CCO / ccoctl
2029521 - EFS CSI driver cannot delete volumes under load
2029570 - Azure Stack Hub: CSI Driver does not use user-ca-bundle
2029579 - Clicking on an Application which has a Helm Release in it causes an error
2029644 - New resource FirmwareSchema - reset_required exists for Dell machines and doesn't for HPE
2029645 - Sync upstream 1.15.0 downstream
2029671 - VM action "pause" and "clone" should be disabled while VM disk is still being importing
2029742 - [ovn] Stale lr-policy-list and snat rules left for egressip
2029750 - cvo keep restart due to it fail to get feature gate value during the initial start stage
2029785 - CVO panic when an edge is included in both edges and conditionaledges
2029843 - Downstream ztp-site-generate-rhel8 4.10 container image missing content(/home/ztp)
2030003 - HFS CRD: Attempt to set Integer parameter to not-numeric string value - no error
2030029 - [4.10][goroutine]Namespace stuck terminating: Failed to delete all resource types, 1 remaining: unexpected items still remain in namespace
2030228 - Fix StorageSpec resources field to use correct API
2030229 - Mirroring status card reflect wrong data
2030240 - Hide overview page for non-privileged user
2030305 - Export App job do not completes
2030347 - kube-state-metrics exposes metrics about resource annotations
2030364 - Shared resource CSI driver monitoring is not setup correctly
2030488 - Numerous Azure CI jobs are Failing with Partially Rendered machinesets
2030534 - Node selector/tolerations rules are evaluated too early
2030539 - Prometheus is not highly available
2030556 - Don't display Description or Message fields for alerting rules if those annotations are missing
2030568 - Operator installation fails to parse operatorframework.io/initialization-resource annotation
2030574 - console service uses older "service.alpha.openshift.io" for the service serving certificates.
2030677 - BOND CNI: There is no option to configure MTU on a Bond interface
2030692 - NPE in PipelineJobListener.upsertWorkflowJob
2030801 - CVE-2021-44716 golang: net/http: limit growth of header canonicalization cache
2030806 - CVE-2021-44717 golang: syscall: don't close fd 0 on ForkExec error
2030847 - PerformanceProfile API version should be v2
2030961 - Customizing the OAuth server URL does not apply to upgraded cluster
2031006 - Application name input field is not autofocused when user selects "Create application"
2031012 - Services of type loadbalancer do not work if the traffic reaches the node from an interface different from br-ex
2031040 - Error screen when open topology sidebar for a Serverless / knative service which couldn't be started
2031049 - [vsphere upi] pod machine-config-operator cannot be started due to panic issue
2031057 - Topology sidebar for Knative services shows a small pod ring with "0 undefined" as tooltip
2031060 - Failing CSR Unit test due to expired test certificate
2031085 - ovs-vswitchd running more threads than expected
2031141 - Some pods not able to reach k8s api svc IP 198.223.0.1
2031228 - CVE-2021-43813 grafana: directory traversal vulnerability
2031502 - [RFE] New common templates crash the ui
2031685 - Duplicated forward upstreams should be removed from the dns operator
2031699 - The displayed ipv6 address of a dns upstream should be case sensitive
2031797 - [RFE] Order and text of Boot source type input are wrong
2031826 - CI tests needed to confirm driver-toolkit image contents
2031831 - OCP Console - Global CSS overrides affecting dynamic plugins
2031839 - Starting from Go 1.17 invalid certificates will render a cluster dysfunctional
2031858 - GCP beta-level Role (was: CCO occasionally down, reporting networksecurity.googleapis.com API as disabled)
2031875 - [RFE]: Provide online documentation for the SRO CRD (via oc explain)
2031926 - [ipv6dualstack] After SVC conversion from single stack only to RequireDualStack, cannot curl NodePort from the node itself
2032006 - openshift-gitops-application-controller-0 failed to schedule with sufficient node allocatable resource
2032111 - arm64 cluster, create project and deploy the example deployment, pod is CrashLoopBackOff due to the image is built on linux+amd64
2032141 - open the alertrule link in new tab, got empty page
2032179 - [PROXY] external dns pod cannot reach to cloud API in the cluster behind a proxy
2032296 - Cannot create machine with ephemeral disk on Azure
2032407 - UI will show the default openshift template wizard for HANA template
2032415 - Templates page - remove "support level" badge and add "support level" column which should not be hard coded
2032421 - [RFE] UI integration with automatic updated images
2032516 - Not able to import git repo with .devfile.yaml
2032521 - openshift-installer intermittent failure on AWS with "Error: Provider produced inconsistent result after apply" when creating the aws_vpc_dhcp_options_association resource
2032547 - hardware devices table have filter when table is empty
2032565 - Deploying compressed files with a MachineConfig resource degrades the MachineConfigPool
2032566 - Cluster-ingress-router does not support Azure Stack
2032573 - Adopting enforces deploy_kernel/ramdisk which does not work with deploy_iso
2032589 - DeploymentConfigs ignore resolve-names annotation
2032732 - Fix styling conflicts due to recent console-wide CSS changes
2032831 - Knative Services and Revisions are not shown when Service has no ownerReference
2032851 - Networking is "not available" in Virtualization Overview
2032926 - Machine API components should use K8s 1.23 dependencies
2032994 - AddressPool IP is not allocated to service external IP wtih aggregationLength 24
2032998 - Can not achieve 250 pods/node with OVNKubernetes in a multiple worker node cluster
2033013 - Project dropdown in user preferences page is broken
2033044 - Unable to change import strategy if devfile is invalid
2033098 - Conjunction in ProgressiveListFooter.tsx is not translatable
2033111 - IBM VPC operator library bump removed global CLI args
2033138 - "No model registered for Templates" shows on customize wizard
2033215 - Flaky CI: crud/other-routes.spec.ts fails sometimes with an cypress ace/a11y AssertionError: 1 accessibility violation was detected
2033239 - [IPI on Alibabacloud] 'openshift-install' gets the wrong region (‘cn-hangzhou’) selected
2033257 - unable to use configmap for helm charts
2033271 - [IPI on Alibabacloud] destroying cluster succeeded, but the resource group deletion wasn’t triggered
2033290 - Product builds for console are failing
2033382 - MAPO is missing machine annotations
2033391 - csi-driver-shared-resource-operator sets unused CVO-manifest annotations
2033403 - Devfile catalog does not show provider information
2033404 - Cloud event schema is missing source type and resource field is using wrong value
2033407 - Secure route data is not pre-filled in edit flow form
2033422 - CNO not allowing LGW conversion from SGW in runtime
2033434 - Offer darwin/arm64 oc in clidownloads
2033489 - CCM operator failing on baremetal platform
2033518 - [aws-efs-csi-driver]Should not accept invalid FSType in sc for AWS EFS driver
2033524 - [IPI on Alibabacloud] interactive installer cannot list existing base domains
2033536 - [IPI on Alibabacloud] bootstrap complains invalid value for alibabaCloud.resourceGroupID when updating "cluster-infrastructure-02-config.yml" status, which leads to bootstrap failed and all master nodes NotReady
2033538 - Gather Cost Management Metrics Custom Resource
2033579 - SRO cannot update the special-resource-lifecycle ConfigMap if the data field is undefined
2033587 - Flaky CI test project-dashboard.scenario.ts: Resource Quotas Card was not found on project detail page
2033634 - list-style-type: disc is applied to the modal dropdowns
2033720 - Update samples in 4.10
2033728 - Bump OVS to 2.16.0-33
2033729 - remove runtime request timeout restriction for azure
2033745 - Cluster-version operator makes upstream update service / Cincinnati requests more frequently than intended
2033749 - Azure Stack Terraform fails without Local Provider
2033750 - Local volume should pull multi-arch image for kube-rbac-proxy
2033751 - Bump kubernetes to 1.23
2033752 - make verify fails due to missing yaml-patch
2033784 - set kube-apiserver degraded=true if webhook matches a virtual resource
2034004 - [e2e][automation] add tests for VM snapshot improvements
2034068 - [e2e][automation] Enhance tests for 4.10 downstream
2034087 - [OVN] EgressIP was assigned to the node which is not egress node anymore
2034097 - [OVN] After edit EgressIP object, the status is not correct
2034102 - [OVN] Recreate the deleted EgressIP object got InvalidEgressIP warning
2034129 - blank page returned when clicking 'Get started' button
2034144 - [OVN AWS] ovn-kube egress IP monitoring cannot detect the failure on ovn-k8s-mp0
2034153 - CNO does not verify MTU migration for OpenShiftSDN
2034155 - [OVN-K] [Multiple External Gateways] Per pod SNAT is disabled
2034170 - Use function.knative.dev for Knative Functions related labels
2034190 - unable to add new VirtIO disks to VMs
2034192 - Prometheus fails to insert reporting metrics when the sample limit is met
2034243 - regular user cant load template list
2034245 - installing a cluster on aws, gcp always fails with "Error: Incompatible provider version"
2034248 - GPU/Host device modal is too small
2034257 - regular user `Create VM` missing permissions alert
2034285 - [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources [Serial] [Suite:openshift/conformance/serial]
2034287 - do not block upgrades if we can't create storageclass in 4.10 in vsphere
2034300 - Du validator policy is NonCompliant after DU configuration completed
2034319 - Negation constraint is not validating packages
2034322 - CNO doesn't pick up settings required when ExternalControlPlane topology
2034350 - The CNO should implement the Whereabouts IP reconciliation cron job
2034362 - update description of disk interface
2034398 - The Whereabouts IPPools CRD should include the podref field
2034409 - Default CatalogSources should be pointing to 4.10 index images
2034410 - Metallb BGP, BFD: prometheus is not scraping the frr metrics
2034413 - cloud-network-config-controller fails to init with secret "cloud-credentials" not found in manual credential mode
2034460 - Summary: cloud-network-config-controller does not account for different environment
2034474 - Template's boot source is "Unknown source" before and after set enableCommonBootImageImport to true
2034477 - [OVN] Multiple EgressIP objects configured, EgressIPs weren't working properly
2034493 - Change cluster version operator log level
2034513 - [OVN] After update one EgressIP in EgressIP object, one internal IP lost from lr-policy-list
2034527 - IPI deployment fails 'timeout reached while inspecting the node' when provisioning network ipv6
2034528 - [IBM VPC] volumeBindingMode should be WaitForFirstConsumer
2034534 - Update ose-machine-api-provider-openstack images to be consistent with ART
2034537 - Update team
2034559 - KubeAPIErrorBudgetBurn firing outside recommended latency thresholds
2034563 - [Azure] create machine with wrong ephemeralStorageLocation value success
2034577 - Current OVN gateway mode should be reflected on node annotation as well
2034621 - context menu not popping up for application group
2034622 - Allow volume expansion by default in vsphere CSI storageclass 4.10
2034624 - Warn about unsupported CSI driver in vsphere operator
2034647 - missing volumes list in snapshot modal
2034648 - Rebase openshift-controller-manager to 1.23
2034650 - Rebase openshift/builder to 1.23
2034705 - vSphere: storage e2e tests logging configuration data
2034743 - EgressIP: assigning the same egress IP to a second EgressIP object after a ovnkube-master restart does not fail.
2034766 - Special Resource Operator(SRO) - no cert-manager pod created in dual stack environment
2034785 - ptpconfig with summary_interval cannot be applied
2034823 - RHEL9 should be starred in template list
2034838 - An external router can inject routes if no service is added
2034839 - Jenkins sync plugin does not synchronize ConfigMap having label role=jenkins-agent
2034879 - Lifecycle hook's name and owner shouldn't be allowed to be empty
2034881 - Cloud providers components should use K8s 1.23 dependencies
2034884 - ART cannot build the image because it tries to download controller-gen
2034889 - `oc adm prune deployments` does not work
2034898 - Regression in recently added Events feature
2034957 - update openshift-apiserver to kube 1.23.1
2035015 - ClusterLogForwarding CR remains stuck remediating forever
2035093 - openshift-cloud-network-config-controller never runs on Hypershift cluster
2035141 - [RFE] Show GPU/Host devices in template's details tab
2035146 - "kubevirt-plugin~PVC cannot be empty" shows on add-disk modal while adding existing PVC
2035167 - [cloud-network-config-controller] unable to deleted cloudprivateipconfig when deleting
2035199 - IPv6 support in mtu-migration-dispatcher.yaml
2035239 - e2e-metal-ipi-virtualmedia tests are permanently failing
2035250 - Peering with ebgp peer over multi-hops doesn't work
2035264 - [RFE] Provide a proper message for nonpriv user who not able to add PCI devices
2035315 - invalid test cases for AWS passthrough mode
2035318 - Upgrade management workflow needs to allow custom upgrade graph path for disconnected env
2035321 - Add Sprint 211 translations
2035326 - [ExternalCloudProvider] installation with additional network on workers fails
2035328 - Ccoctl does not ignore credentials request manifest marked for deletion
2035333 - Kuryr orphans ports on 504 errors from Neutron
2035348 - Fix two grammar issues in kubevirt-plugin.json strings
2035393 - oc set data --dry-run=server makes persistent changes to configmaps and secrets
2035409 - OLM E2E test depends on operator package that's no longer published
2035439 - SDN Automatic assignment EgressIP on GCP returned node IP adress not egressIP address
2035453 - [IPI on Alibabacloud] 2 worker machines stuck in Failed phase due to connection to 'ecs-cn-hangzhou.aliyuncs.com' timeout, although the specified region is 'us-east-1'
2035454 - [IPI on Alibabacloud] the OSS bucket created during installation for image registry is not deleted after destroying the cluster
2035467 - UI: Queried metrics can't be ordered on Oberve->Metrics page
2035494 - [SDN Migration]ovnkube-node pods CrashLoopBackOff after sdn migrated to ovn for RHEL workers
2035515 - [IBMCLOUD] allowVolumeExpansion should be true in storage class
2035602 - [e2e][automation] add tests for Virtualization Overview page cards
2035703 - Roles -> RoleBindings tab doesn't show RoleBindings correctly
2035704 - RoleBindings list page filter doesn't apply
2035705 - Azure 'Destroy cluster' get stuck when the cluster resource group is already not existing.
2035757 - [IPI on Alibabacloud] one master node turned NotReady which leads to installation failed
2035772 - AccessMode and VolumeMode is not reserved for customize wizard
2035847 - Two dashes in the Cronjob / Job pod name
2035859 - the output of opm render doesn't contain olm.constraint which is defined in dependencies.yaml
2035882 - [BIOS setting values] Create events for all invalid settings in spec
2035903 - One redundant capi-operator credential requests in “oc adm extract --credentials-requests”
2035910 - [UI] Manual approval options are missing after ODF 4.10 installation starts when Manual Update approval is chosen
2035927 - Cannot enable HighNodeUtilization scheduler profile
2035933 - volume mode and access mode are empty in customize wizard review tab
2035969 - "ip a " shows "Error: Peer netns reference is invalid" after create test pods
2035986 - Some pods under kube-scheduler/kube-controller-manager are using the deprecated annotation
2036006 - [BIOS setting values] Attempt to set Integer parameter results in preparation error
2036029 - New added cloud-network-config operator doesn’t supported aws sts format credential
2036096 - [azure-file-csi-driver] there are no e2e tests for NFS backend
2036113 - cluster scaling new nodes ovs-configuration fails on all new nodes
2036567 - [csi-driver-nfs] Upstream merge: Bump k8s libraries to 1.23
2036569 - [cloud-provider-openstack] Upstream merge: Bump k8s libraries to 1.23
2036577 - OCP 4.10 nightly builds from 4.10.0-0.nightly-s390x-2021-12-18-034912 to 4.10.0-0.nightly-s390x-2022-01-11-233015 fail to upgrade from OCP 4.9.11 and 4.9.12 for network type OVNKubernetes for zVM hypervisor environments
2036622 - sdn-controller crashes when restarted while a previous egress IP assignment exists
2036717 - Valid AlertmanagerConfig custom resource with valid a mute time interval definition is rejected
2036826 - `oc adm prune deployments` can prune the RC/RS
2036827 - The ccoctl still accepts CredentialsRequests without ServiceAccounts on GCP platform
2036861 - kube-apiserver is degraded while enable multitenant
2036937 - Command line tools page shows wrong download ODO link
2036940 - oc registry login fails if the file is empty or stdout
2036951 - [cluster-csi-snapshot-controller-operator] proxy settings is being injected in container
2036989 - Route URL copy to clipboard button wraps to a separate line by itself
2036990 - ZTP "DU Done inform policy" never becomes compliant on multi-node clusters
2036993 - Machine API components should use Go lang version 1.17
2037036 - The tuned profile goes into degraded status and ksm.service is displayed in the log.
2037061 - aws and gcp CredentialsRequest manifests missing ServiceAccountNames list for cluster-api
2037073 - Alertmanager container fails to start because of startup probe never being successful
2037075 - Builds do not support CSI volumes
2037167 - Some log level in ibm-vpc-block-csi-controller are hard code
2037168 - IBM-specific Deployment manifest for package-server-manager should be excluded on non-IBM cluster-profiles
2037182 - PingSource badge color is not matched with knativeEventing color
2037203 - "Running VMs" card is too small in Virtualization Overview
2037209 - [IPI on Alibabacloud] worker nodes are put in the default resource group unexpectedly
2037237 - Add "This is a CD-ROM boot source" to customize wizard
2037241 - default TTL for noobaa cache buckets should be 0
2037246 - Cannot customize auto-update boot source
2037276 - [IBMCLOUD] vpc-node-label-updater may fail to label nodes appropriately
2037288 - Remove stale image reference
2037331 - Ensure the ccoctl behaviors are similar between aws and gcp on the existing resources
2037483 - Rbacs for Pods within the CBO should be more restrictive
2037484 - Bump dependencies to k8s 1.23
2037554 - Mismatched wave number error message should include the wave numbers that are in conflict
2037622 - [4.10-Alibaba CSI driver][Restore size for volumesnapshot/volumesnapshotcontent is showing as 0 in Snapshot feature for Alibaba platform]
2037635 - impossible to configure custom certs for default console route in ingress config
2037637 - configure custom certificate for default console route doesn't take effect for OCP >= 4.8
2037638 - Builds do not support CSI volumes as volume sources
2037664 - text formatting issue in Installed Operators list table
2037680 - [IPI on Alibabacloud] sometimes operator 'cloud-controller-manager' tells empty VERSION, due to conflicts on listening tcp :8080
2037689 - [IPI on Alibabacloud] sometimes operator 'cloud-controller-manager' tells empty VERSION, due to conflicts on listening tcp :8080
2037801 - Serverless installation is failing on CI jobs for e2e tests
2037813 - Metal Day 1 Networking - networkConfig Field Only Accepts String Format
2037856 - use lease for leader election
2037891 - 403 Forbidden error shows for all the graphs in each grafana dashboard after upgrade from 4.9 to 4.10
2037903 - Alibaba Cloud: delete-ram-user requires the credentials-requests
2037904 - upgrade operator deployment failed due to memory limit too low for manager container
2038021 - [4.10-Alibaba CSI driver][Default volumesnapshot class is not added/present after successful cluster installation]
2038034 - non-privileged user cannot see auto-update boot source
2038053 - Bump dependencies to k8s 1.23
2038088 - Remove ipa-downloader references
2038160 - The `default` project missed the annotation : openshift.io/node-selector: ""
2038166 - Starting from Go 1.17 invalid certificates will render a cluster non-functional
2038196 - must-gather is missing collecting some metal3 resources
2038240 - Error when configuring a file using permissions bigger than decimal 511 (octal 0777)
2038253 - Validator Policies are long lived
2038272 - Failures to build a PreprovisioningImage are not reported
2038384 - Azure Default Instance Types are Incorrect
2038389 - Failing test: [sig-arch] events should not repeat pathologically
2038412 - Import page calls the git file list unnecessarily twice from GitHub/GitLab/Bitbucket
2038465 - Upgrade chromedriver to 90.x to support Mac M1 chips
2038481 - kube-controller-manager-guard and openshift-kube-scheduler-guard pods being deleted and restarted on a cordoned node when drained
2038596 - Auto egressIP for OVN cluster on GCP: After egressIP object is deleted, egressIP still takes effect
2038663 - update kubevirt-plugin OWNERS
2038691 - [AUTH-8] Panic on user login when the user belongs to a group in the IdP side and the group already exists via "oc adm groups new"
2038705 - Update ptp reviewers
2038761 - Open Observe->Targets page, wait for a while, page become blank
2038768 - All the filters on the Observe->Targets page can't work
2038772 - Some monitors failed to display on Observe->Targets page
2038793 - [SDN EgressIP] After reboot egress node, the egressip was lost from egress node
2038827 - should add user containers in /etc/subuid and /etc/subgid to support run pods in user namespaces
2038832 - New templates for centos stream8 are missing registry suggestions in create vm wizard
2038840 - [SDN EgressIP]cloud-network-config-controller pod was CrashLoopBackOff after some operation
2038864 - E2E tests fail because multi-hop-net was not created
2038879 - All Builds are getting listed in DeploymentConfig under workloads on OpenShift Console
2038934 - CSI driver operators should use the trusted CA bundle when cluster proxy is configured
2038968 - Move feature gates from a carry patch to openshift/api
2039056 - Layout issue with breadcrumbs on API explorer page
2039057 - Kind column is not wide enough in API explorer page
2039064 - Bulk Import e2e test flaking at a high rate
2039065 - Diagnose and fix Bulk Import e2e test that was previously disabled
2039085 - Cloud credential operator configuration failing to apply in hypershift/ROKS clusters
2039099 - [OVN EgressIP GCP] After reboot egress node, egressip that was previously assigned got lost
2039109 - [FJ OCP4.10 Bug]: startironic.sh failed to pull the image of image-customization container when behind a proxy
2039119 - CVO hotloops on Service openshift-monitoring/cluster-monitoring-operator
2039170 - [upgrade]Error shown on registry operator "missing the cloud-provider-config configmap" after upgrade
2039227 - Improve image customization server parameter passing during installation
2039241 - Improve image customization server parameter passing during installation
2039244 - Helm Release revision history page crashes the UI
2039294 - SDN controller metrics cannot be consumed correctly by prometheus
2039311 - oc Does Not Describe Build CSI Volumes
2039315 - Helm release list page should only fetch secrets for deployed charts
2039321 - SDN controller metrics are not being consumed by prometheus
2039330 - Create NMState button doesn't work in OperatorHub web console
2039339 - cluster-ingress-operator should report Unupgradeable if user has modified the aws resources annotations
2039345 - CNO does not verify the minimum MTU value for IPv6/dual-stack clusters.
2039359 - `oc adm prune deployments` can't prune the RS where the associated Deployment no longer exists
2039382 - gather_metallb_logs does not have execution permission
2039406 - logout from rest session after vsphere operator sync is finished
2039408 - Add GCP region northamerica-northeast2 to allowed regions
2039414 - Cannot see the weights increased for NodeAffinity, InterPodAffinity, TaintandToleration
2039425 - No need to set KlusterletAddonConfig CR applicationManager->enabled: true in RAN ztp deployment
2039491 - oc - git:// protocol used in unit tests
2039516 - Bump OVN to ovn21.12-21.12.0-25
2039529 - Project Dashboard Resource Quotas Card empty state test flaking at a high rate
2039534 - Diagnose and fix Project Dashboard Resource Quotas Card test that was previously disabled
2039541 - Resolv-prepender script duplicating entries
2039586 - [e2e] update centos8 to centos stream8
2039618 - VM created from SAP HANA template leads to 404 page if leave one network parameter empty
2039619 - [AWS] In tree provisioner storageclass aws disk type should contain 'gp3' and csi provisioner storageclass default aws disk type should be 'gp3'
2039670 - Create PDBs for control plane components
2039678 - Page goes blank when create image pull secret
2039689 - [IPI on Alibabacloud] Pay-by-specification NAT is no longer supported
2039743 - React missing key warning when open operator hub detail page (and maybe others as well)
2039756 - React missing key warning when open KnativeServing details
2039770 - Observe dashboard doesn't react on time-range changes after browser reload when perspective is changed in another tab
2039776 - Observe dashboard shows nothing if the URL links to an non existing dashboard
2039781 - [GSS] OBC is not visible by admin of a Project on Console
2039798 - Contextual binding with Operator backed service creates visual connector instead of Service binding connector
2039868 - Insights Advisor widget is not in the disabled state when the Insights Operator is disabled
2039880 - Log level too low for control plane metrics
2039919 - Add E2E test for router compression feature
2039981 - ZTP for standard clusters installs stalld on master nodes
2040132 - Flag --port has been deprecated, This flag has no effect now and will be removed in v1.24. You can use --secure-port instead
2040136 - external-dns-operator pod keeps restarting and reports error: timed out waiting for cache to be synced
2040143 - [IPI on Alibabacloud] suggest to remove region "cn-nanjing" or provide better error message
2040150 - Update ConfigMap keys for IBM HPCS
2040160 - [IPI on Alibabacloud] installation fails when region does not support pay-by-bandwidth
2040285 - Bump build-machinery-go for console-operator to pickup change in yaml-patch repository
2040357 - bump OVN to ovn-2021-21.12.0-11.el8fdp
2040376 - "unknown instance type" error for supported m6i.xlarge instance
2040394 - Controller: enqueue the failed configmap till services update
2040467 - Cannot build ztp-site-generator container image
2040504 - Change AWS EBS GP3 IOPS in MachineSet doesn't take affect in OpenShift 4
2040521 - RouterCertsDegraded certificate could not validate route hostname v4-0-config-system-custom-router-certs.apps
2040535 - Auto-update boot source is not available in customize wizard
2040540 - ovs hardware offload: ovsargs format error when adding vf netdev name
2040603 - rhel worker scaleup playbook failed because missing some dependency of podman
2040616 - rolebindings page doesn't load for normal users
2040620 - [MAPO] Error pulling MAPO image on installation
2040653 - Topology sidebar warns that another component is updated while rendering
2040655 - User settings update fails when selecting application in topology sidebar
2040661 - Different react warnings about updating state on unmounted components when leaving topology
2040670 - Permafailing CI job: periodic-ci-openshift-release-master-nightly-4.10-e2e-gcp-libvirt-cert-rotation
2040671 - [Feature:IPv6DualStack] most tests are failing in dualstack ipi
2040694 - Three upstream HTTPClientConfig struct fields missing in the operator
2040705 - Du policy for standard cluster runs the PTP daemon on masters and workers
2040710 - cluster-baremetal-operator cannot update BMC subscription CR
2040741 - Add CI test(s) to ensure that metal3 components are deployed in vSphere, OpenStack and None platforms
2040782 - Import YAML page blocks input with more then one generateName attribute
2040783 - The Import from YAML summary page doesn't show the resource name if created via generateName attribute
2040791 - Default PGT policies must be 'inform' to integrate with the Lifecycle Operator
2040793 - Fix snapshot e2e failures
2040880 - do not block upgrades if we can't connect to vcenter
2041087 - MetalLB: MetalLB CR is not upgraded automatically from 4.9 to 4.10
2041093 - autounattend.xml missing
2041204 - link to templates in virtualization-cluster-overview inventory card is to all templates
2041319 - [IPI on Alibabacloud] installation in region "cn-shanghai" failed, due to "Resource alicloud_vswitch CreateVSwitch Failed...InvalidCidrBlock.Overlapped"
2041326 - Should bump cluster-kube-descheduler-operator to kubernetes version V1.23
2041329 - aws and gcp CredentialsRequest manifests missing ServiceAccountNames list for cloud-network-config-controller
2041361 - [IPI on Alibabacloud] Disable session persistence and removebBandwidth peak of listener
2041441 - Provision volume with size 3000Gi even if sizeRange: '[10-2000]GiB' in storageclass on IBM cloud
2041466 - Kubedescheduler version is missing from the operator logs
2041475 - React components should have a (mostly) unique name in react dev tools to simplify code analyses
2041483 - MetallB: quay.io/openshift/origin-kube-rbac-proxy:4.10 deploy Metallb CR is missing (controller and speaker pods)
2041492 - Spacing between resources in inventory card is too small
2041509 - GCP Cloud provider components should use K8s 1.23 dependencies
2041510 - cluster-baremetal-operator doesn't run baremetal-operator's subscription webhook
2041541 - audit: ManagedFields are dropped using API not annotation
2041546 - ovnkube: set election timer at RAFT cluster creation time
2041554 - use lease for leader election
2041581 - KubeDescheduler operator log shows "Use of insecure cipher detected"
2041583 - etcd and api server cpu mask interferes with a guaranteed workload
2041598 - Including CA bundle in Azure Stack cloud config causes MCO failure
2041605 - Dynamic Plugins: discrepancy in proxy alias documentation/implementation
2041620 - bundle CSV alm-examples does not parse
2041641 - Fix inotify leak and kubelet retaining memory
2041671 - Delete templates leads to 404 page
2041694 - [IPI on Alibabacloud] installation fails when region does not support the cloud_essd disk category
2041734 - ovs hwol: VFs are unbind when switchdev mode is enabled
2041750 - [IPI on Alibabacloud] trying "create install-config" with region "cn-wulanchabu (China (Ulanqab))" (or "ap-southeast-6 (Philippines (Manila))", "cn-guangzhou (China (Guangzhou))") failed due to invalid endpoint
2041763 - The Observe > Alerting pages no longer have their default sort order applied
2041830 - CI: ovn-kubernetes-master-e2e-aws-ovn-windows is broken
2041854 - Communities / Local prefs are applied to all the services regardless of the pool, and only one community is applied
2041882 - cloud-network-config operator can't work normal on GCP workload identity cluster
2041888 - Intermittent incorrect build to run correlation, leading to run status updates applied to wrong build, builds stuck in non-terminal phases
2041926 - [IPI on Alibabacloud] Installer ignores public zone when it does not exist
2041971 - [vsphere] Reconciliation of mutating webhooks didn't happen
2041989 - CredentialsRequest manifests being installed for ibm-cloud-managed profile
2041999 - [PROXY] external dns pod cannot recognize custom proxy CA
2042001 - unexpectedly found multiple load balancers
2042029 - kubedescheduler fails to install completely
2042036 - [IBMCLOUD] "openshift-install explain installconfig.platform.ibmcloud" contains not yet supported custom vpc parameters
2042049 - Seeing warning related to unrecognized feature gate in kubescheduler & KCM logs
2042059 - update discovery burst to reflect lots of CRDs on openshift clusters
2042069 - Revert toolbox to rhcos-toolbox
2042169 - Can not delete egressnetworkpolicy in Foreground propagation
2042181 - MetalLB: User should not be allowed add same bgp advertisement twice in BGP address pool
2042265 - [IBM]"--scale-down-utilization-threshold" doesn't work on IBMCloud
2042274 - Storage API should be used when creating a PVC
2042315 - Baremetal IPI deployment with IPv6 control plane and disabled provisioning network fails as the nodes do not pass introspection
2042366 - Lifecycle hooks should be independently managed
2042370 - [IPI on Alibabacloud] installer panics when the zone does not have an enhanced NAT gateway
2042382 - [e2e][automation] CI takes more then 2 hours to run
2042395 - Add prerequisites for active health checks test
2042438 - Missing rpms in openstack-installer image
2042466 - Selection does not happen when switching from Topology Graph to List View
2042493 - No way to verify if IPs with leading zeros are still valid in the apiserver
2042567 - insufficient info on CodeReady Containers configuration
2042600 - Alone, the io.kubernetes.cri-o.Devices option poses a security risk
2042619 - Overview page of the console is broken for hypershift clusters
2042655 - [IPI on Alibabacloud] cluster becomes unusable if there is only one kube-apiserver pod running
2042711 - [IBMCloud] Machine Deletion Hook cannot work on
IBMCloud\n2042715 - [AliCloud] Machine Deletion Hook cannot work on AliCloud\n2042770 - [IPI on Alibabacloud] with vpcID \u0026 vswitchIDs specified, the installer would still try creating NAT gateway unexpectedly\n2042829 - Topology performance: HPA was fetched for each Deployment (Pod Ring)\n2042851 - Create template from SAP HANA template flow - VM is created instead of a new template\n2042906 - Edit machineset with same machine deletion hook name succeed\n2042960 - azure-file CI fails with \"gid(0) in storageClass and pod fsgroup(1000) are not equal\"\n2043003 - [IPI on Alibabacloud] \u0027destroy cluster\u0027 of a failed installation (bug2041694) stuck after \u0027stage=Nat gateways\u0027\n2043042 - [Serial] [sig-auth][Feature:OAuthServer] [RequestHeaders] [IdP] test RequestHeaders IdP [Suite:openshift/conformance/serial]\n2043043 - Cluster Autoscaler should use K8s 1.23 dependencies\n2043064 - Topology performance: Unnecessary rerenderings in topology nodes (unchanged mobx props)\n2043078 - Favorite system projects not visible in the project selector after toggling \"Show default projects\". \n2043117 - Recommended operators links are erroneously treated as external\n2043130 - Update CSI sidecars to the latest release for 4.10\n2043234 - Missing validation when creating several BGPPeers with the same peerAddress\n2043240 - Sync openshift/descheduler with sigs.k8s.io/descheduler\n2043254 - crio does not bind the security profiles directory\n2043296 - Ignition fails when reusing existing statically-keyed LUKS volume\n2043297 - [4.10] Bootimage bump tracker\n2043316 - RHCOS VM fails to boot on Nutanix AOS\n2043446 - Rebase aws-efs-utils to the latest upstream version. \n2043556 - Add proper ci-operator configuration to ironic and ironic-agent images\n2043577 - DPU network operator\n2043651 - Fix bug with exp. 
backoff working correcly when setting nextCheck in vsphere operator\n2043675 - Too many machines deleted by cluster autoscaler when scaling down\n2043683 - Revert bug 2039344 Ignoring IPv6 addresses against etcd cert validation\n2043709 - Logging flags no longer being bound to command line\n2043721 - Installer bootstrap hosts using outdated kubelet containing bugs\n2043731 - [IBMCloud] terraform outputs missing for ibmcloud bootstrap and worker ips for must-gather\n2043759 - Bump cluster-ingress-operator to k8s.io/api 1.23\n2043780 - Bump router to k8s.io/api 1.23\n2043787 - Bump cluster-dns-operator to k8s.io/api 1.23\n2043801 - Bump CoreDNS to k8s.io/api 1.23\n2043802 - EgressIP stopped working after single egressIP for a netnamespace is switched to the other node of HA pair after the first egress node is shutdown\n2043961 - [OVN-K] If pod creation fails, retry doesn\u0027t work as expected. \n2044201 - Templates golden image parameters names should be supported\n2044244 - Builds are failing after upgrading the cluster with builder image [jboss-webserver-5/jws56-openjdk8-openshift-rhel8]\n2044248 - [IBMCloud][vpc.block.csi.ibm.io]Cluster common user use the storageclass without parameter \u201ccsi.storage.k8s.io/fstype\u201d create pvc,pod successfully but write data to the pod\u0027s volume failed of \"Permission denied\"\n2044303 - [ovn][cloud-network-config-controller] cloudprivateipconfigs ips were left after deleting egressip objects\n2044347 - Bump to kubernetes 1.23.3\n2044481 - collect sharedresource cluster scoped instances with must-gather\n2044496 - Unable to create hardware events subscription - failed to add finalizers\n2044628 - CVE-2022-21673 grafana: Forward OAuth Identity Token can allow users to access some data sources\n2044680 - Additional libovsdb performance and resource consumption fixes\n2044704 - Observe \u003e Alerting pages should not show runbook links in 4.10\n2044717 - [e2e] improve tests for upstream test environment\n2044724 - 
Remove namespace column on VM list page when a project is selected\n2044745 - Upgrading cluster from 4.9 to 4.10 on Azure (ARO) causes the cloud-network-config-controller pod to CrashLoopBackOff\n2044808 - machine-config-daemon-pull.service: use `cp` instead of `cat` when extracting MCD in OKD\n2045024 - CustomNoUpgrade alerts should be ignored\n2045112 - vsphere-problem-detector has missing rbac rules for leases\n2045199 - SnapShot with Disk Hot-plug hangs\n2045561 - Cluster Autoscaler should use the same default Group value as Cluster API\n2045591 - Reconciliation of aws pod identity mutating webhook did not happen\n2045849 - Add Sprint 212 translations\n2045866 - MCO Operator pod spam \"Error creating event\" warning messages in 4.10\n2045878 - Sync upstream 1.16.0 downstream; includes hybrid helm plugin\n2045916 - [IBMCloud] Default machine profile in installer is unreliable\n2045927 - [FJ OCP4.10 Bug]: Podman failed to pull the IPA image due to the loss of proxy environment\n2046025 - [IPI on Alibabacloud] pre-configured alicloud DNS private zone is deleted after destroying cluster, please clarify\n2046137 - oc output for unknown commands is not human readable\n2046296 - When creating multiple consecutive egressIPs on GCP not all of them get assigned to the instance\n2046297 - Bump DB reconnect timeout\n2046517 - In Notification drawer, the \"Recommendations\" header shows when there isn\u0027t any recommendations\n2046597 - Observe \u003e Targets page may show the wrong service monitor is multiple monitors have the same namespace \u0026 label selectors\n2046626 - Allow setting custom metrics for Ansible-based Operators\n2046683 - [AliCloud]\"--scale-down-utilization-threshold\" doesn\u0027t work on AliCloud\n2047025 - Installation fails because of Alibaba CSI driver operator is degraded\n2047190 - Bump Alibaba CSI driver for 4.10\n2047238 - When using communities and localpreferences together, only localpreference gets applied\n2047255 - alibaba: 
resourceGroupID not found\n2047258 - [aws-usgov] fatal error occurred if AMI is not provided for AWS GovCloud regions\n2047317 - Update HELM OWNERS files under Dev Console\n2047455 - [IBM Cloud] Update custom image os type\n2047496 - Add image digest feature\n2047779 - do not degrade cluster if storagepolicy creation fails\n2047927 - \u0027oc get project\u0027 caused \u0027Observed a panic: cannot deep copy core.NamespacePhase\u0027 when AllRequestBodies is used\n2047929 - use lease for leader election\n2047975 - [sig-network][Feature:Router] The HAProxy router should override the route host for overridden domains with a custom value [Skipped:Disconnected] [Suite:openshift/conformance/parallel]\n2048046 - New route annotation to show another URL or hide topology URL decorator doesn\u0027t work for Knative Services\n2048048 - Application tab in User Preferences dropdown menus are too wide. \n2048050 - Topology list view items are not highlighted on keyboard navigation\n2048117 - [IBM]Shouldn\u0027t change status.storage.bucket and status.storage.resourceKeyCRN when update sepc.stroage,ibmcos with invalid value\n2048413 - Bond CNI: Failed to attach Bond NAD to pod\n2048443 - Image registry operator panics when finalizes config deletion\n2048478 - [alicloud] CCM deploys alibaba-cloud-controller-manager from quay.io/openshift/origin-*\n2048484 - SNO: cluster-policy-controller failed to start due to missing serving-cert/tls.crt\n2048598 - Web terminal view is broken\n2048836 - ovs-configure mis-detecting the ipv6 status on IPv4 only cluster causing Deployment failure\n2048891 - Topology page is crashed\n2049003 - 4.10: [IBMCloud] ibm-vpc-block-csi-node does not specify an update strategy, only resource requests, or priority class\n2049043 - Cannot create VM from template\n2049156 - \u0027oc get project\u0027 caused \u0027Observed a panic: cannot deep copy core.NamespacePhase\u0027 when AllRequestBodies is used\n2049886 - Placeholder bug for OCP 4.10.0 metadata 
release\n2049890 - Warning annotation for pods with cpu requests or limits on single-node OpenShift cluster without workload partitioning\n2050189 - [aws-efs-csi-driver] Merge upstream changes since v1.3.2\n2050190 - [aws-ebs-csi-driver] Merge upstream changes since v1.2.0\n2050227 - Installation on PSI fails with: \u0027openstack platform does not have the required standard-attr-tag network extension\u0027\n2050247 - Failing test in periodics: [sig-network] Services should respect internalTrafficPolicy=Local Pod and Node, to Pod (hostNetwork: true) [Feature:ServiceInternalTrafficPolicy] [Skipped:Network/OVNKubernetes] [Suite:openshift/conformance/parallel] [Suite:k8s]\n2050250 - Install fails to bootstrap, complaining about DefragControllerDegraded and sad members\n2050310 - ContainerCreateError when trying to launch large (\u003e500) numbers of pods across nodes\n2050370 - alert data for burn budget needs to be updated to prevent regression\n2050393 - ZTP missing support for local image registry and custom machine config\n2050557 - Can not push images to image-registry when enabling KMS encryption in AlibabaCloud\n2050737 - Remove metrics and events for master port offsets\n2050801 - Vsphere upi tries to access vsphere during manifests generation phase\n2050883 - Logger object in LSO does not log source location accurately\n2051692 - co/image-registry is degrade because ImagePrunerDegraded: Job has reached the specified backoff limit\n2052062 - Whereabouts should implement client-go 1.22+\n2052125 - [4.10] Crio appears to be coredumping in some scenarios\n2052210 - [aws-c2s] kube-apiserver crashloops due to missing cloud config\n2052339 - Failing webhooks will block an upgrade to 4.10 mid-way through the upgrade. 
\n2052458 - [IBM Cloud] ibm-vpc-block-csi-controller does not specify an update strategy, priority class, or only resource requests\n2052598 - kube-scheduler should use configmap lease\n2052599 - kube-controller-manger should use configmap lease\n2052600 - Failed to scaleup RHEL machine against OVN cluster due to jq tool is required by configure-ovs.sh\n2052609 - [vSphere CSI driver Operator] RWX volumes counts metrics `vsphere_rwx_volumes_total` not valid\n2052611 - MetalLB: BGPPeer object does not have ability to set ebgpMultiHop\n2052612 - MetalLB: Webhook Validation: Two BGPPeers instances can have different router ID set. \n2052644 - Infinite OAuth redirect loop post-upgrade to 4.10.0-rc.1\n2052666 - [4.10.z] change gitmodules to rhcos-4.10 branch\n2052756 - [4.10] PVs are not being cleaned up after PVC deletion\n2053175 - oc adm catalog mirror throws \u0027missing signature key\u0027 error when using file://local/index\n2053218 - ImagePull fails with error \"unable to pull manifest from example.com/busy.box:v5 invalid reference format\"\n2053252 - Sidepanel for Connectors/workloads in topology shows invalid tabs\n2053268 - inability to detect static lifecycle failure\n2053314 - requestheader IDP test doesn\u0027t wait for cleanup, causing high failure rates\n2053323 - OpenShift-Ansible BYOH Unit Tests are Broken\n2053339 - Remove dev preview badge from IBM FlashSystem deployment windows\n2053751 - ztp-site-generate container is missing convenience entrypoint\n2053945 - [4.10] Failed to apply sriov policy on intel nics\n2054109 - Missing \"app\" label\n2054154 - RoleBinding in project without subject is causing \"Project access\" page to fail\n2054244 - Latest pipeline run should be listed on the top of the pipeline run list\n2054288 - console-master-e2e-gcp-console is broken\n2054562 - DPU network operator 4.10 branch need to sync with master\n2054897 - Unable to deploy hw-event-proxy operator\n2055193 - e2e-metal-ipi-serial-ovn-ipv6 is failing 
frequently\n2055358 - Summary Interval Hardcoded in PTP Operator if Set in the Global Body Instead of Command Line\n2055371 - Remove Check which enforces summary_interval must match logSyncInterval\n2055689 - [ibm]Operator storage PROGRESSING and DEGRADED is true during fresh install for ocp4.11\n2055894 - CCO mint mode will not work for Azure after sunsetting of Active Directory Graph API\n2056441 - AWS EFS CSI driver should use the trusted CA bundle when cluster proxy is configured\n2056479 - ovirt-csi-driver-node pods are crashing intermittently\n2056572 - reconcilePrecaching error: cannot list resource \"clusterserviceversions\" in API group \"operators.coreos.com\" at the cluster scope\"\n2056629 - [4.10] EFS CSI driver can\u0027t unmount volumes with \"wait: no child processes\"\n2056878 - (dummy bug) ovn-kubernetes ExternalTrafficPolicy still SNATs\n2056928 - Ingresscontroller LB scope change behaviour differs for different values of aws-load-balancer-internal annotation\n2056948 - post 1.23 rebase: regression in service-load balancer reliability\n2057438 - Service Level Agreement (SLA) always show \u0027Unknown\u0027\n2057721 - Fix Proxy support in RHACM 2.4.2\n2057724 - Image creation fails when NMstateConfig CR is empty\n2058641 - [4.10] Pod density test causing problems when using kube-burner\n2059761 - 4.9.23-s390x-machine-os-content manifest invalid when mirroring content for disconnected install\n2060610 - Broken access to public images: Unable to connect to the server: no basic auth credentials\n2060956 - service domain can\u0027t be resolved when networkpolicy is used in OCP 4.10-rc\n\n5. 
References:\n\nhttps://access.redhat.com/security/cve/CVE-2014-3577\nhttps://access.redhat.com/security/cve/CVE-2016-10228\nhttps://access.redhat.com/security/cve/CVE-2017-14502\nhttps://access.redhat.com/security/cve/CVE-2018-20843\nhttps://access.redhat.com/security/cve/CVE-2018-1000858\nhttps://access.redhat.com/security/cve/CVE-2019-8625\nhttps://access.redhat.com/security/cve/CVE-2019-8710\nhttps://access.redhat.com/security/cve/CVE-2019-8720\nhttps://access.redhat.com/security/cve/CVE-2019-8743\nhttps://access.redhat.com/security/cve/CVE-2019-8764\nhttps://access.redhat.com/security/cve/CVE-2019-8766\nhttps://access.redhat.com/security/cve/CVE-2019-8769\nhttps://access.redhat.com/security/cve/CVE-2019-8771\nhttps://access.redhat.com/security/cve/CVE-2019-8782\nhttps://access.redhat.com/security/cve/CVE-2019-8783\nhttps://access.redhat.com/security/cve/CVE-2019-8808\nhttps://access.redhat.com/security/cve/CVE-2019-8811\nhttps://access.redhat.com/security/cve/CVE-2019-8812\nhttps://access.redhat.com/security/cve/CVE-2019-8813\nhttps://access.redhat.com/security/cve/CVE-2019-8814\nhttps://access.redhat.com/security/cve/CVE-2019-8815\nhttps://access.redhat.com/security/cve/CVE-2019-8816\nhttps://access.redhat.com/security/cve/CVE-2019-8819\nhttps://access.redhat.com/security/cve/CVE-2019-8820\nhttps://access.redhat.com/security/cve/CVE-2019-8823\nhttps://access.redhat.com/security/cve/CVE-2019-8835\nhttps://access.redhat.com/security/cve/CVE-2019-8844\nhttps://access.redhat.com/security/cve/CVE-2019-8846\nhttps://access.redhat.com/security/cve/CVE-2019-9169\nhttps://access.redhat.com/security/cve/CVE-2019-13050\nhttps://access.redhat.com/security/cve/CVE-2019-13627\nhttps://access.redhat.com/security/cve/CVE-2019-14889\nhttps://access.redhat.com/security/cve/CVE-2019-15903\nhttps://access.redhat.com/security/cve/CVE-2019-19906\nhttps://access.redhat.com/security/cve/CVE-2019-20454\nhttps://access.redhat.com/security/cve/CVE-2019-20807\nhttps://access.redhat.com/se
curity/cve/CVE-2019-25013\nhttps://access.redhat.com/security/cve/CVE-2020-1730\nhttps://access.redhat.com/security/cve/CVE-2020-3862\nhttps://access.redhat.com/security/cve/CVE-2020-3864\nhttps://access.redhat.com/security/cve/CVE-2020-3865\nhttps://access.redhat.com/security/cve/CVE-2020-3867\nhttps://access.redhat.com/security/cve/CVE-2020-3868\nhttps://access.redhat.com/security/cve/CVE-2020-3885\nhttps://access.redhat.com/security/cve/CVE-2020-3894\nhttps://access.redhat.com/security/cve/CVE-2020-3895\nhttps://access.redhat.com/security/cve/CVE-2020-3897\nhttps://access.redhat.com/security/cve/CVE-2020-3899\nhttps://access.redhat.com/security/cve/CVE-2020-3900\nhttps://access.redhat.com/security/cve/CVE-2020-3901\nhttps://access.redhat.com/security/cve/CVE-2020-3902\nhttps://access.redhat.com/security/cve/CVE-2020-8927\nhttps://access.redhat.com/security/cve/CVE-2020-9802\nhttps://access.redhat.com/security/cve/CVE-2020-9803\nhttps://access.redhat.com/security/cve/CVE-2020-9805\nhttps://access.redhat.com/security/cve/CVE-2020-9806\nhttps://access.redhat.com/security/cve/CVE-2020-9807\nhttps://access.redhat.com/security/cve/CVE-2020-9843\nhttps://access.redhat.com/security/cve/CVE-2020-9850\nhttps://access.redhat.com/security/cve/CVE-2020-9862\nhttps://access.redhat.com/security/cve/CVE-2020-9893\nhttps://access.redhat.com/security/cve/CVE-2020-9894\nhttps://access.redhat.com/security/cve/CVE-2020-9895\nhttps://access.redhat.com/security/cve/CVE-2020-9915\nhttps://access.redhat.com/security/cve/CVE-2020-9925\nhttps://access.redhat.com/security/cve/CVE-2020-9952\nhttps://access.redhat.com/security/cve/CVE-2020-10018\nhttps://access.redhat.com/security/cve/CVE-2020-11793\nhttps://access.redhat.com/security/cve/CVE-2020-13434\nhttps://access.redhat.com/security/cve/CVE-2020-14391\nhttps://access.redhat.com/security/cve/CVE-2020-15358\nhttps://access.redhat.com/security/cve/CVE-2020-15503\nhttps://access.redhat.com/security/cve/CVE-2020-25660\nhttps://access.redhat.
com/security/cve/CVE-2020-25677\nhttps://access.redhat.com/security/cve/CVE-2020-27618\nhttps://access.redhat.com/security/cve/CVE-2020-27781\nhttps://access.redhat.com/security/cve/CVE-2020-29361\nhttps://access.redhat.com/security/cve/CVE-2020-29362\nhttps://access.redhat.com/security/cve/CVE-2020-29363\nhttps://access.redhat.com/security/cve/CVE-2021-3121\nhttps://access.redhat.com/security/cve/CVE-2021-3326\nhttps://access.redhat.com/security/cve/CVE-2021-3449\nhttps://access.redhat.com/security/cve/CVE-2021-3450\nhttps://access.redhat.com/security/cve/CVE-2021-3516\nhttps://access.redhat.com/security/cve/CVE-2021-3517\nhttps://access.redhat.com/security/cve/CVE-2021-3518\nhttps://access.redhat.com/security/cve/CVE-2021-3520\nhttps://access.redhat.com/security/cve/CVE-2021-3521\nhttps://access.redhat.com/security/cve/CVE-2021-3537\nhttps://access.redhat.com/security/cve/CVE-2021-3541\nhttps://access.redhat.com/security/cve/CVE-2021-3733\nhttps://access.redhat.com/security/cve/CVE-2021-3749\nhttps://access.redhat.com/security/cve/CVE-2021-20305\nhttps://access.redhat.com/security/cve/CVE-2021-21684\nhttps://access.redhat.com/security/cve/CVE-2021-22946\nhttps://access.redhat.com/security/cve/CVE-2021-22947\nhttps://access.redhat.com/security/cve/CVE-2021-25215\nhttps://access.redhat.com/security/cve/CVE-2021-27218\nhttps://access.redhat.com/security/cve/CVE-2021-30666\nhttps://access.redhat.com/security/cve/CVE-2021-30761\nhttps://access.redhat.com/security/cve/CVE-2021-30762\nhttps://access.redhat.com/security/cve/CVE-2021-33928\nhttps://access.redhat.com/security/cve/CVE-2021-33929\nhttps://access.redhat.com/security/cve/CVE-2021-33930\nhttps://access.redhat.com/security/cve/CVE-2021-33938\nhttps://access.redhat.com/security/cve/CVE-2021-36222\nhttps://access.redhat.com/security/cve/CVE-2021-37750\nhttps://access.redhat.com/security/cve/CVE-2021-39226\nhttps://access.redhat.com/security/cve/CVE-2021-41190\nhttps://access.redhat.com/security/cve/CVE-2021-43813\n
https://access.redhat.com/security/cve/CVE-2021-44716\nhttps://access.redhat.com/security/cve/CVE-2021-44717\nhttps://access.redhat.com/security/cve/CVE-2022-0532\nhttps://access.redhat.com/security/cve/CVE-2022-21673\nhttps://access.redhat.com/security/cve/CVE-2022-24407\nhttps://access.redhat.com/security/updates/classification/#moderate\n\n6. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. \n\nBugs fixed (https://bugzilla.redhat.com/):\n\n2004944 - CVE-2021-23440 nodejs-set-value: type confusion allows bypass of CVE-2019-10747\n2006009 - CVE-2021-3795 semver-regex: inefficient regular expression complexity\n2013652 - RHACM 2.2.10 images\n\n5. 
Description:\n\nRed Hat Advanced Cluster Management for Kubernetes 2.4.0 images\n\nRed Hat Advanced Cluster Management for Kubernetes provides the\ncapabilities to address common challenges that administrators and site\nreliability engineers face as they work across a range of public and\nprivate cloud environments. Clusters and applications are all visible and\nmanaged from a single console\u2014with security policy built in. See\nthe following Release Notes documentation, which will be updated shortly\nfor this release, for additional details about this release:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_mana\ngement_for_kubernetes/2.4/html/release_notes/\n\nSecurity fixes: \n\n* CVE-2021-33623: nodejs-trim-newlines: ReDoS in .end() method\n\n* CVE-2021-32626: redis: Lua scripts can overflow the heap-based Lua stack\n\n* CVE-2021-32627: redis: Integer overflow issue with Streams\n\n* CVE-2021-32628: redis: Integer overflow bug in the ziplist data structure\n\n* CVE-2021-32672: redis: Out of bounds read in lua debugger protocol parser\n\n* CVE-2021-32675: redis: Denial of service via Redis Standard Protocol\n(RESP) request\n\n* CVE-2021-32687: redis: Integer overflow issue with intsets\n\n* CVE-2021-32690: helm: information disclosure vulnerability\n\n* CVE-2021-32803: nodejs-tar: Insufficient symlink protection allowing\narbitrary file creation and overwrite\n\n* CVE-2021-32804: nodejs-tar: Insufficient absolute path sanitization\nallowing arbitrary file creation and overwrite\n\n* CVE-2021-23017: nginx: Off-by-one in ngx_resolver_copy() when labels are\nfollowed by a pointer to a root domain name\n\n* CVE-2021-3711: openssl: SM2 Decryption Buffer Overflow\n\n* CVE-2021-3712: openssl: Read buffer overruns processing ASN.1 strings\n\n* CVE-2021-3749: nodejs-axios: Regular expression denial of service in trim\nfunction\n\n* CVE-2021-41099: redis: Integer overflow issue with strings\n\nBug fixes:\n\n* RFE ACM Application management UI 
doesn\u0027t reflect object status (Bugzilla\n#1965321)\n\n* RHACM 2.4 files (Bugzilla #1983663)\n\n* Hive Operator CrashLoopBackOff when deploying ACM with latest downstream\n2.4 (Bugzilla #1993366)\n\n* submariner-addon pod failing in RHACM 2.4 latest ds snapshot (Bugzilla\n#1994668)\n\n* ACM 2.4 install on OCP 4.9 ipv6 disconnected hub fails due to\nmulticluster pod in clb (Bugzilla #2000274)\n\n* pre-network-manager-config failed due to timeout when static config is\nused (Bugzilla #2003915)\n\n* InfraEnv condition does not reflect the actual error message (Bugzilla\n#2009204, 2010030)\n\n* Flaky test point to a nil pointer conditions list (Bugzilla #2010175)\n\n* InfraEnv status shows \u0027Failed to create image: internal error (Bugzilla\n#2010272)\n\n* subctl diagnose firewall intra-cluster - failed VXLAN checks (Bugzilla\n#2013157)\n\n* pre-network-manager-config failed due to timeout when static config is\nused (Bugzilla #2014084)\n\n3. Bugs fixed (https://bugzilla.redhat.com/):\n\n1963121 - CVE-2021-23017 nginx: Off-by-one in ngx_resolver_copy() when labels are followed by a pointer to a root domain name\n1965321 - RFE ACM Application management UI doesn\u0027t reflect object status\n1966615 - CVE-2021-33623 nodejs-trim-newlines: ReDoS in .end() method\n1978144 - CVE-2021-32690 helm: information disclosure vulnerability\n1983663 - RHACM 2.4.0 images\n1990409 - CVE-2021-32804 nodejs-tar: Insufficient absolute path sanitization allowing arbitrary file creation and overwrite\n1990415 - CVE-2021-32803 nodejs-tar: Insufficient symlink protection allowing arbitrary file creation and overwrite\n1993366 - Hive Operator CrashLoopBackOff when deploying ACM with latest downstream 2.4\n1994668 - submariner-addon pod failing in RHACM 2.4 latest ds snapshot\n1995623 - CVE-2021-3711 openssl: SM2 Decryption Buffer Overflow\n1995634 - CVE-2021-3712 openssl: Read buffer overruns processing ASN.1 strings\n1999784 - CVE-2021-3749 nodejs-axios: Regular expression denial of 
service in trim function\n2000274 - ACM 2.4 install on OCP 4.9 ipv6 disconnected hub fails due to multicluster pod in clb\n2003915 - pre-network-manager-config failed due to timeout when static config is used\n2009204 - InfraEnv condition does not reflect the actual error message\n2010030 - InfraEnv condition does not reflect the actual error message\n2010175 - Flaky test point to a nil pointer conditions list\n2010272 - InfraEnv status shows \u0027Failed to create image: internal error\n2010991 - CVE-2021-32687 redis: Integer overflow issue with intsets\n2011000 - CVE-2021-32675 redis: Denial of service via Redis Standard Protocol (RESP) request\n2011001 - CVE-2021-32672 redis: Out of bounds read in lua debugger protocol parser\n2011004 - CVE-2021-32628 redis: Integer overflow bug in the ziplist data structure\n2011010 - CVE-2021-32627 redis: Integer overflow issue with Streams\n2011017 - CVE-2021-32626 redis: Lua scripts can overflow the heap-based Lua stack\n2011020 - CVE-2021-41099 redis: Integer overflow issue with strings\n2013157 - subctl diagnose firewall intra-cluster - failed VXLAN checks\n2014084 - pre-network-manager-config failed due to timeout when static config is used\n\n5. Relevant releases/architectures:\n\nRed Hat CodeReady Linux Builder (v. 8) - aarch64, noarch, ppc64le, s390x, x86_64\n\n3. Description:\n\nPython is an interpreted, interactive, object-oriented programming\nlanguage, which includes modules, classes, exceptions, very high level\ndynamic data types and dynamic typing. Python supports interfaces to many\nsystem calls and libraries, as well as to various windowing systems. \n\nThe following packages have been upgraded to a later upstream version:\npython38 (3.8), python38-devel (3.8). 
(BZ#1997680, BZ#1997860)\n\nSecurity Fix(es):\n\n* python: urllib: Regular expression DoS in AbstractBasicAuthHandler\n(CVE-2021-3733)\n\n* python-lxml: HTML Cleaner allows crafted and SVG embedded scripts to pass\nthrough (CVE-2021-43818)\n\n* python: urllib.parse does not sanitize URLs containing ASCII newline and\ntabs (CVE-2022-0391)\n\n* python: urllib: HTTP client possible infinite loop on a 100 Continue\nresponse (CVE-2021-3737)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\nAdditional Changes:\n\nFor detailed information on changes in this release, see the Red Hat\nEnterprise Linux 8.6 Release Notes linked from the References section. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n1995162 - CVE-2021-3737 python: urllib: HTTP client possible infinite loop on a 100 Continue response\n1995234 - CVE-2021-3733 python: urllib: Regular expression DoS in AbstractBasicAuthHandler\n2004587 - Update the python interpreter to the latest security release 3.8.12\n2006789 - RHEL 8 Python 3.8: pip contains bundled pre-built exe files in site-packages/pip/_vendor/distlib/\n2032569 - CVE-2021-43818 python-lxml: HTML Cleaner allows crafted and SVG embedded scripts to pass through\n2047376 - CVE-2022-0391 python: urllib.parse does not sanitize URLs containing ASCII newline and tabs\n\n6. Package List:\n\nRed Hat Enterprise Linux AppStream (v. 
8):\n\nSource:\nCython-0.29.14-4.module+el8.4.0+8888+89bc7e79.src.rpm\nPyYAML-5.4.1-1.module+el8.5.0+10721+14d8e0d5.src.rpm\nbabel-2.7.0-11.module+el8.5.0+11015+9c1c7c42.src.rpm\nmod_wsgi-4.6.8-3.module+el8.4.0+8888+89bc7e79.src.rpm\nnumpy-1.17.3-6.module+el8.5.0+12205+a865257a.src.rpm\npython-PyMySQL-0.10.1-1.module+el8.4.0+9692+8e86ab84.src.rpm\npython-asn1crypto-1.2.0-3.module+el8.4.0+8888+89bc7e79.src.rpm\npython-cffi-1.13.2-3.module+el8.4.0+8888+89bc7e79.src.rpm\npython-chardet-3.0.4-19.module+el8.4.0+8888+89bc7e79.src.rpm\npython-cryptography-2.8-3.module+el8.4.0+8888+89bc7e79.src.rpm\npython-idna-2.8-6.module+el8.4.0+8888+89bc7e79.src.rpm\npython-jinja2-2.10.3-5.module+el8.5.0+10542+ba057329.src.rpm\npython-lxml-4.4.1-7.module+el8.6.0+13958+214a5473.src.rpm\npython-markupsafe-1.1.1-6.module+el8.4.0+8888+89bc7e79.src.rpm\npython-ply-3.11-10.module+el8.4.0+9579+e9717e18.src.rpm\npython-psutil-5.6.4-4.module+el8.5.0+12031+10ce4870.src.rpm\npython-psycopg2-2.8.4-4.module+el8.4.0+8888+89bc7e79.src.rpm\npython-pycparser-2.19-3.module+el8.4.0+8888+89bc7e79.src.rpm\npython-pysocks-1.7.1-4.module+el8.4.0+8888+89bc7e79.src.rpm\npython-requests-2.22.0-9.module+el8.4.0+8888+89bc7e79.src.rpm\npython-urllib3-1.25.7-5.module+el8.5.0+11639+ea5b349d.src.rpm\npython-wheel-0.33.6-6.module+el8.5.0+12205+a865257a.src.rpm\npython38-3.8.12-1.module+el8.6.0+12642+c3710b74.src.rpm\npython3x-pip-19.3.1-5.module+el8.6.0+13002+70cfc74a.src.rpm\npython3x-setuptools-41.6.0-5.module+el8.5.0+12205+a865257a.src.rpm\npython3x-six-1.12.0-10.module+el8.4.0+8888+89bc7e79.src.rpm\npytz-2019.3-3.module+el8.4.0+8888+89bc7e79.src.rpm\nscipy-1.3.1-4.module+el8.4.0+8888+89bc7e79.src.rpm\n\naarch64:\nCython-debugsource-0.29.14-4.module+el8.4.0+8888+89bc7e79.aarch64.rpm\nPyYAML-debugsource-5.4.1-1.module+el8.5.0+10721+14d8e0d5.aarch64.rpm\nnumpy-debugsource-1.17.3-6.module+el8.5.0+12205+a865257a.aarch64.rpm\npython-cffi-debugsource-1.13.2-3.module+el8.4.0+8888+89bc7e79.aarch64.rpm\npython-cryptography-d
ebugsource-2.8-3.module+el8.4.0+8888+89bc7e79.aarch64.rpm\npython-lxml-debugsource-4.4.1-7.module+el8.6.0+13958+214a5473.aarch64.rpm\npython-markupsafe-debugsource-1.1.1-6.module+el8.4.0+8888+89bc7e79.aarch64.rpm\npython-psutil-debugsource-5.6.4-4.module+el8.5.0+12031+10ce4870.aarch64.rpm\npython-psycopg2-debugsource-2.8.4-4.module+el8.4.0+8888+89bc7e79.aarch64.rpm\npython38-3.8.12-1.module+el8.6.0+12642+c3710b74.aarch64.rpm\npython38-Cython-0.29.14-4.module+el8.4.0+8888+89bc7e79.aarch64.rpm\npython38-Cython-debuginfo-0.29.14-4.module+el8.4.0+8888+89bc7e79.aarch64.rpm\npython38-cffi-1.13.2-3.module+el8.4.0+8888+89bc7e79.aarch64.rpm\npython38-cffi-debuginfo-1.13.2-3.module+el8.4.0+8888+89bc7e79.aarch64.rpm\npython38-cryptography-2.8-3.module+el8.4.0+8888+89bc7e79.aarch64.rpm\npython38-cryptography-debuginfo-2.8-3.module+el8.4.0+8888+89bc7e79.aarch64.rpm\npython38-debug-3.8.12-1.module+el8.6.0+12642+c3710b74.aarch64.rpm\npython38-debuginfo-3.8.12-1.module+el8.6.0+12642+c3710b74.aarch64.rpm\npython38-debugsource-3.8.12-1.module+el8.6.0+12642+c3710b74.aarch64.rpm\npython38-devel-3.8.12-1.module+el8.6.0+12642+c3710b74.aarch64.rpm\npython38-idle-3.8.12-1.module+el8.6.0+12642+c3710b74.aarch64.rpm\npython38-libs-3.8.12-1.module+el8.6.0+12642+c3710b74.aarch64.rpm\npython38-lxml-4.4.1-7.module+el8.6.0+13958+214a5473.aarch64.rpm\npython38-lxml-debuginfo-4.4.1-7.module+el8.6.0+13958+214a5473.aarch64.rpm\npython38-markupsafe-1.1.1-6.module+el8.4.0+8888+89bc7e79.aarch64.rpm\npython38-markupsafe-debuginfo-1.1.1-6.module+el8.4.0+8888+89bc7e79.aarch64.rpm\npython38-mod_wsgi-4.6.8-3.module+el8.4.0+8888+89bc7e79.aarch64.rpm\npython38-numpy-1.17.3-6.module+el8.5.0+12205+a865257a.aarch64.rpm\npython38-numpy-debuginfo-1.17.3-6.module+el8.5.0+12205+a865257a.aarch64.rpm\npython38-numpy-f2py-1.17.3-6.module+el8.5.0+12205+a865257a.aarch64.rpm\npython38-psutil-5.6.4-4.module+el8.5.0+12031+10ce4870.aarch64.rpm\npython38-psutil-debuginfo-5.6.4-4.module+el8.5.0+12031+10ce4870.aarch64.rpm\npython
38-psycopg2-2.8.4-4.module+el8.4.0+8888+89bc7e79.aarch64.rpm\npython38-psycopg2-debuginfo-2.8.4-4.module+el8.4.0+8888+89bc7e79.aarch64.rpm\npython38-psycopg2-doc-2.8.4-4.module+el8.4.0+8888+89bc7e79.aarch64.rpm\npython38-psycopg2-tests-2.8.4-4.module+el8.4.0+8888+89bc7e79.aarch64.rpm\npython38-pyyaml-5.4.1-1.module+el8.5.0+10721+14d8e0d5.aarch64.rpm\npython38-pyyaml-debuginfo-5.4.1-1.module+el8.5.0+10721+14d8e0d5.aarch64.rpm\npython38-scipy-1.3.1-4.module+el8.4.0+8888+89bc7e79.aarch64.rpm\npython38-scipy-debuginfo-1.3.1-4.module+el8.4.0+8888+89bc7e79.aarch64.rpm\npython38-test-3.8.12-1.module+el8.6.0+12642+c3710b74.aarch64.rpm\npython38-tkinter-3.8.12-1.module+el8.6.0+12642+c3710b74.aarch64.rpm\nscipy-debugsource-1.3.1-4.module+el8.4.0+8888+89bc7e79.aarch64.rpm\n\nnoarch:\npython38-PyMySQL-0.10.1-1.module+el8.4.0+9692+8e86ab84.noarch.rpm\npython38-asn1crypto-1.2.0-3.module+el8.4.0+8888+89bc7e79.noarch.rpm\npython38-babel-2.7.0-11.module+el8.5.0+11015+9c1c7c42.noarch.rpm\npython38-chardet-3.0.4-19.module+el8.4.0+8888+89bc7e79.noarch.rpm\npython38-idna-2.8-6.module+el8.4.0+8888+89bc7e79.noarch.rpm\npython38-jinja2-2.10.3-5.module+el8.5.0+10542+ba057329.noarch.rpm\npython38-numpy-doc-1.17.3-6.module+el8.5.0+12205+a865257a.noarch.rpm\npython38-pip-19.3.1-5.module+el8.6.0+13002+70cfc74a.noarch.rpm\npython38-pip-wheel-19.3.1-5.module+el8.6.0+13002+70cfc74a.noarch.rpm\npython38-ply-3.11-10.module+el8.4.0+9579+e9717e18.noarch.rpm\npython38-pycparser-2.19-3.module+el8.4.0+8888+89bc7e79.noarch.rpm\npython38-pysocks-1.7.1-4.module+el8.4.0+8888+89bc7e79.noarch.rpm\npython38-pytz-2019.3-3.module+el8.4.0+8888+89bc7e79.noarch.rpm\npython38-requests-2.22.0-9.module+el8.4.0+8888+89bc7e79.noarch.rpm\npython38-rpm-macros-3.8.12-1.module+el8.6.0+12642+c3710b74.noarch.rpm\npython38-setuptools-41.6.0-5.module+el8.5.0+12205+a865257a.noarch.rpm\npython38-setuptools-wheel-41.6.0-5.module+el8.5.0+12205+a865257a.noarch.rpm\npython38-six-1.12.0-10.module+el8.4.0+8888+89bc7e79.noarch.rpm\npytho
n38-urllib3-1.25.7-5.module+el8.5.0+11639+ea5b349d.noarch.rpm\npython38-wheel-0.33.6-6.module+el8.5.0+12205+a865257a.noarch.rpm\npython38-wheel-wheel-0.33.6-6.module+el8.5.0+12205+a865257a.noarch.rpm\n\nppc64le:\nCython-debugsource-0.29.14-4.module+el8.4.0+8888+89bc7e79.ppc64le.rpm\nPyYAML-debugsource-5.4.1-1.module+el8.5.0+10721+14d8e0d5.ppc64le.rpm\nnumpy-debugsource-1.17.3-6.module+el8.5.0+12205+a865257a.ppc64le.rpm\npython-cffi-debugsource-1.13.2-3.module+el8.4.0+8888+89bc7e79.ppc64le.rpm\npython-cryptography-debugsource-2.8-3.module+el8.4.0+8888+89bc7e79.ppc64le.rpm\npython-lxml-debugsource-4.4.1-7.module+el8.6.0+13958+214a5473.ppc64le.rpm\npython-markupsafe-debugsource-1.1.1-6.module+el8.4.0+8888+89bc7e79.ppc64le.rpm\npython-psutil-debugsource-5.6.4-4.module+el8.5.0+12031+10ce4870.ppc64le.rpm\npython-psycopg2-debugsource-2.8.4-4.module+el8.4.0+8888+89bc7e79.ppc64le.rpm\npython38-3.8.12-1.module+el8.6.0+12642+c3710b74.ppc64le.rpm\npython38-Cython-0.29.14-4.module+el8.4.0+8888+89bc7e79.ppc64le.rpm\npython38-Cython-debuginfo-0.29.14-4.module+el8.4.0+8888+89bc7e79.ppc64le.rpm\npython38-cffi-1.13.2-3.module+el8.4.0+8888+89bc7e79.ppc64le.rpm\npython38-cffi-debuginfo-1.13.2-3.module+el8.4.0+8888+89bc7e79.ppc64le.rpm\npython38-cryptography-2.8-3.module+el8.4.0+8888+89bc7e79.ppc64le.rpm\npython38-cryptography-debuginfo-2.8-3.module+el8.4.0+8888+89bc7e79.ppc64le.rpm\npython38-debug-3.8.12-1.module+el8.6.0+12642+c3710b74.ppc64le.rpm\npython38-debuginfo-3.8.12-1.module+el8.6.0+12642+c3710b74.ppc64le.rpm\npython38-debugsource-3.8.12-1.module+el8.6.0+12642+c3710b74.ppc64le.rpm\npython38-devel-3.8.12-1.module+el8.6.0+12642+c3710b74.ppc64le.rpm\npython38-idle-3.8.12-1.module+el8.6.0+12642+c3710b74.ppc64le.rpm\npython38-libs-3.8.12-1.module+el8.6.0+12642+c3710b74.ppc64le.rpm\npython38-lxml-4.4.1-7.module+el8.6.0+13958+214a5473.ppc64le.rpm\npython38-lxml-debuginfo-4.4.1-7.module+el8.6.0+13958+214a5473.ppc64le.rpm\npython38-markupsafe-1.1.1-6.module+el8.4.0+8888+89bc7e79.ppc64le
.rpm\npython38-markupsafe-debuginfo-1.1.1-6.module+el8.4.0+8888+89bc7e79.ppc64le.rpm\npython38-mod_wsgi-4.6.8-3.module+el8.4.0+8888+89bc7e79.ppc64le.rpm\npython38-numpy-1.17.3-6.module+el8.5.0+12205+a865257a.ppc64le.rpm\npython38-numpy-debuginfo-1.17.3-6.module+el8.5.0+12205+a865257a.ppc64le.rpm\npython38-numpy-f2py-1.17.3-6.module+el8.5.0+12205+a865257a.ppc64le.rpm\npython38-psutil-5.6.4-4.module+el8.5.0+12031+10ce4870.ppc64le.rpm\npython38-psutil-debuginfo-5.6.4-4.module+el8.5.0+12031+10ce4870.ppc64le.rpm\npython38-psycopg2-2.8.4-4.module+el8.4.0+8888+89bc7e79.ppc64le.rpm\npython38-psycopg2-debuginfo-2.8.4-4.module+el8.4.0+8888+89bc7e79.ppc64le.rpm\npython38-psycopg2-doc-2.8.4-4.module+el8.4.0+8888+89bc7e79.ppc64le.rpm\npython38-psycopg2-tests-2.8.4-4.module+el8.4.0+8888+89bc7e79.ppc64le.rpm\npython38-pyyaml-5.4.1-1.module+el8.5.0+10721+14d8e0d5.ppc64le.rpm\npython38-pyyaml-debuginfo-5.4.1-1.module+el8.5.0+10721+14d8e0d5.ppc64le.rpm\npython38-scipy-1.3.1-4.module+el8.4.0+8888+89bc7e79.ppc64le.rpm\npython38-scipy-debuginfo-1.3.1-4.module+el8.4.0+8888+89bc7e79.ppc64le.rpm\npython38-test-3.8.12-1.module+el8.6.0+12642+c3710b74.ppc64le.rpm\npython38-tkinter-3.8.12-1.module+el8.6.0+12642+c3710b74.ppc64le.rpm\nscipy-debugsource-1.3.1-4.module+el8.4.0+8888+89bc7e79.ppc64le.rpm\n\ns390x:\nCython-debugsource-0.29.14-4.module+el8.4.0+8888+89bc7e79.s390x.rpm\nPyYAML-debugsource-5.4.1-1.module+el8.5.0+10721+14d8e0d5.s390x.rpm\nnumpy-debugsource-1.17.3-6.module+el8.5.0+12205+a865257a.s390x.rpm\npython-cffi-debugsource-1.13.2-3.module+el8.4.0+8888+89bc7e79.s390x.rpm\npython-cryptography-debugsource-2.8-3.module+el8.4.0+8888+89bc7e79.s390x.rpm\npython-lxml-debugsource-4.4.1-7.module+el8.6.0+13958+214a5473.s390x.rpm\npython-markupsafe-debugsource-1.1.1-6.module+el8.4.0+8888+89bc7e79.s390x.rpm\npython-psutil-debugsource-5.6.4-4.module+el8.5.0+12031+10ce4870.s390x.rpm\npython-psycopg2-debugsource-2.8.4-4.module+el8.4.0+8888+89bc7e79.s390x.rpm\npython38-3.8.12-1.module+el8.6.0+12642+
c3710b74.s390x.rpm\npython38-Cython-0.29.14-4.module+el8.4.0+8888+89bc7e79.s390x.rpm\npython38-Cython-debuginfo-0.29.14-4.module+el8.4.0+8888+89bc7e79.s390x.rpm\npython38-cffi-1.13.2-3.module+el8.4.0+8888+89bc7e79.s390x.rpm\npython38-cffi-debuginfo-1.13.2-3.module+el8.4.0+8888+89bc7e79.s390x.rpm\npython38-cryptography-2.8-3.module+el8.4.0+8888+89bc7e79.s390x.rpm\npython38-cryptography-debuginfo-2.8-3.module+el8.4.0+8888+89bc7e79.s390x.rpm\npython38-debug-3.8.12-1.module+el8.6.0+12642+c3710b74.s390x.rpm\npython38-debuginfo-3.8.12-1.module+el8.6.0+12642+c3710b74.s390x.rpm\npython38-debugsource-3.8.12-1.module+el8.6.0+12642+c3710b74.s390x.rpm\npython38-devel-3.8.12-1.module+el8.6.0+12642+c3710b74.s390x.rpm\npython38-idle-3.8.12-1.module+el8.6.0+12642+c3710b74.s390x.rpm\npython38-libs-3.8.12-1.module+el8.6.0+12642+c3710b74.s390x.rpm\npython38-lxml-4.4.1-7.module+el8.6.0+13958+214a5473.s390x.rpm\npython38-lxml-debuginfo-4.4.1-7.module+el8.6.0+13958+214a5473.s390x.rpm\npython38-markupsafe-1.1.1-6.module+el8.4.0+8888+89bc7e79.s390x.rpm\npython38-markupsafe-debuginfo-1.1.1-6.module+el8.4.0+8888+89bc7e79.s390x.rpm\npython38-mod_wsgi-4.6.8-3.module+el8.4.0+8888+89bc7e79.s390x.rpm\npython38-numpy-1.17.3-6.module+el8.5.0+12205+a865257a.s390x.rpm\npython38-numpy-debuginfo-1.17.3-6.module+el8.5.0+12205+a865257a.s390x.rpm\npython38-numpy-f2py-1.17.3-6.module+el8.5.0+12205+a865257a.s390x.rpm\npython38-psutil-5.6.4-4.module+el8.5.0+12031+10ce4870.s390x.rpm\npython38-psutil-debuginfo-5.6.4-4.module+el8.5.0+12031+10ce4870.s390x.rpm\npython38-psycopg2-2.8.4-4.module+el8.4.0+8888+89bc7e79.s390x.rpm\npython38-psycopg2-debuginfo-2.8.4-4.module+el8.4.0+8888+89bc7e79.s390x.rpm\npython38-psycopg2-doc-2.8.4-4.module+el8.4.0+8888+89bc7e79.s390x.rpm\npython38-psycopg2-tests-2.8.4-4.module+el8.4.0+8888+89bc7e79.s390x.rpm\npython38-pyyaml-5.4.1-1.module+el8.5.0+10721+14d8e0d5.s390x.rpm\npython38-pyyaml-debuginfo-5.4.1-1.module+el8.5.0+10721+14d8e0d5.s390x.rpm\npython38-scipy-1.3.1-4.module+el8.4.
0+8888+89bc7e79.s390x.rpm\npython38-scipy-debuginfo-1.3.1-4.module+el8.4.0+8888+89bc7e79.s390x.rpm\npython38-test-3.8.12-1.module+el8.6.0+12642+c3710b74.s390x.rpm\npython38-tkinter-3.8.12-1.module+el8.6.0+12642+c3710b74.s390x.rpm\nscipy-debugsource-1.3.1-4.module+el8.4.0+8888+89bc7e79.s390x.rpm\n\nx86_64:\nCython-debugsource-0.29.14-4.module+el8.4.0+8888+89bc7e79.x86_64.rpm\nPyYAML-debugsource-5.4.1-1.module+el8.5.0+10721+14d8e0d5.x86_64.rpm\nnumpy-debugsource-1.17.3-6.module+el8.5.0+12205+a865257a.x86_64.rpm\npython-cffi-debugsource-1.13.2-3.module+el8.4.0+8888+89bc7e79.x86_64.rpm\npython-cryptography-debugsource-2.8-3.module+el8.4.0+8888+89bc7e79.x86_64.rpm\npython-lxml-debugsource-4.4.1-7.module+el8.6.0+13958+214a5473.x86_64.rpm\npython-markupsafe-debugsource-1.1.1-6.module+el8.4.0+8888+89bc7e79.x86_64.rpm\npython-psutil-debugsource-5.6.4-4.module+el8.5.0+12031+10ce4870.x86_64.rpm\npython-psycopg2-debugsource-2.8.4-4.module+el8.4.0+8888+89bc7e79.x86_64.rpm\npython38-3.8.12-1.module+el8.6.0+12642+c3710b74.x86_64.rpm\npython38-Cython-0.29.14-4.module+el8.4.0+8888+89bc7e79.x86_64.rpm\npython38-Cython-debuginfo-0.29.14-4.module+el8.4.0+8888+89bc7e79.x86_64.rpm\npython38-cffi-1.13.2-3.module+el8.4.0+8888+89bc7e79.x86_64.rpm\npython38-cffi-debuginfo-1.13.2-3.module+el8.4.0+8888+89bc7e79.x86_64.rpm\npython38-cryptography-2.8-3.module+el8.4.0+8888+89bc7e79.x86_64.rpm\npython38-cryptography-debuginfo-2.8-3.module+el8.4.0+8888+89bc7e79.x86_64.rpm\npython38-debug-3.8.12-1.module+el8.6.0+12642+c3710b74.x86_64.rpm\npython38-debuginfo-3.8.12-1.module+el8.6.0+12642+c3710b74.x86_64.rpm\npython38-debugsource-3.8.12-1.module+el8.6.0+12642+c3710b74.x86_64.rpm\npython38-devel-3.8.12-1.module+el8.6.0+12642+c3710b74.x86_64.rpm\npython38-idle-3.8.12-1.module+el8.6.0+12642+c3710b74.x86_64.rpm\npython38-libs-3.8.12-1.module+el8.6.0+12642+c3710b74.x86_64.rpm\npython38-lxml-4.4.1-7.module+el8.6.0+13958+214a5473.x86_64.rpm\npython38-lxml-debuginfo-4.4.1-7.module+el8.6.0+13958+214a5473.x86_6
4.rpm\npython38-markupsafe-1.1.1-6.module+el8.4.0+8888+89bc7e79.x86_64.rpm\npython38-markupsafe-debuginfo-1.1.1-6.module+el8.4.0+8888+89bc7e79.x86_64.rpm\npython38-mod_wsgi-4.6.8-3.module+el8.4.0+8888+89bc7e79.x86_64.rpm\npython38-numpy-1.17.3-6.module+el8.5.0+12205+a865257a.x86_64.rpm\npython38-numpy-debuginfo-1.17.3-6.module+el8.5.0+12205+a865257a.x86_64.rpm\npython38-numpy-f2py-1.17.3-6.module+el8.5.0+12205+a865257a.x86_64.rpm\npython38-psutil-5.6.4-4.module+el8.5.0+12031+10ce4870.x86_64.rpm\npython38-psutil-debuginfo-5.6.4-4.module+el8.5.0+12031+10ce4870.x86_64.rpm\npython38-psycopg2-2.8.4-4.module+el8.4.0+8888+89bc7e79.x86_64.rpm\npython38-psycopg2-debuginfo-2.8.4-4.module+el8.4.0+8888+89bc7e79.x86_64.rpm\npython38-psycopg2-doc-2.8.4-4.module+el8.4.0+8888+89bc7e79.x86_64.rpm\npython38-psycopg2-tests-2.8.4-4.module+el8.4.0+8888+89bc7e79.x86_64.rpm\npython38-pyyaml-5.4.1-1.module+el8.5.0+10721+14d8e0d5.x86_64.rpm\npython38-pyyaml-debuginfo-5.4.1-1.module+el8.5.0+10721+14d8e0d5.x86_64.rpm\npython38-scipy-1.3.1-4.module+el8.4.0+8888+89bc7e79.x86_64.rpm\npython38-scipy-debuginfo-1.3.1-4.module+el8.4.0+8888+89bc7e79.x86_64.rpm\npython38-test-3.8.12-1.module+el8.6.0+12642+c3710b74.x86_64.rpm\npython38-tkinter-3.8.12-1.module+el8.6.0+12642+c3710b74.x86_64.rpm\nscipy-debugsource-1.3.1-4.module+el8.4.0+8888+89bc7e79.x86_64.rpm\n\nRed Hat CodeReady Linux Builder (v. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. 7) - noarch, x86_64\n\n3. 
The python27 packages provide a stable release of\nPython 2.7 with a number of additional utilities and database connectors\nfor MySQL and PostgreSQL", "sources": [ { "db": "NVD", "id": "CVE-2021-3733" }, { "db": "VULHUB", "id": "VHN-397442" }, { "db": "VULMON", "id": "CVE-2021-3733" }, { "db": "PACKETSTORM", "id": "165631" }, { "db": "PACKETSTORM", "id": "164190" }, { "db": "PACKETSTORM", "id": "166279" }, { "db": "PACKETSTORM", "id": "165209" }, { "db": "PACKETSTORM", "id": "164948" }, { "db": "PACKETSTORM", "id": "167023" }, { "db": "PACKETSTORM", "id": "166913" }, { "db": "PACKETSTORM", "id": "167043" } ], "trust": 1.8 }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "NVD", "id": "CVE-2021-3733", "trust": 2.0 }, { "db": "PACKETSTORM", "id": "164948", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "167043", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "167023", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "165008", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "165053", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "165337", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "165363", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "164741", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "165361", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "164859", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "164993", "trust": 0.1 }, { "db": "CNNVD", "id": "CNNVD-202109-1139", "trust": 0.1 }, { "db": "VULHUB", "id": "VHN-397442", "trust": 0.1 }, { "db": "VULMON", "id": "CVE-2021-3733", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "165631", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "164190", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "166279", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "165209", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "166913", "trust": 0.1 } ], "sources": [ { 
"db": "VULHUB", "id": "VHN-397442" }, { "db": "VULMON", "id": "CVE-2021-3733" }, { "db": "PACKETSTORM", "id": "165631" }, { "db": "PACKETSTORM", "id": "164190" }, { "db": "PACKETSTORM", "id": "166279" }, { "db": "PACKETSTORM", "id": "165209" }, { "db": "PACKETSTORM", "id": "164948" }, { "db": "PACKETSTORM", "id": "167023" }, { "db": "PACKETSTORM", "id": "166913" }, { "db": "PACKETSTORM", "id": "167043" }, { "db": "NVD", "id": "CVE-2021-3733" } ] }, "id": "VAR-202109-1966", "iot": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": true, "sources": [ { "db": "VULHUB", "id": "VHN-397442" } ], "trust": 0.01 }, "last_update_date": "2024-07-23T20:16:30.843000Z", "patch": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/patch#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "title": "Red Hat: Moderate: python27-python and python27-python-pip security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20221663 - security advisory" }, { "title": "Red Hat: Moderate: python27:2.7 security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20221821 - security advisory" }, { "title": "IBM: Security Bulletin: IBM Sterling Control Center vulnerable to multiple issues to due IBM Cognos Analystics (CVE-2022-4160, CVE-2021-3733)", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=9d831a6a306a903e583b6a76777d1085" }, { "title": "Amazon Linux AMI: ALAS-2022-1593", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux_ami\u0026qid=alas-2022-1593" }, { "title": "Amazon Linux 2: ALAS2-2022-1802", "trust": 0.1, "url": 
"https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=alas2-2022-1802" }, { "title": "IBM: Security Bulletin: IBM Cognos Analytics has addressed multiple vulnerabilities (CVE-2022-34339, CVE-2021-3712, CVE-2021-3711, CVE-2021-4160, CVE-2021-29425, CVE-2021-3733, CVE-2021-3737, CVE-2022-0391, CVE-2021-43138, CVE-2022-24758)", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=cbece86f0c3bef5a678f2bb3dbbb854b" }, { "title": "Red Hat: Moderate: OpenShift Container Platform 4.10.3 security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20220056 - security advisory" } ], "sources": [ { "db": "VULMON", "id": "CVE-2021-3733" } ] }, "problemtype_data": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "problemtype": "CWE-400", "trust": 1.1 } ], "sources": [ { "db": "VULHUB", "id": "VHN-397442" }, { "db": "NVD", "id": "CVE-2021-3733" } ] }, "references": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/references#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 1.2, "url": "https://security.netapp.com/advisory/ntap-20220407-0001/" }, { "trust": 1.2, "url": "https://bugs.python.org/issue43075" }, { "trust": 1.2, "url": "https://bugzilla.redhat.com/show_bug.cgi?id=1995234" }, { "trust": 1.2, "url": "https://github.com/python/cpython/commit/7215d1ae25525c92b026166f9d5cac85fb" }, { "trust": 1.2, "url": "https://github.com/python/cpython/pull/24391" }, { "trust": 1.2, "url": "https://ubuntu.com/security/cve-2021-3733" }, { "trust": 1.0, "url": "https://lists.debian.org/debian-lts-announce/2023/05/msg00024.html" }, { "trust": 1.0, "url": "https://lists.debian.org/debian-lts-announce/2023/06/msg00039.html" 
}, { "trust": 0.7, "url": "https://access.redhat.com/security/cve/cve-2021-3733" }, { "trust": 0.7, "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce" }, { "trust": 0.7, "url": "https://bugzilla.redhat.com/):" }, { "trust": 0.7, "url": "https://access.redhat.com/security/team/contact/" }, { "trust": 0.5, "url": "https://access.redhat.com/security/updates/classification/#moderate" }, { "trust": 0.5, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3733" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-37750" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-33938" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-33929" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-33928" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-22946" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-33930" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-22947" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3737" }, { "trust": 0.3, "url": "https://access.redhat.com/security/team/key/" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0391" }, { "trust": 0.3, "url": "https://access.redhat.com/articles/11258" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-0391" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3737" }, { "trust": 0.2, "url": "https://access.redhat.com/errata/rhsa-2022:1663" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-16135" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3200" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-5827" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-27645" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-33574" }, { "trust": 0.2, "url": 
"https://access.redhat.com/security/cve/cve-2020-13435" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-5827" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-24370" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-43527" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-14145" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-13751" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-19603" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14145" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-35942" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-17594" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3572" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-12762" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-36086" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3778" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13750" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13751" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-22898" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-12762" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-16135" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-36084" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3800" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17594" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-36087" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3712" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3445" }, { 
"trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13435" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19603" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-22925" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-18218" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-20232" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-20266" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-20838" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-22876" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-20231" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-14155" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20838" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-20271" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-36085" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-33560" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-17595" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-42574" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14155" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-28153" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-13750" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3426" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-18218" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3580" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3796" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17595" }, { "trust": 0.2, "url": 
"https://access.redhat.com/security/cve/cve-2021-3749" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-36222" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-36385" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22946" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36385" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22947" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-43818" }, { "trust": 0.2, "url": "https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.6_release_notes/" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-43818" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4189" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-4189" }, { "trust": 0.1, "url": "https://cwe.mitre.org/data/definitions/400.html" }, { "trust": 0.1, "url": "https://nvd.nist.gov" }, { "trust": 0.1, "url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-sterling-control-center-vulnerable-to-multiple-issues-to-due-ibm-cognos-analystics-cve-2022-4160-cve-2021-3733/" }, { "trust": 0.1, "url": "https://alas.aws.amazon.com/alas-2022-1593.html" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-25013" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25012" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-27823" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-35522" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-1870" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-35524" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3575" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-30758" }, { "trust": 0.1, "url": 
"https://nvd.nist.gov/vuln/detail/cve-2018-25013" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-13558" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-15389" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25009" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-5727" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-5785" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-41617" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-30665" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-12973" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-30689" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-20847" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-30682" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-10001" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-25014" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/latest/migration_toolkit_for_containers/installing-mtc.html" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-25012" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-35521" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-18032" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-1801" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-1765" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2016-4658" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-20845" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-26927" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-20847" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2020-17541" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-27918" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-36331" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-30749" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-30795" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-5785" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-1788" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-31535" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-5727" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-30744" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21775" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21806" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-27814" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-36330" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-36241" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-30797" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2016-4658" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13558" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-20321" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-27842" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-36332" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-1799" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25010" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21779" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-10001" }, { 
"trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-29623" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3948" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25014" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-27828" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-12973" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-20845" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-1844" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3481" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-25009" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-1871" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-25010" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-29338" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-30734" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-35523" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-26926" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-30720" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-28650" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-27843" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-24870" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-27845" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-1789" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-30663" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-30799" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3272" }, { "trust": 0.1, "url": 
"https://access.redhat.com/errata/rhsa-2022:0202" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15389" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-27824" }, { "trust": 0.1, "url": "https://ubuntu.com/security/notices/usn-5083-1" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2017-14502" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-13050" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9925" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9802" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8771" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-30762" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8783" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-8927" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9895" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8625" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-44716" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3450" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8812" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8812" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3899" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8819" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-43813" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3867" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20454" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8720" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9893" }, { "trust": 0.1, "url": 
"https://nvd.nist.gov/vuln/detail/cve-2019-8782" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8808" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3902" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-24407" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-25215" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3900" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-30761" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8743" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3537" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9805" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19906" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8820" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9807" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8769" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8710" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3449" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8813" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9850" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8710" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-27781" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8811" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8769" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:0055" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-27218" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9803" }, { "trust": 0.1, "url": 
"https://nvd.nist.gov/vuln/detail/cve-2019-8764" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9862" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-27618" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2014-3577" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-25013" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2014-3577" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3885" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-15503" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20807" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2016-10228" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3326" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-41190" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-10018" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-14889" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-25660" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8835" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2017-14502" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8764" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8844" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3865" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-1730" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3864" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2019-19906" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3520" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-15358" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21684" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13627" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-14391" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3541" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3862" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:0056" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8811" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3901" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-39226" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8823" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3518" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8808" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-13434" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-1000858" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-15903" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3895" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-44717" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-11793" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-1000858" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-20454" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-20843" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0532" }, { 
"trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8720" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9894" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8816" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9843" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-13627" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8771" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13050" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3897" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9806" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8814" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-25013" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-14889" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-20843" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8743" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3121" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9915" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8815" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8813" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8625" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8766" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8783" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-9169" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-20807" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-29362" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3516" }, { "trust": 
0.1, "url": "https://access.redhat.com/security/cve/cve-2020-29361" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9952" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2016-10228" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3517" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-20305" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-21673" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-29363" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-15903" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8766" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3868" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8846" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3894" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-25677" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-30666" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8782" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3521" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22876" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23841" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:5038" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20231" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-24370" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-43267" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22925" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.2/html/release_notes/" }, { "trust": 0.1, 
"url": "https://access.redhat.com/security/updates/classification/#low" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23840" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.2/html-single/install/index#installing" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22898" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-20673" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20266" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-20673" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3795" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.2/html/release_notes/index" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20271" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-20317" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20317" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23440" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20232" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-33929" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-0512" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-32803" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-33930" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-22924" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_mana" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-32626" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-32690" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3711" }, { 
"trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:4618" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-32675" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22922" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3656" }, { "trust": 0.1, "url": "https://access.redhat.com/security/updates/classification/#important" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-32675" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-32804" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-33623" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23017" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-41099" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3656" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-32804" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-32627" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-32672" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-32627" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-32690" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-0512" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-32628" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-22922" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-36222" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-32626" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3711" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-32672" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-22923" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22924" }, { "trust": 0.1, 
"url": "https://access.redhat.com/security/cve/cve-2021-33623" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-32687" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23017" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22923" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-33928" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3712" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-33938" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-32687" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-32628" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-32803" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:1764" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:1821" } ], "sources": [ { "db": "VULHUB", "id": "VHN-397442" }, { "db": "VULMON", "id": "CVE-2021-3733" }, { "db": "PACKETSTORM", "id": "165631" }, { "db": "PACKETSTORM", "id": "164190" }, { "db": "PACKETSTORM", "id": "166279" }, { "db": "PACKETSTORM", "id": "165209" }, { "db": "PACKETSTORM", "id": "164948" }, { "db": "PACKETSTORM", "id": "167023" }, { "db": "PACKETSTORM", "id": "166913" }, { "db": "PACKETSTORM", "id": "167043" }, { "db": "NVD", "id": "CVE-2021-3733" } ] }, "sources": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "VULHUB", "id": "VHN-397442" }, { "db": "VULMON", "id": "CVE-2021-3733" }, { "db": "PACKETSTORM", "id": "165631" }, { "db": "PACKETSTORM", "id": "164190" }, { "db": "PACKETSTORM", "id": "166279" }, { "db": "PACKETSTORM", "id": "165209" }, { "db": "PACKETSTORM", "id": "164948" }, { "db": "PACKETSTORM", "id": "167023" }, { "db": "PACKETSTORM", "id": "166913" }, { "db": "PACKETSTORM", "id": "167043" }, { "db": "NVD", "id": "CVE-2021-3733" } ] }, "sources_release_date": { 
"@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2022-03-10T00:00:00", "db": "VULHUB", "id": "VHN-397442" }, { "date": "2022-03-10T00:00:00", "db": "VULMON", "id": "CVE-2021-3733" }, { "date": "2022-01-20T17:48:29", "db": "PACKETSTORM", "id": "165631" }, { "date": "2021-09-17T16:02:38", "db": "PACKETSTORM", "id": "164190" }, { "date": "2022-03-11T16:38:38", "db": "PACKETSTORM", "id": "166279" }, { "date": "2021-12-09T14:50:37", "db": "PACKETSTORM", "id": "165209" }, { "date": "2021-11-12T17:01:04", "db": "PACKETSTORM", "id": "164948" }, { "date": "2022-05-11T15:31:27", "db": "PACKETSTORM", "id": "167023" }, { "date": "2022-05-02T15:26:53", "db": "PACKETSTORM", "id": "166913" }, { "date": "2022-05-11T15:59:26", "db": "PACKETSTORM", "id": "167043" }, { "date": "2022-03-10T17:42:59.623000", "db": "NVD", "id": "CVE-2021-3733" } ] }, "sources_update_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2022-10-26T00:00:00", "db": "VULHUB", "id": "VHN-397442" }, { "date": "2022-10-26T00:00:00", "db": "VULMON", "id": "CVE-2021-3733" }, { "date": "2023-06-30T23:15:09.690000", "db": "NVD", "id": "CVE-2021-3733" } ] }, "title": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/title#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Red Hat Security Advisory 2022-0202-04", "sources": [ { "db": "PACKETSTORM", "id": "165631" } ], "trust": 0.1 }, "type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "csrf", "sources": [ { "db": "PACKETSTORM", "id": "166279" } ], "trust": 0.1 } }
var-202101-0565
Vulnerability from variot
There's a flaw in binutils /opcodes/tic4x-dis.c. An attacker who is able to submit a crafted input file to be processed by binutils could cause usage of uninitialized memory. The highest threat is to application availability, with a lower threat to data confidentiality. This flaw affects binutils versions prior to 2.34. binutils therefore contains a use-of-uninitialized-resource vulnerability; information may be obtained and a denial-of-service (DoS) condition may be caused. GNU Binutils (GNU Binary Utilities, or binutils) is a set of programming-language tools developed by the GNU community. It is designed primarily to handle object files in various formats and provides linkers, assemblers, and other tools for working with object files and archives. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - Gentoo Linux Security Advisory GLSA 202107-24
https://security.gentoo.org/
Severity: Normal
Title: Binutils: Multiple vulnerabilities
Date: July 10, 2021
Bugs: #678806, #761957, #764170
ID: 202107-24
Synopsis
Multiple vulnerabilities have been found in Binutils, the worst of which could result in a Denial of Service condition.
Background
The GNU Binutils are a collection of tools to create, modify and analyse binary files. Many of the files use BFD, the Binary File Descriptor library, to do low-level manipulation.
Affected packages
-------------------------------------------------------------------
Package / Vulnerable / Unaffected
-------------------------------------------------------------------
1 sys-devel/binutils < 2.35.2 >= 2.35.2
Description
Multiple vulnerabilities have been discovered in Binutils. Please review the CVE identifiers referenced below for details.
Impact
Please review the referenced CVE identifiers for details.
Workaround
There is no known workaround at this time.
Resolution
All Binutils users should upgrade to the latest version:
# emerge --sync
# emerge --ask --oneshot --verbose ">=sys-devel/binutils-2.35.2"
References
[ 1 ] CVE-2019-9070 https://nvd.nist.gov/vuln/detail/CVE-2019-9070
[ 2 ] CVE-2019-9071 https://nvd.nist.gov/vuln/detail/CVE-2019-9071
[ 3 ] CVE-2019-9072 https://nvd.nist.gov/vuln/detail/CVE-2019-9072
[ 4 ] CVE-2019-9073 https://nvd.nist.gov/vuln/detail/CVE-2019-9073
[ 5 ] CVE-2019-9074 https://nvd.nist.gov/vuln/detail/CVE-2019-9074
[ 6 ] CVE-2019-9075 https://nvd.nist.gov/vuln/detail/CVE-2019-9075
[ 7 ] CVE-2019-9076 https://nvd.nist.gov/vuln/detail/CVE-2019-9076
[ 8 ] CVE-2019-9077 https://nvd.nist.gov/vuln/detail/CVE-2019-9077
[ 9 ] CVE-2020-19599 https://nvd.nist.gov/vuln/detail/CVE-2020-19599
[ 10 ] CVE-2020-35448 https://nvd.nist.gov/vuln/detail/CVE-2020-35448
[ 11 ] CVE-2020-35493 https://nvd.nist.gov/vuln/detail/CVE-2020-35493
[ 12 ] CVE-2020-35494 https://nvd.nist.gov/vuln/detail/CVE-2020-35494
[ 13 ] CVE-2020-35495 https://nvd.nist.gov/vuln/detail/CVE-2020-35495
[ 14 ] CVE-2020-35496 https://nvd.nist.gov/vuln/detail/CVE-2020-35496
[ 15 ] CVE-2020-35507 https://nvd.nist.gov/vuln/detail/CVE-2020-35507
Availability
This GLSA and any updates to it are available for viewing at the Gentoo Security Website:
https://security.gentoo.org/glsa/202107-24
Concerns?
Security is a primary focus of Gentoo Linux and ensuring the confidentiality and security of our users' machines is of utmost importance to us. Any security concerns should be addressed to security@gentoo.org or alternatively, you may file a bug at https://bugs.gentoo.org.
License
Copyright 2021 Gentoo Foundation, Inc; referenced text belongs to its owner(s).
The contents of this document are licensed under the Creative Commons - Attribution / Share Alike license.
https://creativecommons.org/licenses/by-sa/2.5
Show details on source website{ "@context": { "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#", "affected_products": { "@id": "https://www.variotdbs.pl/ref/affected_products" }, "configurations": { "@id": "https://www.variotdbs.pl/ref/configurations" }, "credits": { "@id": "https://www.variotdbs.pl/ref/credits" }, "cvss": { "@id": "https://www.variotdbs.pl/ref/cvss/" }, "description": { "@id": "https://www.variotdbs.pl/ref/description/" }, "exploit_availability": { "@id": "https://www.variotdbs.pl/ref/exploit_availability/" }, "external_ids": { "@id": "https://www.variotdbs.pl/ref/external_ids/" }, "iot": { "@id": "https://www.variotdbs.pl/ref/iot/" }, "iot_taxonomy": { "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/" }, "patch": { "@id": "https://www.variotdbs.pl/ref/patch/" }, "problemtype_data": { "@id": "https://www.variotdbs.pl/ref/problemtype_data/" }, "references": { "@id": "https://www.variotdbs.pl/ref/references/" }, "sources": { "@id": "https://www.variotdbs.pl/ref/sources/" }, "sources_release_date": { "@id": "https://www.variotdbs.pl/ref/sources_release_date/" }, "sources_update_date": { "@id": "https://www.variotdbs.pl/ref/sources_update_date/" }, "threat_type": { "@id": "https://www.variotdbs.pl/ref/threat_type/" }, "title": { "@id": "https://www.variotdbs.pl/ref/title/" }, "type": { "@id": "https://www.variotdbs.pl/ref/type/" } }, "@id": "https://www.variotdbs.pl/vuln/VAR-202101-0565", "affected_products": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/affected_products#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "model": "fedora", "scope": "eq", "trust": 1.0, "vendor": "fedoraproject", "version": "32" }, { "model": "ontap select deploy administration utility", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "solidfire \\\u0026 hci management 
node", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "brocade fabric operating system", "scope": "eq", "trust": 1.0, "vendor": "broadcom", "version": null }, { "model": "solidfire\\, enterprise sds \\\u0026 hci storage node", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "binutils", "scope": "lt", "trust": 1.0, "vendor": "gnu", "version": "2.34" }, { "model": "hci compute node", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "cloud backup", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "ontap select deploy utility", "scope": null, "trust": 0.8, "vendor": "netapp", "version": null }, { "model": "solidfire", "scope": null, "trust": 0.8, "vendor": "netapp", "version": null }, { "model": "hci management node", "scope": null, "trust": 0.8, "vendor": "netapp", "version": null }, { "model": "binutils", "scope": null, "trust": 0.8, "vendor": "gnu", "version": null }, { "model": "fedora", "scope": null, "trust": 0.8, "vendor": "fedora", "version": null }, { "model": "hci compute node", "scope": null, "trust": 0.8, "vendor": "netapp", "version": null } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2020-015128" }, { "db": "NVD", "id": "CVE-2020-35494" } ] }, "configurations": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/configurations#", "children": { "@container": "@list" }, "cpe_match": { "@container": "@list" }, "data": { "@container": "@list" }, "nodes": { "@container": "@list" } }, "data": [ { "CVE_data_version": "4.0", "nodes": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:gnu:binutils:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "2.34", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:32:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": 
"cpe:2.3:a:netapp:cloud_backup:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:ontap_select_deploy_administration_utility:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:solidfire_\\\u0026_hci_management_node:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:solidfire\\,_enterprise_sds_\\\u0026_hci_storage_node:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:broadcom:brocade_fabric_operating_system_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:hci_compute_node_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:hci_compute_node:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" } ] } ], "sources": [ { "db": "NVD", "id": "CVE-2020-35494" } ] }, "credits": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/credits#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Gentoo", "sources": [ { "db": "PACKETSTORM", "id": "163455" } ], "trust": 0.1 }, "cve": "CVE-2020-35494", "cvss": { "@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2" }, "cvssV3": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, "severity": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": "https://www.variotdbs.pl/ref/cvss/severity" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, 
"@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "cvssV2": [ { "acInsufInfo": false, "accessComplexity": "MEDIUM", "accessVector": "NETWORK", "authentication": "NONE", "author": "NVD", "availabilityImpact": "PARTIAL", "baseScore": 5.8, "confidentialityImpact": "PARTIAL", "exploitabilityScore": 8.6, "impactScore": 4.9, "integrityImpact": "NONE", "obtainAllPrivilege": false, "obtainOtherPrivilege": false, "obtainUserPrivilege": false, "severity": "MEDIUM", "trust": 1.0, "userInteractionRequired": true, "vectorString": "AV:N/AC:M/Au:N/C:P/I:N/A:P", "version": "2.0" }, { "acInsufInfo": null, "accessComplexity": "Medium", "accessVector": "Network", "authentication": "None", "author": "NVD", "availabilityImpact": "Partial", "baseScore": 5.8, "confidentialityImpact": "Partial", "exploitabilityScore": null, "id": "CVE-2020-35494", "impactScore": null, "integrityImpact": "None", "obtainAllPrivilege": null, "obtainOtherPrivilege": null, "obtainUserPrivilege": null, "severity": "Medium", "trust": 0.9, "userInteractionRequired": null, "vectorString": "AV:N/AC:M/Au:N/C:P/I:N/A:P", "version": "2.0" }, { "accessComplexity": "MEDIUM", "accessVector": "NETWORK", "authentication": "NONE", "author": "VULHUB", "availabilityImpact": "PARTIAL", "baseScore": 5.8, "confidentialityImpact": "PARTIAL", "exploitabilityScore": 8.6, "id": "VHN-377690", "impactScore": 4.9, "integrityImpact": "NONE", "severity": "MEDIUM", "trust": 0.1, "vectorString": "AV:N/AC:M/AU:N/C:P/I:N/A:P", "version": "2.0" } ], "cvssV3": [ { "attackComplexity": "LOW", "attackVector": "LOCAL", "author": "NVD", "availabilityImpact": "HIGH", "baseScore": 6.1, "baseSeverity": "MEDIUM", "confidentialityImpact": "LOW", "exploitabilityScore": 1.8, "impactScore": 4.2, "integrityImpact": "NONE", "privilegesRequired": "NONE", "scope": "UNCHANGED", "trust": 1.0, "userInteraction": "REQUIRED", "vectorString": "CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:U/C:L/I:N/A:H", "version": "3.1" }, { "attackComplexity": "Low", "attackVector": 
"Local", "author": "NVD", "availabilityImpact": "High", "baseScore": 6.1, "baseSeverity": "Medium", "confidentialityImpact": "Low", "exploitabilityScore": null, "id": "CVE-2020-35494", "impactScore": null, "integrityImpact": "None", "privilegesRequired": "None", "scope": "Unchanged", "trust": 0.8, "userInteraction": "Required", "vectorString": "CVSS:3.0/AV:L/AC:L/PR:N/UI:R/S:U/C:L/I:N/A:H", "version": "3.0" } ], "severity": [ { "author": "NVD", "id": "CVE-2020-35494", "trust": 1.8, "value": "MEDIUM" }, { "author": "CNNVD", "id": "CNNVD-202101-080", "trust": 0.6, "value": "MEDIUM" }, { "author": "VULHUB", "id": "VHN-377690", "trust": 0.1, "value": "MEDIUM" }, { "author": "VULMON", "id": "CVE-2020-35494", "trust": 0.1, "value": "MEDIUM" } ] } ], "sources": [ { "db": "VULHUB", "id": "VHN-377690" }, { "db": "VULMON", "id": "CVE-2020-35494" }, { "db": "JVNDB", "id": "JVNDB-2020-015128" }, { "db": "NVD", "id": "CVE-2020-35494" }, { "db": "CNNVD", "id": "CNNVD-202101-080" } ] }, "description": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "There\u0027s a flaw in binutils /opcodes/tic4x-dis.c. An attacker who is able to submit a crafted input file to be processed by binutils could cause usage of uninitialized memory. The highest threat is to application availability with a lower threat to data confidentiality. This flaw affects binutils versions prior to 2.34. binutils There is a vulnerability in the use of uninitialized resources.Information is obtained and denial of service (DoS) It may be put into a state. GNU Binutils (GNU Binary Utilities or binutils) is a set of programming language tool programs developed by the GNU community. The program is primarily designed to handle object files in various formats and provides linkers, assemblers, and other tools for object files and archives. 
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\nGentoo Linux Security Advisory GLSA 202107-24\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n https://security.gentoo.org/\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\n Severity: Normal\n Title: Binutils: Multiple vulnerabilities\n Date: July 10, 2021\n Bugs: #678806, #761957, #764170\n ID: 202107-24\n\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\nSynopsis\n========\n\nMultiple vulnerabilities have been found in Binutils, the worst of\nwhich could result in a Denial of Service condition. \n\nBackground\n==========\n\nThe GNU Binutils are a collection of tools to create, modify and\nanalyse binary files. Many of the files use BFD, the Binary File\nDescriptor library, to do low-level manipulation. \n\nAffected packages\n=================\n\n -------------------------------------------------------------------\n Package / Vulnerable / Unaffected\n -------------------------------------------------------------------\n 1 sys-devel/binutils \u003c 2.35.2 \u003e= 2.35.2 \n\nDescription\n===========\n\nMultiple vulnerabilities have been discovered in Binutils. Please\nreview the CVE identifiers referenced below for details. \n\nImpact\n======\n\nPlease review the referenced CVE identifiers for details. \n\nWorkaround\n==========\n\nThere is no known workaround at this time. 
\n\nResolution\n==========\n\nAll Binutils users should upgrade to the latest version:\n\n # emerge --sync\n # emerge --ask --oneshot --verbose \"\u003e=sys-devel/binutils-2.35.2\"\n\nReferences\n==========\n\n[ 1 ] CVE-2019-9070\n https://nvd.nist.gov/vuln/detail/CVE-2019-9070\n[ 2 ] CVE-2019-9071\n https://nvd.nist.gov/vuln/detail/CVE-2019-9071\n[ 3 ] CVE-2019-9072\n https://nvd.nist.gov/vuln/detail/CVE-2019-9072\n[ 4 ] CVE-2019-9073\n https://nvd.nist.gov/vuln/detail/CVE-2019-9073\n[ 5 ] CVE-2019-9074\n https://nvd.nist.gov/vuln/detail/CVE-2019-9074\n[ 6 ] CVE-2019-9075\n https://nvd.nist.gov/vuln/detail/CVE-2019-9075\n[ 7 ] CVE-2019-9076\n https://nvd.nist.gov/vuln/detail/CVE-2019-9076\n[ 8 ] CVE-2019-9077\n https://nvd.nist.gov/vuln/detail/CVE-2019-9077\n[ 9 ] CVE-2020-19599\n https://nvd.nist.gov/vuln/detail/CVE-2020-19599\n[ 10 ] CVE-2020-35448\n https://nvd.nist.gov/vuln/detail/CVE-2020-35448\n[ 11 ] CVE-2020-35493\n https://nvd.nist.gov/vuln/detail/CVE-2020-35493\n[ 12 ] CVE-2020-35494\n https://nvd.nist.gov/vuln/detail/CVE-2020-35494\n[ 13 ] CVE-2020-35495\n https://nvd.nist.gov/vuln/detail/CVE-2020-35495\n[ 14 ] CVE-2020-35496\n https://nvd.nist.gov/vuln/detail/CVE-2020-35496\n[ 15 ] CVE-2020-35507\n https://nvd.nist.gov/vuln/detail/CVE-2020-35507\n\nAvailability\n============\n\nThis GLSA and any updates to it are available for viewing at\nthe Gentoo Security Website:\n\n https://security.gentoo.org/glsa/202107-24\n\nConcerns?\n=========\n\nSecurity is a primary focus of Gentoo Linux and ensuring the\nconfidentiality and security of our users\u0027 machines is of utmost\nimportance to us. Any security concerns should be addressed to\nsecurity@gentoo.org or alternatively, you may file a bug at\nhttps://bugs.gentoo.org. \n\nLicense\n=======\n\nCopyright 2021 Gentoo Foundation, Inc; referenced text\nbelongs to its owner(s). \n\nThe contents of this document are licensed under the\nCreative Commons - Attribution / Share Alike license. 
\n\nhttps://creativecommons.org/licenses/by-sa/2.5\n\n", "sources": [ { "db": "NVD", "id": "CVE-2020-35494" }, { "db": "JVNDB", "id": "JVNDB-2020-015128" }, { "db": "VULHUB", "id": "VHN-377690" }, { "db": "VULMON", "id": "CVE-2020-35494" }, { "db": "PACKETSTORM", "id": "163455" } ], "trust": 1.89 }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "NVD", "id": "CVE-2020-35494", "trust": 2.7 }, { "db": "PACKETSTORM", "id": "163455", "trust": 0.8 }, { "db": "JVNDB", "id": "JVNDB-2020-015128", "trust": 0.8 }, { "db": "CNNVD", "id": "CNNVD-202101-080", "trust": 0.7 }, { "db": "VULHUB", "id": "VHN-377690", "trust": 0.1 }, { "db": "VULMON", "id": "CVE-2020-35494", "trust": 0.1 } ], "sources": [ { "db": "VULHUB", "id": "VHN-377690" }, { "db": "VULMON", "id": "CVE-2020-35494" }, { "db": "JVNDB", "id": "JVNDB-2020-015128" }, { "db": "PACKETSTORM", "id": "163455" }, { "db": "NVD", "id": "CVE-2020-35494" }, { "db": "CNNVD", "id": "CNNVD-202101-080" } ] }, "id": "VAR-202101-0565", "iot": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": true, "sources": [ { "db": "VULHUB", "id": "VHN-377690" } ], "trust": 0.01 }, "last_update_date": "2023-12-18T11:21:26.634000Z", "patch": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/patch#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "title": "NTAP-20210212-0007", "trust": 0.8, "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/4kok3qwsvoujwj54hvgifwnlwq5zy4s6/" }, { "title": "GNU Binutils Security vulnerabilities", "trust": 0.6, "url": 
"http://www.cnnvd.org.cn/web/xxk/bdxqbyid.tag?id=138341" }, { "title": "", "trust": 0.1, "url": "https://github.com/vincent-deng/veracode-container-security-finding-parser " } ], "sources": [ { "db": "VULMON", "id": "CVE-2020-35494" }, { "db": "JVNDB", "id": "JVNDB-2020-015128" }, { "db": "CNNVD", "id": "CNNVD-202101-080" } ] }, "problemtype_data": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "problemtype": "CWE-908", "trust": 1.1 }, { "problemtype": "Use of uninitialized resources (CWE-908) [ Other ]", "trust": 0.8 } ], "sources": [ { "db": "VULHUB", "id": "VHN-377690" }, { "db": "JVNDB", "id": "JVNDB-2020-015128" }, { "db": "NVD", "id": "CVE-2020-35494" } ] }, "references": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/references#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 2.6, "url": "https://bugzilla.redhat.com/show_bug.cgi?id=1911439" }, { "trust": 1.9, "url": "https://security.gentoo.org/glsa/202107-24" }, { "trust": 1.8, "url": "https://security.netapp.com/advisory/ntap-20210212-0007/" }, { "trust": 1.5, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35494" }, { "trust": 1.0, "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/4kok3qwsvoujwj54hvgifwnlwq5zy4s6/" }, { "trust": 0.8, "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/4kok3qwsvoujwj54hvgifwnlwq5zy4s6/" }, { "trust": 0.6, "url": "https://www.ibm.com/blogs/psirt/security-bulletin-multiple-vulnerabilities-in-gnu-binutils-affect-ibm-netezza-analytics/" }, { "trust": 0.6, "url": "https://www.ibm.com/blogs/psirt/security-bulletin-multiple-vulnerabilities-in-gnu-binutils-affect-ibm-netezza-analytics-for-nps/" }, { 
"trust": 0.6, "url": "https://packetstormsecurity.com/files/163455/gentoo-linux-security-advisory-202107-24.html" }, { "trust": 0.6, "url": "https://vigilance.fr/vulnerability/binutils-information-disclosure-via-tic4x-print-cond-34253" }, { "trust": 0.6, "url": "https://www.ibm.com/blogs/psirt/security-bulletin-multiple-vulnerabilities-in-gnu-binutils-affect-ibm-netezza-performance-server/" }, { "trust": 0.1, "url": "https://cwe.mitre.org/data/definitions/908.html" }, { "trust": 0.1, "url": "https://github.com/live-hack-cve/cve-2020-35494" }, { "trust": 0.1, "url": "https://nvd.nist.gov" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35495" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-19599" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-9071" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-9077" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35493" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-9073" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-9072" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35448" }, { "trust": 0.1, "url": "https://security.gentoo.org/" }, { "trust": 0.1, "url": "https://creativecommons.org/licenses/by-sa/2.5" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-9074" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35507" }, { "trust": 0.1, "url": "https://bugs.gentoo.org." 
}, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-9070" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35496" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-9076" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-9075" } ], "sources": [ { "db": "VULHUB", "id": "VHN-377690" }, { "db": "VULMON", "id": "CVE-2020-35494" }, { "db": "JVNDB", "id": "JVNDB-2020-015128" }, { "db": "PACKETSTORM", "id": "163455" }, { "db": "NVD", "id": "CVE-2020-35494" }, { "db": "CNNVD", "id": "CNNVD-202101-080" } ] }, "sources": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "VULHUB", "id": "VHN-377690" }, { "db": "VULMON", "id": "CVE-2020-35494" }, { "db": "JVNDB", "id": "JVNDB-2020-015128" }, { "db": "PACKETSTORM", "id": "163455" }, { "db": "NVD", "id": "CVE-2020-35494" }, { "db": "CNNVD", "id": "CNNVD-202101-080" } ] }, "sources_release_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2021-01-04T00:00:00", "db": "VULHUB", "id": "VHN-377690" }, { "date": "2021-01-04T00:00:00", "db": "VULMON", "id": "CVE-2020-35494" }, { "date": "2021-09-10T00:00:00", "db": "JVNDB", "id": "JVNDB-2020-015128" }, { "date": "2021-07-11T12:01:11", "db": "PACKETSTORM", "id": "163455" }, { "date": "2021-01-04T15:15:13.200000", "db": "NVD", "id": "CVE-2020-35494" }, { "date": "2021-01-04T00:00:00", "db": "CNNVD", "id": "CNNVD-202101-080" } ] }, "sources_update_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2022-09-02T00:00:00", "db": "VULHUB", "id": "VHN-377690" }, { "date": "2022-09-02T00:00:00", "db": "VULMON", "id": "CVE-2020-35494" }, { "date": "2021-09-10T07:59:00", "db": "JVNDB", "id": "JVNDB-2020-015128" }, { "date": "2023-11-07T03:21:55.540000", "db": "NVD", 
"id": "CVE-2020-35494" }, { "date": "2022-09-05T00:00:00", "db": "CNNVD", "id": "CNNVD-202101-080" } ] }, "threat_type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/threat_type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "local", "sources": [ { "db": "CNNVD", "id": "CNNVD-202101-080" } ], "trust": 0.6 }, "title": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/title#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "binutils\u00a0 Vulnerability in using uninitialized resources in", "sources": [ { "db": "JVNDB", "id": "JVNDB-2020-015128" } ], "trust": 0.8 }, "type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "other", "sources": [ { "db": "CNNVD", "id": "CNNVD-202101-080" } ], "trust": 0.6 } }
var-202105-1469
Vulnerability from variot
A use of uninitialized value was found in libwebp in versions before 1.0.1 in ReadSymbol(). libwebp contains a use-of-uninitialized-resources vulnerability: information may be obtained, information may be altered, and a denial-of-service (DoS) condition may result. Versions of libwebp prior to 1.0.1 are affected. The flaw stems from the use of an uninitialized variable in the ReadSymbol() function. The primary threats are to data confidentiality, data integrity, and system availability. -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
APPLE-SA-2021-07-21-1 iOS 14.7 and iPadOS 14.7
iOS 14.7 and iPadOS 14.7 addresses the following issues. Information about the security content is also available at https://support.apple.com/HT212601.
iOS 14.7 released July 19, 2021; iPadOS 14.7 released July 21, 2021
ActionKit Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A shortcut may be able to bypass Internet permission requirements Description: An input validation issue was addressed with improved input validation. CVE-2021-30763: Zachary Keffaber (@QuickUpdate5)
Audio Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A local attacker may be able to cause unexpected application termination or arbitrary code execution Description: This issue was addressed with improved checks. CVE-2021-30781: tr3e
AVEVideoEncoder Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: An application may be able to execute arbitrary code with kernel privileges Description: A memory corruption issue was addressed with improved state management. CVE-2021-30748: George Nosenko
CoreAudio Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing a maliciously crafted audio file may lead to arbitrary code execution Description: A memory corruption issue was addressed with improved state management. CVE-2021-30775: JunDong Xie of Ant Security Light-Year Lab
CoreAudio Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Playing a malicious audio file may lead to an unexpected application termination Description: A logic issue was addressed with improved validation. CVE-2021-30776: JunDong Xie of Ant Security Light-Year Lab
CoreGraphics Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Opening a maliciously crafted PDF file may lead to an unexpected application termination or arbitrary code execution Description: A race condition was addressed with improved state handling. CVE-2021-30786: ryuzaki
CoreText Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing a maliciously crafted font file may lead to arbitrary code execution Description: An out-of-bounds read was addressed with improved input validation. CVE-2021-30789: Mickey Jin (@patch1t) of Trend Micro, Sunglin of Knownsec 404 team
Crash Reporter Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A malicious application may be able to gain root privileges Description: A logic issue was addressed with improved validation. CVE-2021-30774: Yizhuo Wang of Group of Software Security In Progress (G.O.S.S.I.P) at Shanghai Jiao Tong University
CVMS Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A malicious application may be able to gain root privileges Description: An out-of-bounds write issue was addressed with improved bounds checking. CVE-2021-30780: Tim Michaud(@TimGMichaud) of Zoom Video Communications
dyld Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A sandboxed process may be able to circumvent sandbox restrictions Description: A logic issue was addressed with improved validation. CVE-2021-30768: Linus Henze (pinauten.de)
Find My Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A malicious application may be able to access Find My data Description: A permissions issue was addressed with improved validation. CVE-2021-30804: Csaba Fitzl (@theevilbit) of Offensive Security
FontParser Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing a maliciously crafted font file may lead to arbitrary code execution Description: An integer overflow was addressed through improved input validation. CVE-2021-30760: Sunglin of Knownsec 404 team
FontParser Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing a maliciously crafted tiff file may lead to a denial-of-service or potentially disclose memory contents Description: This issue was addressed with improved checks. CVE-2021-30788: tr3e working with Trend Micro Zero Day Initiative
FontParser Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing a maliciously crafted font file may lead to arbitrary code execution Description: A stack overflow was addressed with improved input validation. CVE-2021-30759: hjy79425575 working with Trend Micro Zero Day Initiative
Identity Service Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A malicious application may be able to bypass code signing checks Description: An issue in code signature validation was addressed with improved checks. CVE-2021-30773: Linus Henze (pinauten.de)
Image Processing Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: A use after free issue was addressed with improved memory management. CVE-2021-30802: Matthew Denton of Google Chrome Security
ImageIO Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing a maliciously crafted image may lead to arbitrary code execution Description: This issue was addressed with improved checks. CVE-2021-30779: Jzhu, Ye Zhang(@co0py_Cat) of Baidu Security
ImageIO Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing a maliciously crafted image may lead to arbitrary code execution Description: A buffer overflow was addressed with improved bounds checking. CVE-2021-30785: CFF of Topsec Alpha Team, Mickey Jin (@patch1t) of Trend Micro
Kernel Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A malicious attacker with arbitrary read and write capability may be able to bypass Pointer Authentication Description: A logic issue was addressed with improved state management. CVE-2021-30769: Linus Henze (pinauten.de)
Kernel Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: An attacker that has already achieved kernel code execution may be able to bypass kernel memory mitigations Description: A logic issue was addressed with improved validation. CVE-2021-30770: Linus Henze (pinauten.de)
libxml2 Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A remote attacker may be able to cause arbitrary code execution Description: This issue was addressed with improved checks. CVE-2021-3518
Measure Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Multiple issues in libwebp Description: Multiple issues were addressed by updating to version 1.2.0. CVE-2018-25010 CVE-2018-25011 CVE-2018-25014 CVE-2020-36328 CVE-2020-36329 CVE-2020-36330 CVE-2020-36331
Model I/O Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing a maliciously crafted image may lead to a denial of service Description: A logic issue was addressed with improved validation. CVE-2021-30796: Mickey Jin (@patch1t) of Trend Micro
Model I/O Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing a maliciously crafted image may lead to arbitrary code execution Description: An out-of-bounds write was addressed with improved input validation. CVE-2021-30792: Anonymous working with Trend Micro Zero Day Initiative
Model I/O Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing a maliciously crafted file may disclose user information Description: An out-of-bounds read was addressed with improved bounds checking. CVE-2021-30791: Anonymous working with Trend Micro Zero Day Initiative
TCC Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A malicious application may be able to bypass certain Privacy preferences Description: A logic issue was addressed with improved state management. CVE-2021-30798: Mickey Jin (@patch1t) of Trend Micro
WebKit Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: A type confusion issue was addressed with improved state handling. CVE-2021-30758: Christoph Guttandin of Media Codings
WebKit Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: A use after free issue was addressed with improved memory management. CVE-2021-30795: Sergei Glazunov of Google Project Zero
WebKit Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing maliciously crafted web content may lead to code execution Description: This issue was addressed with improved checks. CVE-2021-30797: Ivan Fratric of Google Project Zero
WebKit Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: Multiple memory corruption issues were addressed with improved memory handling. CVE-2021-30799: Sergei Glazunov of Google Project Zero
Wi-Fi Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Joining a malicious Wi-Fi network may result in a denial of service or arbitrary code execution Description: This issue was addressed with improved checks. CVE-2021-30800: vm_call, Nozhdar Abdulkhaleq Shukri
Additional recognition
Assets We would like to acknowledge Cees Elzinga for their assistance.
CoreText We would like to acknowledge Mickey Jin (@patch1t) of Trend Micro for their assistance.
Safari We would like to acknowledge an anonymous researcher for their assistance.
Sandbox We would like to acknowledge Csaba Fitzl (@theevilbit) of Offensive Security for their assistance.
Installation note:
This update is available through iTunes and Software Update on your iOS device, and will not appear in your computer's Software Update application, or in the Apple Downloads site. Make sure you have an Internet connection and have installed the latest version of iTunes from https://www.apple.com/itunes/
iTunes and Software Update on the device will automatically check Apple's update server on its weekly schedule. When an update is detected, it is downloaded and the option to be installed is presented to the user when the iOS device is docked. We recommend applying the update immediately if possible. Selecting Don't Install will present the option the next time you connect your iOS device. The automatic update process may take up to a week depending on the day that iTunes or the device checks for updates. You may manually obtain the update via the Check for Updates button within iTunes, or the Software Update on your device.
To check that the iPhone, iPod touch, or iPad has been updated: * Navigate to Settings * Select General * Select About * The version after applying this update will be "14.7"
Information will also be posted to the Apple Security Updates web site: https://support.apple.com/kb/HT201222
This message is signed with Apple's Product Security PGP key, and details are available at: https://www.apple.com/support/security/pgp/
-----BEGIN PGP SIGNATURE-----
iQIzBAEBCAAdFiEEbURczHs1TP07VIfuZcsbuWJ6jjAFAmD4r8YACgkQZcsbuWJ6 jjB5LBAAkEy25fNpo8rg42bsyJwWsSQQxPN79JFxQ6L8tqdsM+MZk86dUKtsRQ47 mxarMf4uBwiIOtrGSCGHLIxXAzLqPY47NDhO+ls0dVxGMETkoR/287AeLnw2ITh3 DM0H/pco4hRhPh8neYTMjNPMAgkepx+r7IqbaHWapn42nRC4/2VkEtVGltVDLs3L K0UQP0cjy2w9KvRF33H3uKNCaCTJrVkDBLKWC7rPPpomwp3bfmbQHjs0ixV5Y8l5 3MfNmCuhIt34zAjVELvbE/PUXgkmsECbXHNZOct7ZLAbceneVKtSmynDtoEN0ajM JiJ6j+FCtdfB3xHk3cHqB6sQZm7fDxdK3z91MZvSZwwmdhJeHD/TxcItRlHNOYA1 FSi0Q954DpIqz3Fs4DGE7Vwz0g5+o5qup8cnw9oLXBdqZwWANuLsQlHlioPbcDhl r1DmwtghmDYFUeSMnzHu/iuRepEju+BRMS3ybCm5j+I3kyvAV8pyvqNNRLfJn+w+ Wl/lwXTtXbgsNPR7WJCBJffxB0gOGZaIG1blSGCY89t2if0vD95R5sRsrnaxuqWc qmtRdBfbmjxk/G+6t1sd4wFglTNovHiLIHXh17cwdIWMB35yFs7VA35833/rF4Oo jOF1D12o58uAewxAsK+cTixe7I9U5Awkad2Jz19V3qHnRWGqtVg\x8e1h -----END PGP SIGNATURE-----
. Relevant releases/architectures:
Red Hat Enterprise Linux Server (v. 7) - ppc64, ppc64le, s390x, x86_64 Red Hat Enterprise Linux Server Optional (v. 7) - noarch Red Hat Enterprise Linux Workstation (v. 7) - x86_64 Red Hat Enterprise Linux Workstation Optional (v. 7) - noarch
- Description:
The Qt Image Formats in an add-on module for the core Qt Gui library that provides support for additional image formats including MNG, TGA, TIFF, WBMP, and WebP.
Security Fix(es):
- libwebp: heap-based buffer overflow in PutLE16() (CVE-2018-25011)

- libwebp: use of uninitialized value in ReadSymbol() (CVE-2018-25014)

- libwebp: heap-based buffer overflow in WebPDecode*Into functions (CVE-2020-36328)

- libwebp: use-after-free in EmitFancyRGB() in dec/io_dec.c (CVE-2020-36329)
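For background on the PutLE16() overflow listed above: helpers of this kind write a 16-bit value into a byte buffer in little-endian order, and the heap-overflow class of bug arises when the write lands past the end of the destination. A minimal Python sketch of the bounds-checked pattern (the function name mirrors the advisory; the implementation is an illustration, not libwebp's actual code):

```python
def put_le16(buf: bytearray, offset: int, value: int) -> None:
    """Write a 16-bit value at `offset` in little-endian order,
    refusing to write past the end of the buffer."""
    if not 0 <= value <= 0xFFFF:
        raise ValueError("value does not fit in 16 bits")
    if offset < 0 or offset + 2 > len(buf):   # the bounds check whose absence causes the overflow
        raise IndexError("write would overflow the destination buffer")
    buf[offset] = value & 0xFF                # low byte first
    buf[offset + 1] = (value >> 8) & 0xFF     # then high byte

buf = bytearray(4)
put_le16(buf, 0, 0x1234)
print(buf.hex())  # 34120000
```

In C the unchecked version compiles and silently corrupts adjacent heap data; the check turns an out-of-bounds write into a recoverable error.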
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section. Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/11258
- Bugs fixed (https://bugzilla.redhat.com/):
1956829 - CVE-2020-36328 libwebp: heap-based buffer overflow in WebPDecode*Into functions 1956843 - CVE-2020-36329 libwebp: use-after-free in EmitFancyRGB() in dec/io_dec.c 1956919 - CVE-2018-25011 libwebp: heap-based buffer overflow in PutLE16() 1956927 - CVE-2018-25014 libwebp: use of uninitialized value in ReadSymbol()
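The ReadSymbol() entry above (CVE-2018-25014) is a use-of-uninitialized-value bug (CWE-908). A toy Python model of that bug class, hypothetical and not libwebp's actual logic: a decoder reuses a scratch buffer between frames, and a truncated input makes the fill loop exit early, so stale data reaches the output.

```python
def decode_frame(bits, scratch):
    """Fill `scratch` from the input, bailing out early on truncation."""
    for i, b in enumerate(bits):
        if b is None:          # truncated / invalid input: early exit
            return scratch     # BUG: scratch[i:] still holds old data
        scratch[i] = b
    return scratch

def decode_frame_fixed(bits, scratch):
    """Same loop, but the scratch area is cleared before any early exit."""
    for i in range(len(scratch)):
        scratch[i] = 0         # fix: initialize before use
    for i, b in enumerate(bits):
        if b is None:
            return scratch
        scratch[i] = b
    return scratch

stale = [0x41] * 4                              # leftovers from a previous frame
print(decode_frame([7, None], list(stale)))     # [7, 65, 65, 65] -- stale bytes leak
print(decode_frame_fixed([7, None], list(stale)))  # [7, 0, 0, 0]
```

In C the leaked bytes would be whatever happened to sit in heap memory, which is why the advisory rates data confidentiality as part of the impact.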
- Package List:
Red Hat Enterprise Linux Server (v. 7):
Source: qt5-qtimageformats-5.9.7-2.el7_9.src.rpm
ppc64: qt5-qtimageformats-5.9.7-2.el7_9.ppc.rpm qt5-qtimageformats-5.9.7-2.el7_9.ppc64.rpm qt5-qtimageformats-debuginfo-5.9.7-2.el7_9.ppc.rpm qt5-qtimageformats-debuginfo-5.9.7-2.el7_9.ppc64.rpm
ppc64le: qt5-qtimageformats-5.9.7-2.el7_9.ppc64le.rpm qt5-qtimageformats-debuginfo-5.9.7-2.el7_9.ppc64le.rpm
s390x: qt5-qtimageformats-5.9.7-2.el7_9.s390.rpm qt5-qtimageformats-5.9.7-2.el7_9.s390x.rpm qt5-qtimageformats-debuginfo-5.9.7-2.el7_9.s390.rpm qt5-qtimageformats-debuginfo-5.9.7-2.el7_9.s390x.rpm
x86_64: qt5-qtimageformats-5.9.7-2.el7_9.i686.rpm qt5-qtimageformats-5.9.7-2.el7_9.x86_64.rpm qt5-qtimageformats-debuginfo-5.9.7-2.el7_9.i686.rpm qt5-qtimageformats-debuginfo-5.9.7-2.el7_9.x86_64.rpm
Red Hat Enterprise Linux Server Optional (v. 7):
noarch: qt5-qtimageformats-doc-5.9.7-2.el7_9.noarch.rpm
Red Hat Enterprise Linux Workstation (v. 7):
Source: qt5-qtimageformats-5.9.7-2.el7_9.src.rpm
x86_64:
qt5-qtimageformats-5.9.7-2.el7_9.i686.rpm
qt5-qtimageformats-5.9.7-2.el7_9.x86_64.rpm
qt5-qtimageformats-debuginfo-5.9.7-2.el7_9.i686.rpm
qt5-qtimageformats-debuginfo-5.9.7-2.el7_9.x86_64.rpm
Red Hat Enterprise Linux Workstation Optional (v. 7):
Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
- Bugs fixed (https://bugzilla.redhat.com/):
1944888 - CVE-2021-21409 netty: Request smuggling via content-length header
2004133 - CVE-2021-37136 netty-codec: Bzip2Decoder doesn't allow setting size restrictions for decompressed data
2004135 - CVE-2021-37137 netty-codec: SnappyFrameDecoder doesn't restrict chunk length and may buffer skippable chunks in an unnecessary way
2030932 - CVE-2021-44228 log4j-core: Remote code execution in Log4j 2.x when logs contain an attacker-controlled string value
- JIRA issues fixed (https://issues.jboss.org/):
LOG-1775 - [release-5.2] Syslog output is serializing json incorrectly
LOG-1824 - [release-5.2] Rejected by Elasticsearch and unexpected json-parsing
LOG-1963 - [release-5.2] CLO panic: runtime error: slice bounds out of range [:-1]
LOG-1970 - Applying cluster state is causing elasticsearch to hit an issue and become unusable
- 8) - aarch64, ppc64le, s390x, x86_64
- Description:
The libwebp packages provide a library and tools for the WebP graphics format. WebP is an image format with a lossy compression of digital photographic images. WebP consists of a codec based on the VP8 format, and a container based on the Resource Interchange File Format (RIFF). Webmasters, web developers and browser developers can use WebP to compress, archive, and distribute digital images more efficiently.
- Bugs fixed (https://bugzilla.redhat.com/):
1956853 - CVE-2020-36330 libwebp: out-of-bounds read in ChunkVerifyAndAssign() in mux/muxread.c
1956856 - CVE-2020-36331 libwebp: out-of-bounds read in ChunkAssignData() in mux/muxinternal.c
1956868 - CVE-2020-36332 libwebp: excessive memory allocation when reading a file
1956917 - CVE-2018-25009 libwebp: out-of-bounds read in WebPMuxCreateInternal
1956918 - CVE-2018-25010 libwebp: out-of-bounds read in ApplyFilter()
1956922 - CVE-2018-25012 libwebp: out-of-bounds read in WebPMuxCreateInternal()
1956926 - CVE-2018-25013 libwebp: out-of-bounds read in ShiftBytes()
1956927 - CVE-2018-25014 libwebp: use of uninitialized value in ReadSymbol()
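The RIFF container mentioned in the description above has a fixed 12-byte file header: the fourcc 'RIFF', a little-endian 32-bit payload size, and the form type 'WEBP'. A short sketch of that layout — `parse_webp_header` is a hypothetical helper for illustration, not part of libwebp:

```python
import struct

def parse_webp_header(data: bytes) -> int:
    """Parse the 12-byte RIFF header that starts every WebP file.

    Layout: b'RIFF' | uint32 LE payload size | b'WEBP'.
    Returns the payload size declared by the header.
    """
    if len(data) < 12:
        raise ValueError("too short to be a RIFF file")
    fourcc, size, form = struct.unpack("<4sI4s", data[:12])
    if fourcc != b"RIFF" or form != b"WEBP":
        raise ValueError("not a WebP/RIFF file")
    return size

# Minimal header-only sample declaring a 4-byte payload (the form type).
sample = b"RIFF" + struct.pack("<I", 4) + b"WEBP"
print(parse_webp_header(sample))  # -> 4
```

Several of the out-of-bounds reads listed above (e.g. in the mux chunk readers) involve trusting declared sizes like this one without checking them against the actual data length.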
- Bugs fixed (https://bugzilla.redhat.com/):
1963232 - CVE-2021-33194 golang: x/net/html: infinite loop in ParseFragment
- JIRA issues fixed (https://issues.jboss.org/):
LOG-1168 - Disable hostname verification in syslog TLS settings
LOG-1235 - Using HTTPS without a secret does not translate into the correct 'scheme' value in Fluentd
LOG-1375 - ssl_ca_cert should be optional
LOG-1378 - CLO should support sasl_plaintext(Password over http)
LOG-1392 - In fluentd config, flush_interval can't be set with flush_mode=immediate
LOG-1494 - Syslog output is serializing json incorrectly
LOG-1555 - Fluentd logs emit transaction failed: error_class=NoMethodError while forwarding to external syslog server
LOG-1575 - Rejected by Elasticsearch and unexpected json-parsing
LOG-1735 - Regression introducing flush_at_shutdown
LOG-1774 - The collector logs should be excluded in fluent.conf
LOG-1776 - fluentd total_limit_size sets value beyond available space
LOG-1822 - OpenShift Alerting Rules Style-Guide Compliance
LOG-1859 - CLO Should not error and exit early on missing ca-bundle when cluster wide proxy is not enabled
LOG-1862 - Unsupported kafka parameters when enabled Kafka SASL
LOG-1903 - Fix the Display of ClusterLogging type in OLM
LOG-1911 - CLF API changes to Opt-in to multiline error detection
LOG-1918 - Alert FluentdNodeDown always firing
LOG-1939 - Opt-in multiline detection breaks cloudwatch forwarding
====================================================================
Red Hat Security Advisory
Synopsis: Important: OpenShift Container Platform 4.11.0 bug fix and security update
Advisory ID: RHSA-2022:5069-01
Product: Red Hat OpenShift Enterprise
Advisory URL: https://access.redhat.com/errata/RHSA-2022:5069
Issue date: 2022-08-10
CVE Names: CVE-2018-25009 CVE-2018-25010 CVE-2018-25012 CVE-2018-25013 CVE-2018-25014 CVE-2018-25032 CVE-2019-5827 CVE-2019-13750 CVE-2019-13751 CVE-2019-17594 CVE-2019-17595 CVE-2019-18218 CVE-2019-19603 CVE-2019-20838 CVE-2020-13435 CVE-2020-14155 CVE-2020-17541 CVE-2020-19131 CVE-2020-24370 CVE-2020-28493 CVE-2020-35492 CVE-2020-36330 CVE-2020-36331 CVE-2020-36332 CVE-2021-3481 CVE-2021-3580 CVE-2021-3634 CVE-2021-3672 CVE-2021-3695 CVE-2021-3696 CVE-2021-3697 CVE-2021-3737 CVE-2021-4115 CVE-2021-4156 CVE-2021-4189 CVE-2021-20095 CVE-2021-20231 CVE-2021-20232 CVE-2021-23177 CVE-2021-23566 CVE-2021-23648 CVE-2021-25219 CVE-2021-31535 CVE-2021-31566 CVE-2021-36084 CVE-2021-36085 CVE-2021-36086 CVE-2021-36087 CVE-2021-38185 CVE-2021-38593 CVE-2021-40528 CVE-2021-41190 CVE-2021-41617 CVE-2021-42771 CVE-2021-43527 CVE-2021-43818 CVE-2021-44225 CVE-2021-44906 CVE-2022-0235 CVE-2022-0778 CVE-2022-1012 CVE-2022-1215 CVE-2022-1271 CVE-2022-1292 CVE-2022-1586 CVE-2022-1621 CVE-2022-1629 CVE-2022-1706 CVE-2022-1729 CVE-2022-2068 CVE-2022-2097 CVE-2022-21698 CVE-2022-22576 CVE-2022-23772 CVE-2022-23773 CVE-2022-23806 CVE-2022-24407 CVE-2022-24675 CVE-2022-24903 CVE-2022-24921 CVE-2022-25313 CVE-2022-25314 CVE-2022-26691 CVE-2022-26945 CVE-2022-27191 CVE-2022-27774 CVE-2022-27776 CVE-2022-27782 CVE-2022-28327 CVE-2022-28733 CVE-2022-28734 CVE-2022-28735 CVE-2022-28736 CVE-2022-28737 CVE-2022-29162 CVE-2022-29810 CVE-2022-29824 CVE-2022-30321 CVE-2022-30322 CVE-2022-30323 CVE-2022-32250
====================================================================
1. Summary:
Red Hat OpenShift Container Platform release 4.11.0 is now available with updates to packages and images that fix several bugs and add enhancements.
This release includes a security update for Red Hat OpenShift Container Platform 4.11.
Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Description:
Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.
This advisory contains the container images for Red Hat OpenShift Container Platform 4.11.0. See the following advisory for the RPM packages for this release:
https://access.redhat.com/errata/RHSA-2022:5068
Space precludes documenting all of the container images in this advisory. See the following Release Notes documentation, which will be updated shortly for this release, for details about these changes:
https://docs.openshift.com/container-platform/4.11/release_notes/ocp-4-11-release-notes.html
Security Fix(es):
- go-getter: command injection vulnerability (CVE-2022-26945)
- go-getter: unsafe download (issue 1 of 3) (CVE-2022-30321)
- go-getter: unsafe download (issue 2 of 3) (CVE-2022-30322)
- go-getter: unsafe download (issue 3 of 3) (CVE-2022-30323)
- nanoid: Information disclosure via valueOf() function (CVE-2021-23566)
- sanitize-url: XSS (CVE-2021-23648)
- minimist: prototype pollution (CVE-2021-44906)
- node-fetch: exposure of sensitive information to an unauthorized actor (CVE-2022-0235)
- prometheus/client_golang: Denial of service using InstrumentHandlerCounter (CVE-2022-21698)
- golang: crash in a golang.org/x/crypto/ssh server (CVE-2022-27191)
- go-getter: writes SSH credentials into logfile, exposing sensitive credentials to local uses (CVE-2022-29810)
- opencontainers: OCI manifest and index parsing confusion (CVE-2021-41190)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
You may download the oc tool and use it to inspect release image metadata as follows:
(For x86_64 architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.11.0-x86_64
The image digest is sha256:300bce8246cf880e792e106607925de0a404484637627edf5f517375517d54a4
(For aarch64 architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.11.0-aarch64
The image digest is sha256:29fa8419da2afdb64b5475d2b43dad8cc9205e566db3968c5738e7a91cf96dfe
(For s390x architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.11.0-s390x
The image digest is sha256:015d6180238b4024d11dfef6751143619a0458eccfb589f2058ceb1a6359dd46
(For ppc64le architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.11.0-ppc64le
The image digest is sha256:5052f8d5597c6656ca9b6bfd3de521504c79917aa80feb915d3c8546241f86ca
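The digests listed above pin each release image by content: the `sha256:` string is the hash of the image manifest, so any tampering with the manifest changes the digest. A sketch of that check, where `verify_digest` is a hypothetical helper and the manifest bytes are a stand-in (in practice the container runtime and `oc` perform this verification for you):

```python
import hashlib

def verify_digest(blob: bytes, pinned: str) -> bool:
    """Check that a blob matches a pinned OCI-style digest string.

    A pinned reference looks like 'sha256:<64 hex chars>' and is the
    sha256 of the manifest bytes it refers to.
    """
    algo, _, expected = pinned.partition(":")
    if algo != "sha256":
        raise ValueError("unsupported digest algorithm: " + algo)
    return hashlib.sha256(blob).hexdigest() == expected

manifest = b'{"schemaVersion": 2}'  # stand-in for a fetched image manifest
pinned = "sha256:" + hashlib.sha256(manifest).hexdigest()
print(verify_digest(manifest, pinned))  # -> True
```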
All OpenShift Container Platform 4.11 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift Console or the CLI oc command. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.11/updating/updating-cluster-cli.html
- Solution:
For OpenShift Container Platform 4.11 see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this asynchronous errata update:
https://docs.openshift.com/container-platform/4.11/release_notes/ocp-4-11-release-notes.html
Details on how to access this content are available at https://docs.openshift.com/container-platform/4.11/updating/updating-cluster-cli.html
- Bugs fixed (https://bugzilla.redhat.com/):
1817075 - MCC & MCO don't free leader leases during shut down -> 10 minutes of leader election timeouts
1822752 - cluster-version operator stops applying manifests when blocked by a precondition check
1823143 - oc adm release extract --command, --tools doesn't pull from localregistry when given a localregistry/image
1858418 - [OCPonRHV] OpenShift installer fails when Blank template is missing in oVirt/RHV
1859153 - [AWS] An IAM error occurred occasionally during the installation phase: Invalid IAM Instance Profile name
1896181 - [ovirt] install fails: due to terraform error "Cannot run VM. VM is being updated" on vm resource
1898265 - [OCP 4.5][AWS] Installation failed: error updating LB Target Group
1902307 - [vSphere] cloud labels management via cloud provider makes nodes not ready
1905850 - oc adm policy who-can failed to check the operatorcondition/status resource
1916279 - [OCPonRHV] Sometimes terraform installation fails on -failed to fetch Cluster(another terraform bug)
1917898 - [ovirt] install fails: due to terraform error "Tag not matched: expect but got " on vm resource
1918005 - [vsphere] If there are multiple port groups with the same name installation fails
1918417 - IPv6 errors after exiting crictl
1918690 - Should update the KCM resource-graph timely with the latest configure
1919980 - oVirt installer fails due to terraform error "Failed to wait for Templte(...) to become ok"
1921182 - InspectFailed: kubelet Failed to inspect image: rpc error: code = DeadlineExceeded desc = context deadline exceeded
1923536 - Image pullthrough does not pass 429 errors back to capable clients
1926975 - [aws-c2s] kube-apiserver crashloops due to missing cloud config
1928932 - deploy/route_crd.yaml in openshift/router uses deprecated v1beta1 CRD API
1932812 - Installer uses the terraform-provider in the Installer's directory if it exists
1934304 - MemoryPressure Top Pod Consumers seems to be 2x expected value
1943937 - CatalogSource incorrect parsing validation
1944264 - [ovn] CNO should gracefully terminate OVN databases
1944851 - List of ingress routes not cleaned up when routers no longer exist - take 2
1945329 - In k8s 1.21 bump conntrack 'should drop INVALID conntrack entries' tests are disabled
1948556 - Cannot read property 'apiGroup' of undefined error viewing operator CSV
1949827 - Kubelet bound to incorrect IPs, referring to incorrect NICs in 4.5.x
1957012 - Deleting the KubeDescheduler CR does not remove the corresponding deployment or configmap
1957668 - oc login does not show link to console
1958198 - authentication operator takes too long to pick up a configuration change
1958512 - No 1.25 shown in REMOVEDINRELEASE for apis audited with k8s.io/removed-release 1.25 and k8s.io/deprecated true
1961233 - Add CI test coverage for DNS availability during upgrades
1961844 - baremetal ClusterOperator installed by CVO does not have relatedObjects
1965468 - [OSP] Delete volume snapshots based on cluster ID in their metadata
1965934 - can not get new result with "Refresh off" if click "Run queries" again
1965969 - [aws] the public hosted zone id is not correct in the destroy log, while destroying a cluster which is using BYO private hosted zone.
1968253 - GCP CSI driver can provision volume with access mode ROX
1969794 - [OSP] Document how to use image registry PVC backend with custom availability zones
1975543 - [OLM] Remove stale cruft installed by CVO in earlier releases
1976111 - [tracker] multipathd.socket is missing start conditions
1976782 - Openshift registry starts to segfault after S3 storage configuration
1977100 - Pod failed to start with message "set CPU load balancing: readdirent /proc/sys/kernel/sched_domain/cpu66/domain0: no such file or directory"
1978303 - KAS pod logs show: [SHOULD NOT HAPPEN] ...failed to convert new object...CertificateSigningRequest) to smd typed: .status.conditions: duplicate entries for key [type=\"Approved\"]
1978798 - [Network Operator] Upgrade: The configuration to enable network policy ACL logging is missing on the cluster upgraded from 4.7->4.8
1979671 - Warning annotation for pods with cpu requests or limits on single-node OpenShift cluster without workload partitioning
1982737 - OLM does not warn on invalid CSV
1983056 - IP conflict while recreating Pod with fixed name
1984785 - LSO CSV does not contain disconnected annotation
1989610 - Unsupported data types should not be rendered on operand details page
1990125 - co/image-registry is degrade because ImagePrunerDegraded: Job has reached the specified backoff limit
1990384 - 502 error on "Observe -> Alerting" UI after disabled local alertmanager
1992553 - all the alert rules' annotations "summary" and "description" should comply with the OpenShift alerting guidelines
1994117 - Some hardcodes are detected at the code level in orphaned code
1994820 - machine controller doesn't send vCPU quota failed messages to cluster install logs
1995953 - Ingresscontroller change the replicas to scaleup first time will be rolling update for all the ingress pods
1996544 - AWS region ap-northeast-3 is missing in installer prompt
1996638 - Helm operator manager container restart when CR is creating&deleting
1997120 - test_recreate_pod_in_namespace fails - Timed out waiting for namespace
1997142 - OperatorHub: Filtering the OperatorHub catalog is extremely slow
1997704 - [osp][octavia lb] given loadBalancerIP is ignored when creating a LoadBalancer type svc
1999325 - FailedMount MountVolume.SetUp failed for volume "kube-api-access" : object "openshift-kube-scheduler"/"kube-root-ca.crt" not registered
1999529 - Must gather fails to gather logs for all the namespace if server doesn't have volumesnapshotclasses resource
1999891 - must-gather collects backup data even when Pods fails to be created
2000653 - Add hypershift namespace to exclude namespaces list in descheduler configmap
2002009 - IPI Baremetal, qemu-convert takes to long to save image into drive on slow/large disks
2002602 - Storageclass creation page goes blank when "Enable encryption" is clicked if there is a syntax error in the configmap
2002868 - Node exporter not able to scrape OVS metrics
2005321 - Web Terminal is not opened on Stage of DevSandbox when terminal instance is not created yet
2005694 - Removing proxy object takes up to 10 minutes for the changes to propagate to the MCO
2006067 - Objects are not valid as a React child
2006201 - ovirt-csi-driver-node pods are crashing intermittently
2007246 - Openshift Container Platform - Ingress Controller does not set allowPrivilegeEscalation in the router deployment
2007340 - Accessibility issues on topology - list view
2007611 - TLS issues with the internal registry and AWS S3 bucket
2007647 - oc adm release info --changes-from does not show changes in repos that squash-merge
2008486 - Double scroll bar shows up on dragging the task quick search to the bottom
2009345 - Overview page does not load from openshift console for some set of users after upgrading to 4.7.19
2009352 - Add image-registry usage metrics to telemeter
2009845 - Respect overrides changes during installation
2010361 - OpenShift Alerting Rules Style-Guide Compliance
2010364 - OpenShift Alerting Rules Style-Guide Compliance
2010393 - [sig-arch][Late] clients should not use APIs that are removed in upcoming releases [Suite:openshift/conformance/parallel]
2011525 - Rate-limit incoming BFD to prevent ovn-controller DoS
2011895 - Details about cloud errors are missing from PV/PVC errors
2012111 - LSO still try to find localvolumeset which is already deleted
2012969 - need to figure out why osupdatedstart to reboot is zero seconds
2013144 - Developer catalog category links could not be open in a new tab (sharing and open a deep link works fine)
2013461 - Import deployment from Git with s2i expose always port 8080 (Service and Pod template, not Route) if another Route port is selected by the user
2013734 - unable to label downloads route in openshift-console namespace
2013822 - ensure that the container-tools content comes from the RHAOS plashets
2014161 - PipelineRun logs are delayed and stuck on a high log volume
2014240 - Image registry uses ICSPs only when source exactly matches image
2014420 - Topology page is crashed
2014640 - Cannot change storage class of boot disk when cloning from template
2015023 - Operator objects are re-created even after deleting it
2015042 - Adding a template from the catalog creates a secret that is not owned by the TemplateInstance
2015356 - Different status shows on VM list page and details page
2015375 - PVC creation for ODF/IBM Flashsystem shows incorrect types
2015459 - [azure][openstack]When image registry configure an invalid proxy, registry pods are CrashLoopBackOff
2015800 - [IBM]Shouldn't change status.storage.bucket and status.storage.resourceKeyCRN when update sepc.stroage,ibmcos with invalid value
2016425 - Adoption controller generating invalid metadata.Labels for an already adopted Subscription resource
2016534 - externalIP does not work when egressIP is also present
2017001 - Topology context menu for Serverless components always open downwards
2018188 - VRRP ID conflict between keepalived-ipfailover and cluster VIPs
2018517 - [sig-arch] events should not repeat pathologically expand_less failures - s390x CI
2019532 - Logger object in LSO does not log source location accurately
2019564 - User settings resources (ConfigMap, Role, RB) should be deleted when a user is deleted
2020483 - Parameter $auto_interval_period is in Period drop-down list
2020622 - e2e-aws-upi and e2e-azure-upi jobs are not working
2021041 - [vsphere] Not found TagCategory when destroying ipi cluster
2021446 - openshift-ingress-canary is not reporting DEGRADED state, even though the canary route is not available and accessible
2022253 - Web terminal view is broken
2022507 - Pods stuck in OutOfpods state after running cluster-density
2022611 - Remove BlockPools(no use case) and Object(redundat with Overview) tab on the storagesystem page for NooBaa only and remove BlockPools tab for External mode deployment
2022745 - Cluster reader is not able to list NodeNetwork objects
2023295 - Must-gather tool gathering data from custom namespaces.
2023691 - ClusterIP internalTrafficPolicy does not work for ovn-kubernetes
2024427 - oc completion zsh doesn't auto complete
2024708 - The form for creating operational CRs is badly rendering filed names ("obsoleteCPUs" -> "Obsolete CP Us" )
2024821 - [Azure-File-CSI] need more clear info when requesting pvc with volumeMode Block
2024938 - CVE-2021-41190 opencontainers: OCI manifest and index parsing confusion
2025624 - Ingress router metrics endpoint serving old certificates after certificate rotation
2026356 - [IPI on Azure] The bootstrap machine type should be same as master
2026461 - Completed pods in Openshift cluster not releasing IP addresses and results in err: range is full unless manually deleted
2027603 - [UI] Dropdown doesn't close on it's own after arbiter zone selection on 'Capacity and nodes' page
2027613 - Users can't silence alerts from the dev console
2028493 - OVN-migration failed - ovnkube-node: error waiting for node readiness: timed out waiting for the condition
2028532 - noobaa-pg-db-0 pod stuck in Init:0/2
2028821 - Misspelled label in ODF management UI - MCG performance view
2029438 - Bootstrap node cannot resolve api-int because NetworkManager replaces resolv.conf
2029470 - Recover from suddenly appearing old operand revision WAS: kube-scheduler-operator test failure: Node's not achieving new revision
2029797 - Uncaught exception: ResizeObserver loop limit exceeded
2029835 - CSI migration for vSphere: Inline-volume tests failing
2030034 - prometheusrules.openshift.io: dial tcp: lookup prometheus-operator.openshift-monitoring.svc on 172.30.0.10:53: no such host
2030530 - VM created via customize wizard has single quotation marks surrounding its password
2030733 - wrong IP selected to connect to the nodes when ExternalCloudProvider enabled
2030776 - e2e-operator always uses quay master images during presubmit tests
2032559 - CNO allows migration to dual-stack in unsupported configurations
2032717 - Unable to download ignition after coreos-installer install --copy-network
2032924 - PVs are not being cleaned up after PVC deletion
2033482 - [vsphere] two variables in tf are undeclared and get warning message during installation
2033575 - monitoring targets are down after the cluster run for more than 1 day
2033711 - IBM VPC operator needs e2e csi tests for ibmcloud
2033862 - MachineSet is not scaling up due to an OpenStack error trying to create multiple ports with the same MAC address
2034147 - OpenShift VMware IPI Installation fails with Resource customization when corespersocket is unset and vCPU count is not a multiple of 4
2034296 - Kubelet and Crio fails to start during upgrde to 4.7.37
2034411 - [Egress Router] No NAT rules for ipv6 source and destination created in ip6tables-save
2034688 - Allow Prometheus/Thanos to return 401 or 403 when the request isn't authenticated
2034958 - [sig-network] Conntrack should be able to preserve UDP traffic when initial unready endpoints get ready
2035005 - MCD is not always removing in progress taint after a successful update
2035334 - [RFE] [OCPonRHV] Provision machines with preallocated disks
2035899 - Operator-sdk run bundle doesn't support arm64 env
2036202 - Bump podman to >= 3.3.0 so that setup of multiple credentials for a single registry which can be distinguished by their path will work
2036594 - [MAPO] Machine goes to failed state due to a momentary error of the cluster etcd
2036948 - SR-IOV Network Device Plugin should handle offloaded VF instead of supporting only PF
2037190 - dns operator status flaps between True/False/False and True/True/(False|True) after updating dnses.operator.openshift.io/default
2037447 - Ingress Operator is not closing TCP connections.
2037513 - I/O metrics from the Kubernetes/Compute Resources/Cluster Dashboard show as no datapoints found
2037542 - Pipeline Builder footer is not sticky and yaml tab doesn't use full height
2037610 - typo for the Terminated message from thanos-querier pod description info
2037620 - Upgrade playbook should quit directly when trying to upgrade RHEL-7 workers to 4.10
2037625 - AppliedClusterResourceQuotas can not be shown on project overview
2037626 - unable to fetch ignition file when scaleup rhel worker nodes on cluster enabled Tang disk encryption
2037628 - Add test id to kms flows for automation
2037721 - PodDisruptionBudgetAtLimit alert fired in SNO cluster
2037762 - Wrong ServiceMonitor definition is causing failure during Prometheus configuration reload and preventing changes from being applied
2037841 - [RFE] use /dev/ptp_hyperv on Azure/AzureStack
2038115 - Namespace and application bar is not sticky anymore
2038244 - Import from git ignore the given servername and could not validate On-Premises GitHub and BitBucket installations
2038405 - openshift-e2e-aws-workers-rhel-workflow in CI step registry broken
2038774 - IBM-Cloud OVN IPsec fails, IKE UDP ports and ESP protocol not in security group
2039135 - the error message is not clear when using "opm index prune" to prune a file-based index image
2039161 - Note about token for encrypted PVCs should be removed when only cluster wide encryption checkbox is selected
2039253 - ovnkube-node crashes on duplicate endpoints
2039256 - Domain validation fails when TLD contains a digit.
2039277 - Topology list view items are not highlighted on keyboard navigation
2039462 - Application tab in User Preferences dropdown menus are too wide.
2039477 - validation icon is missing from Import from git
2039589 - The toolbox command always ignores [command] the first time
2039647 - Some developer perspective links are not deep-linked causes developer to sometimes delete/modify resources in the wrong project
2040180 - Bug when adding a new table panel to a dashboard for OCP UI with only one value column
2040195 - Ignition fails to enable systemd units with backslash-escaped characters in their names
2040277 - ThanosRuleNoEvaluationFor10Intervals alert description is wrong
2040488 - OpenShift-Ansible BYOH Unit Tests are Broken
2040635 - CPU Utilisation is negative number for "Kubernetes / Compute Resources / Cluster" dashboard
2040654 - 'oc adm must-gather -- some_script' should exit with same non-zero code as the failed 'some_script' exits
2040779 - Nodeport svc not accessible when the backend pod is on a window node
2040933 - OCP 4.10 nightly build will fail to install if multiple NICs are defined on KVM nodes
2041133 - 'oc explain route.status.ingress.conditions' shows type 'Currently only Ready' but actually is 'Admitted'
2041454 - Garbage values accepted for --reference-policy in oc import-image without any error
2041616 - Ingress operator tries to manage DNS of additional ingresscontrollers that are not under clusters basedomain, which can't work
2041769 - Pipeline Metrics page not showing data for normal user
2041774 - Failing git detection should not recommend Devfiles as import strategy
2041814 - The KubeletConfigController wrongly process multiple confs for a pool
2041940 - Namespace pre-population not happening till a Pod is created
2042027 - Incorrect feedback for "oc label pods --all"
2042348 - Volume ID is missing in output message when expanding volume which is not mounted.
2042446 - CSIWithOldVSphereHWVersion alert recurring despite upgrade to vmx-15
2042501 - use lease for leader election
2042587 - ocm-operator: Improve reconciliation of CA ConfigMaps
2042652 - Unable to deploy hw-event-proxy operator
2042838 - The status of container is not consistent on Container details and pod details page
2042852 - Topology toolbars are unaligned to other toolbars
2042999 - A pod cannot reach kubernetes.default.svc.cluster.local cluster IP
2043035 - Wrong error code provided when request contains invalid argument
2043068 - available of text disappears in Utilization item if x is 0
2043080 - openshift-installer intermittent failure on AWS with Error: InvalidVpcID.NotFound: The vpc ID 'vpc-123456789' does not exist
2043094 - ovnkube-node not deleting stale conntrack entries when endpoints go away
2043118 - Host should transition through Preparing when HostFirmwareSettings changed
2043132 - Add a metric when vsphere csi storageclass creation fails
2043314 - oc debug node does not meet compliance requirement
2043336 - Creating multi SriovNetworkNodePolicy cause the worker always be draining
2043428 - Address Alibaba CSI driver operator review comments
2043533 - Update ironic, inspector, and ironic-python-agent to latest bugfix release
2043672 - [MAPO] root volumes not working
2044140 - When 'oc adm upgrade --to-image ...' rejects an update as not recommended, it should mention --allow-explicit-upgrade
2044207 - [KMS] The data in the text box does not get cleared on switching the authentication method
2044227 - Test Managed cluster should only include cluster daemonsets that have maxUnavailable update of 10 or 33 percent fails
2044412 - Topology list misses separator lines and hover effect let the list jump 1px
2044421 - Topology list does not allow selecting an application group anymore
2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor
2044803 - Unify button text style on VM tabs
2044824 - Failing test in periodics: [sig-network] Services should respect internalTrafficPolicy=Local Pod and Node, to Pod (hostNetwork: true) [Feature:ServiceInternalTrafficPolicy] [Skipped:Network/OVNKubernetes] [Suite:openshift/conformance/parallel] [Suite:k8s]
2045065 - Scheduled pod has nodeName changed
2045073 - Bump golang and build images for local-storage-operator
2045087 - Failed to apply sriov policy on intel nics
2045551 - Remove enabled FeatureGates from TechPreviewNoUpgrade
2045559 - API_VIP moved when kube-api container on another master node was stopped
2045577 - [ocp 4.9 | ovn-kubernetes] ovsdb_idl|WARN|transaction error: {"details":"cannot delete Datapath_Binding row 29e48972-xxxx because of 2 remaining reference(s)","error":"referential integrity violation
2045872 - SNO: cluster-policy-controller failed to start due to missing serving-cert/tls.crt
2045880 - CVE-2022-21698 prometheus/client_golang: Denial of service using InstrumentHandlerCounter
2046133 - [MAPO]IPI proxy installation failed
2046156 - Network policy: preview of affected pods for non-admin shows empty popup
2046157 - Still uses pod-security.admission.config.k8s.io/v1alpha1 in admission plugin config
2046191 - Opeartor pod is missing correct qosClass and priorityClass
2046277 - openshift-installer intermittent failure on AWS with "Error: Provider produced inconsistent result after apply" when creating the module.vpc.aws_subnet.private_subnet[0] resource
2046319 - oc debug cronjob command failed with error "unable to extract pod template from type v1.CronJob".
2046435 - Better Devfile Import Strategy support in the 'Import from Git' flow
2046496 - Awkward wrapping of project toolbar on mobile
2046497 - Re-enable TestMetricsEndpoint test case in console operator e2e tests
2046498 - "All Projects" and "all applications" use different casing on topology page
2046591 - Auto-update boot source is not available while create new template from it
2046594 - "Requested template could not be found" while creating VM from user-created template
2046598 - Auto-update boot source size unit is byte on customize wizard
2046601 - Cannot create VM from template
2046618 - Start last run action should contain current user name in the started-by annotation of the PLR
2046662 - Should upgrade the go version to be 1.17 for example go operator memcached-operator
2047197 - Sould upgrade the operator_sdk.util version to "0.4.0" for the "osdk_metric" module
2047257 - [CP MIGRATION] Node drain failure during control plane node migration
2047277 - Storage status is missing from status card of virtualization overview
2047308 - Remove metrics and events for master port offsets
2047310 - Running VMs per template card needs empty state when no VMs exist
2047320 - New route annotation to show another URL or hide topology URL decorator doesn't work for Knative Services
2047335 - 'oc get project' caused 'Observed a panic: cannot deep copy core.NamespacePhase' when AllRequestBodies is used
2047362 - Removing prometheus UI access breaks origin test
2047445 - ovs-configure mis-detecting the ipv6 status on IPv4 only cluster causing Deployment failure
2047670 - Installer should pre-check that the hosted zone is not associated with the VPC and throw the error message.
2047702 - Issue described on bug #2013528 reproduced: mapi_current_pending_csr is always set to 1 on OpenShift Container Platform 4.8
2047710 - [OVN] ovn-dbchecker CrashLoopBackOff and sbdb jsonrpc unix socket receive error
2047732 - [IBM]Volume is not deleted after destroy cluster
2047741 - openshift-installer intermittent failure on AWS with "Error: Provider produced inconsistent result after apply" when creating the module.masters.aws_network_interface.master[1] resource
2047790 - [sig-network][Feature:Router] The HAProxy router should override the route host for overridden domains with a custom value [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2047799 - release-openshift-ocp-installer-e2e-aws-upi-4.9
2047870 - Prevent redundant queries of BIOS settings in HostFirmwareController
2047895 - Fix architecture naming in oc adm release mirror for aarch64
2047911 - e2e: Mock CSI tests fail on IBM ROKS clusters
2047913 - [sig-network][Feature:Router] The HAProxy router should override the route host for overridden domains with a custom value [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2047925 - [FJ OCP4.10 Bug]: IRONIC_KERNEL_PARAMS does not contain coreos_kernel_params during iPXE boot
2047935 - [4.11] Bootimage bump tracker
2047998 - [alicloud] CCM deploys alibaba-cloud-controller-manager from quay.io/openshift/origin-
2048059 - Service Level Agreement (SLA) always show 'Unknown'
2048067 - [IPI on Alibabacloud] "Platform Provisioning Check" tells '"ap-southeast-6": enhanced NAT gateway is not supported', which seems false
2048186 - Image registry operator panics when finalizes config deletion
2048214 - Can not push images to image-registry when enabling KMS encryption in AlibabaCloud
2048219 - MetalLB: User should not be allowed add same bgp advertisement twice in BGP address pool
2048221 - Capitalization of titles in the VM details page is inconsistent.
2048222 - [AWS GovCloud] Cluster can not be installed on AWS GovCloud regions via terminal interactive UI.
2048276 - Cypress E2E tests fail due to a typo in test-cypress.sh
2048333 - prometheus-adapter becomes inaccessible during rollout
2048352 - [OVN] node does not recover after NetworkManager restart, NotReady and unreachable
2048442 - [KMS] UI does not have option to specify kube auth path and namespace for cluster wide encryption
2048451 - Custom serviceEndpoints in install-config are reported to be unreachable when environment uses a proxy
2048538 - Network policies are not implemented or updated by OVN-Kubernetes
2048541 - incorrect rbac check for install operator quick starts
2048563 - Leader election conventions for cluster topology
2048575 - IP reconciler cron job failing on single node
2048686 - Check MAC address provided on the install-config.yaml file
2048687 - All bare metal jobs are failing now due to End of Life of centos 8
2048793 - Many Conformance tests are failing in OCP 4.10 with Kuryr
2048803 - CRI-O seccomp profile out of date
2048824 - [IBMCloud] ibm-vpc-block-csi-node does not specify an update strategy, only resource requests, or priority class
2048841 - [ovn] Missing lr-policy-list and snat rules for egressip when new pods are added
2048955 - Alibaba Disk CSI Driver does not have CI
2049073 - AWS EFS CSI driver should use the trusted CA bundle when cluster proxy is configured
2049078 - Bond CNI: Failed to attach Bond NAD to pod
2049108 - openshift-installer intermittent failure on AWS with 'Error: Error waiting for NAT Gateway (nat-xxxxx) to become available'
2049117 - e2e-metal-ipi-serial-ovn-ipv6 is failing frequently
2049133 - oc adm catalog mirror throws 'missing signature key' error when using file://local/index
2049142 - Missing "app" label
2049169 - oVirt CSI driver should use the trusted CA bundle when cluster proxy is configured
2049234 - ImagePull fails with error "unable to pull manifest from example.com/busy.box:v5 invalid reference format"
2049410 - external-dns-operator creates provider section, even when not requested
2049483 - Sidepanel for Connectors/workloads in topology shows invalid tabs
2049613 - MTU migration on SDN IPv4 causes API alerts
2049671 - system:serviceaccount:openshift-cluster-csi-drivers:aws-ebs-csi-driver-operator trying to GET and DELETE /api/v1/namespaces/openshift-cluster-csi-drivers/configmaps/kube-cloud-config which does not exist
2049687 - superfluous apirequestcount entries in audit log
2049775 - cloud-provider-config change not applied when ExternalCloudProvider enabled
2049787 - (dummy bug) ovn-kubernetes ExternalTrafficPolicy still SNATs
2049832 - ContainerCreateError when trying to launch large (>500) numbers of pods across nodes
2049872 - cluster storage operator AWS credentialsrequest lacks KMS privileges
2049889 - oc new-app --search nodejs warns about access to sample content on quay.io
2050005 - Plugin module IDs can clash with console module IDs causing runtime errors
2050011 - Observe > Metrics page: Timespan text input and dropdown do not align
2050120 - Missing metrics in kube-state-metrics
2050146 - Installation on PSI fails with: 'openstack platform does not have the required standard-attr-tag network extension'
2050173 - [aws-ebs-csi-driver] Merge upstream changes since v1.2.0
2050180 - [aws-efs-csi-driver] Merge upstream changes since v1.3.2
2050300 - panic in cluster-storage-operator while updating status
2050332 - Malformed ClusterClaim lifetimes cause the clusterclaims-controller to silently fail to reconcile all clusterclaims
2050335 - azure-disk failed to mount with error special device does not exist
2050345 - alert data for burn budget needs to be updated to prevent regression
2050407 - revert "force cert rotation every couple days for development" in 4.11
2050409 - ip-reconcile job is failing consistently
2050452 - Update osType and hardware version used by RHCOS OVA to indicate it is a RHEL 8 guest
2050466 - machine config update with invalid container runtime config should be more robust
2050637 - Blog Link not re-directing to the intended website in the last modal in the Dev Console Onboarding Tour
2050698 - After upgrading the cluster the console still show 0 of N, 0% progress for worker nodes
2050707 - up test for prometheus pod looks too far in the past
2050767 - Vsphere upi tries to access vsphere during manifests generation phase
2050853 - CVE-2021-23566 nanoid: Information disclosure via valueOf() function
2050882 - Crio appears to be coredumping in some scenarios
2050902 - not all resources created during import have common labels
2050946 - Cluster-version operator fails to notice TechPreviewNoUpgrade featureSet change after initialization-lookup error
2051320 - Need to build ose-aws-efs-csi-driver-operator-bundle-container image for 4.11
2051333 - [aws] records in public hosted zone and BYO private hosted zone were not deleted.
2051377 - Unable to switch vfio-pci to netdevice in policy
2051378 - Template wizard is crashed when there are no templates existing
2051423 - migrate loadbalancers from amphora to ovn not working
2051457 - [RFE] PDB for cloud-controller-manager to avoid going too many replicas down
2051470 - prometheus: Add validations for relabel configs
2051558 - RoleBinding in project without subject is causing "Project access" page to fail
2051578 - Sort is broken for the Status and Version columns on the Cluster Settings > ClusterOperators page
2051583 - sriov must-gather image doesn't work
2051593 - Summary Interval Hardcoded in PTP Operator if Set in the Global Body Instead of Command Line
2051611 - Remove Check which enforces summary_interval must match logSyncInterval
2051642 - Remove "Tech-Preview" Label for the Web Terminal GA release
2051657 - Remove 'Tech preview' from minimal deployment Storage System creation
2051718 - MetaLLB: Validation Webhook: BGPPeer hold time is allowed to be set to less than 3s
2051722 - MetalLB: BGPPeer object does not have ability to set ebgpMultiHop
2051881 - [vSphere CSI driver Operator] RWX volumes counts metrics vsphere_rwx_volumes_total not valid
2051954 - Allow changing of policyAuditConfig ratelimit post-deployment
2051969 - Need to build local-storage-operator-metadata-container image for 4.11
2051985 - An APIRequestCount without dots in the name can cause a panic
2052016 - MetalLB: Webhook Validation: Two BGPPeers instances can have different router ID set.
2052034 - Can't start correct debug pod using pod definition yaml in OCP 4.8
2052055 - Whereabouts should implement client-go 1.22+
2052056 - Static pod installer should throttle creating new revisions
2052071 - local storage operator metrics target down after upgrade
2052095 - Infinite OAuth redirect loop post-upgrade to 4.10.0-rc.1
2052270 - FSyncControllerDegraded has "treshold" -> "threshold" typos
2052309 - [IBM Cloud] ibm-vpc-block-csi-controller does not specify an update strategy, priority class, or only resource requests
2052332 - Probe failures and pod restarts during 4.7 to 4.8 upgrade
2052393 - Failed to scaleup RHEL machine against OVN cluster due to jq tool is required by configure-ovs.sh
2052398 - 4.9 to 4.10 upgrade fails for ovnkube-masters
2052415 - Pod density test causing problems when using kube-burner
2052513 - Failing webhooks will block an upgrade to 4.10 mid-way through the upgrade.
2052578 - Create new app from a private git repository using 'oc new app' with basic auth does not work.
2052595 - Remove dev preview badge from IBM FlashSystem deployment windows
2052618 - Node reboot causes duplicate persistent volumes
2052671 - Add Sprint 214 translations
2052674 - Remove extra spaces
2052700 - kube-controller-manger should use configmap lease
2052701 - kube-scheduler should use configmap lease
2052814 - go fmt fails in OSM after migration to go 1.17
2052840 - IMAGE_BUILDER=docker make test-e2e-operator-ocp runs with podman instead of docker
2052953 - Observe dashboard always opens for last viewed workload instead of the selected one
2052956 - Installing virtualization operator duplicates the first action on workloads in topology
2052975 - High cpu load on Juniper Qfx5120 Network switches after upgrade to Openshift 4.8.26
2052986 - Console crashes when Mid cycle hook in Recreate strategy(edit deployment/deploymentConfig) selects Lifecycle strategy as "Tags the current image as an image stream tag if the deployment succeeds"
2053006 - [ibm]Operator storage PROGRESSING and DEGRADED is true during fresh install for ocp4.11
2053104 - [vSphere CSI driver Operator] hw_version_total metric update wrong value after upgrade nodes hardware version from vmx-13 to vmx-15
2053112 - nncp status is unknown when nnce is Progressing
2053118 - nncp Available condition reason should be exposed in oc get
2053168 - Ensure the core dynamic plugin SDK package has correct types and code
2053205 - ci-openshift-cluster-network-operator-master-e2e-agnostic-upgrade is failing most of the time
2053304 - Debug terminal no longer works in admin console
2053312 - requestheader IDP test doesn't wait for cleanup, causing high failure rates
2053334 - rhel worker scaleup playbook failed because missing some dependency of podman
2053343 - Cluster Autoscaler not scaling down nodes which seem to qualify for scale-down
2053491 - nmstate interprets interface names as float64 and subsequently crashes on state update
2053501 - Git import detection does not happen for private repositories
2053582 - inability to detect static lifecycle failure
2053596 - [IBM Cloud] Storage IOPS limitations and lack of IPI ETCD deployment options trigger leader election during cluster initialization
2053609 - LoadBalancer SCTP service leaves stale conntrack entry that causes issues if service is recreated
2053622 - PDB warning alert when CR replica count is set to zero
2053685 - Topology performance: Immutable .toJSON consumes a lot of CPU time when rendering a large topology graph (~100 nodes)
2053721 - When using RootDeviceHint rotational setting the host can fail to provision
2053922 - [OCP 4.8][OVN] pod interface: error while waiting on OVS.Interface.external-ids
2054095 - [release-4.11] Gather images.config.openshift.io cluster resource definition
2054197 - The ProjectHelmChartRepository schema has merged but has not been initialized in the cluster yet
2054200 - Custom created services in openshift-ingress removed even though the services are not of type LoadBalancer
2054238 - console-master-e2e-gcp-console is broken
2054254 - vSphere test failure: [Serial] [sig-auth][Feature:OAuthServer] [RequestHeaders] [IdP] test RequestHeaders IdP [Suite:openshift/conformance/serial]
2054285 - Services other than knative service also shows as KSVC in add subscription/trigger modal
2054319 - must-gather | gather_metallb_logs can't detect metallb pod
2054351 - Restart of ptp4l/phc2sys on change of PTPConfig generates socket error more than once in the event framework
2054385 - redhat-operator index image build failed with AMQ brew build - amq-interconnect-operator-metadata-container-1.10.13
2054564 - DPU network operator 4.10 branch need to sync with master
2054630 - cancel create silence from kebab menu of alerts page navigates to the previous page
2054693 - Error deploying HorizontalPodAutoscaler with oc new-app command in OpenShift 4
2054701 - [MAPO] Events are not created for MAPO machines
2054705 - [tracker] nf_reinject calls nf_queue_entry_free on an already freed entry->state
2054735 - Bad link in CNV console
2054770 - IPI baremetal deployment metal3 pod crashes when using capital letters in hosts bootMACAddress
2054787 - SRO controller goes to CrashLoopBackOff status when the pull-secret does not have the correct permissions
2054950 - A large number is showing on disk size field
2055305 - Thanos Querier high CPU and memory usage till OOM
2055386 - MetalLB changes the shared external IP of a service upon updating the externalTrafficPolicy definition
2055433 - Unable to create br-ex as gateway is not found
2055470 - Ingresscontroller LB scope change behaviour differs for different values of aws-load-balancer-internal annotation
2055492 - The default YAML on vm wizard is not latest
2055601 - installer did not destroy .app DNS record in an IPI on ASH install
2055702 - Enable Serverless tests in CI
2055723 - CCM operator doesn't deploy resources after enabling TechPreviewNoUpgrade feature set.
2055729 - NodePerfCheck fires and stays active on momentary high latency
2055814 - Custom dynamic extension point causes runtime and compile time error
2055861 - cronjob collect-profiles failure leads node to reach OutOfpods status
2055980 - [dynamic SDK][internal] console plugin SDK does not support table actions
2056454 - Implement preallocated disks for oVirt in the cluster API provider
2056460 - Implement preallocated disks for oVirt in the OCP installer
2056496 - If image does not exist for builder image then upload jar form crashes
2056519 - unable to install IPI PRIVATE OpenShift cluster in Azure due to organization policies
2056607 - Running kubernetes-nmstate handler e2e tests stuck on OVN clusters
2056752 - Better to name the oc-mirror version info with more information like the oc version --client
2056802 - "enforcedLabelLimit|enforcedLabelNameLengthLimit|enforcedLabelValueLengthLimit" do not take effect
2056841 - [UI] [DR] Web console update is available pop-up is seen multiple times on Hub cluster where ODF operator is not installed and unnecessarily it pop-up on the Managed cluster as well where ODF operator is installed
2056893 - incorrect warning for --to-image in oc adm upgrade help
2056967 - MetalLB: speaker metrics is not updated when deleting a service
2057025 - Resource requests for the init-config-reloader container of prometheus-k8s- pods are too high
2057054 - SDK: k8s methods resolves into Response instead of the Resource
2057079 - [cluster-csi-snapshot-controller-operator] CI failure: events should not repeat pathologically
2057101 - oc commands working with images print an incorrect and inappropriate warning
2057160 - configure-ovs selects wrong interface on reboot
2057183 - OperatorHub: Missing "valid subscriptions" filter
2057251 - response code for Pod count graph changed from 422 to 200 periodically for about 30 minutes if pod is rescheduled
2057358 - [Secondary Scheduler] - cannot build bundle index image using the secondary scheduler operator bundle
2057387 - [Secondary Scheduler] - olm.skiprange, com.redhat.openshift.versions is incorrect and no minkubeversion
2057403 - CMO logs show forbidden: User "system:serviceaccount:openshift-monitoring:cluster-monitoring-operator" cannot get resource "replicasets" in API group "apps" in the namespace "openshift-monitoring"
2057495 - Alibaba Disk CSI driver does not provision small PVCs
2057558 - Marketplace operator polls too frequently for cluster operator status changes
2057633 - oc rsync reports misleading error when container is not found
2057642 - ClusterOperator status.conditions[].reason "etcd disk metrics exceeded..." should be a CamelCase slug
2057644 - FSyncControllerDegraded latches True, even after fsync latency recovers on all members
2057696 - Removing console still blocks OCP install from completing
2057762 - ingress operator should report Upgradeable False to remind user before upgrade to 4.10 when Non-SAN certs are used
2057832 - expr for record rule: "cluster:telemetry_selected_series:count" is improper
2057967 - KubeJobCompletion does not account for possible job states
2057990 - Add extra debug information to image signature workflow test
2057994 - SRIOV-CNI failed to load netconf: LoadConf(): failed to get VF information
2058030 - On OCP 4.10+ using OVNK8s on BM IPI, nodes register as localhost.localdomain
2058217 - [vsphere-problem-detector-operator] 'vsphere_rwx_volumes_total' metric name make confused
2058225 - openshift_csi_share_ metrics are not found from telemeter server
2058282 - Websockets stop updating during cluster upgrades
2058291 - CI builds should have correct version of Kube without needing to push tags every time
2058368 - Openshift OVN-K got restarted multiple times with the error " ovsdb-server/memory-trim-on-compaction on'' failed: exit status 1 and " ovndbchecker.go:118] unable to turn on memory trimming for SB DB, stderr " , cluster unavailable
2058370 - e2e-aws-driver-toolkit CI job is failing
2058421 - 4.9.23-s390x-machine-os-content manifest invalid when mirroring content for disconnected install
2058424 - ConsolePlugin proxy always passes Authorization header even if authorize property is omitted or false
2058623 - Bootstrap server dropdown menu in Create Event Source- KafkaSource form is empty even if it's created
2058626 - Multiple Azure upstream kube fsgroupchangepolicy tests are permafailing expecting gid "1000" but getting "root"
2058671 - whereabouts IPAM CNI ip-reconciler cronjob specification requires hostnetwork, api-int lb usage & proper backoff
2058692 - [Secondary Scheduler] Creating secondaryscheduler instance fails with error "key failed with : secondaryschedulers.operator.openshift.io "secondary-scheduler" not found"
2059187 - [Secondary Scheduler] - key failed with : serviceaccounts "secondary-scheduler" is forbidden
2059212 - [tracker] Backport https://github.com/util-linux/util-linux/commit/eab90ef8d4f66394285e0cff1dfc0a27242c05aa
2059213 - ART cannot build installer images due to missing terraform binaries for some architectures
2059338 - A fully upgraded 4.10 cluster defaults to HW-13 hardware version even if HW-15 is default (and supported)
2059490 - The operator image in CSV file of the ART DPU network operator bundle is incorrect
2059567 - vMedia based IPI installation of OpenShift fails on Nokia servers due to issues with virtual media attachment and boot source override
2059586 - (release-4.11) Insights operator doesn't reconcile clusteroperator status condition messages
2059654 - Dynamic demo plugin proxy example out of date
2059674 - Demo plugin fails to build
2059716 - cloud-controller-manager flaps operator version during 4.9 -> 4.10 update
2059791 - [vSphere CSI driver Operator] didn't update 'vsphere_csi_driver_error' metric value when fixed the error manually
2059840 - [LSO]Could not gather logs for pod diskmaker-discovery and diskmaker-manager
2059943 - MetalLB: Move CI config files to metallb repo from dev-scripts repo
2060037 - Configure logging level of FRR containers
2060083 - CMO doesn't react to changes in clusteroperator console
2060091 - CMO produces invalid alertmanager statefulset if console cluster .status.consoleURL is unset
2060133 - [OVN RHEL upgrade] could not find IP addresses: failed to lookup link br-ex: Link not found
2060147 - RHEL8 Workers Need to Ensure libseccomp is up to date at install time
2060159 - LGW: External->Service of type ETP=Cluster doesn't go to the node
2060329 - Detect unsupported amount of workloads before rendering a lazy or crashing topology
2060334 - Azure VNET lookup fails when the NIC subnet is in a different resource group
2060361 - Unable to enumerate NICs due to missing the 'primary' field due to security restrictions
2060406 - Test 'operators should not create watch channels very often' fails
2060492 - Update PtpConfigSlave source-crs to use network_transport L2 instead of UDPv4
2060509 - Incorrect installation of ibmcloud vpc csi driver in IBM Cloud ROKS 4.10
2060532 - LSO e2e tests are run against default image and namespace
2060534 - openshift-apiserver pod in crashloop due to unable to reach kubernetes svc ip
2060549 - ErrorAddingLogicalPort: duplicate IP found in ECMP Pod route cache!
2060553 - service domain can't be resolved when networkpolicy is used in OCP 4.10-rc
2060583 - Remove Console internal-kubevirt plugin SDK package
2060605 - Broken access to public images: Unable to connect to the server: no basic auth credentials
2060617 - IBMCloud destroy DNS regex not strict enough
2060687 - Azure Ci: SubscriptionDoesNotSupportZone - does not support availability zones at location 'westus'
2060697 - [AWS] partitionNumber cannot work for specifying Partition number
2060714 - [DOCS] Change source_labels to sourceLabels in "Configuring remote write storage" section
2060837 - [oc-mirror] Catalog merging error when two or more bundles does not have a set Replace field
2060894 - Preceding/Trailing Whitespaces In Form Elements on the add page
2060924 - Console white-screens while using debug terminal
2060968 - Installation failing due to ironic-agent.service not starting properly
2060970 - Bump recommended FCOS to 35.20220213.3.0
2061002 - Conntrack entry is not removed for LoadBalancer IP
2061301 - Traffic Splitting Dialog is Confusing With Only One Revision
2061303 - Cachito request failure with vendor directory is out of sync with go.mod/go.sum
2061304 - workload info gatherer - don't serialize empty images map
2061333 - White screen for Pipeline builder page
2061447 - [GSS] local pv's are in terminating state
2061496 - etcd RecentBackup=Unknown ControllerStarted contains no message string
2061527 - [IBMCloud] infrastructure asset missing CloudProviderType
2061544 - AzureStack is hard-coded to use Standard_LRS for the disk type
2061549 - AzureStack install with internal publishing does not create api DNS record
2061611 - [upstream] The marker of KubeBuilder doesn't work if it is close to the code
2061732 - Cinder CSI crashes when API is not available
2061755 - Missing breadcrumb on the resource creation page
2061833 - A single worker can be assigned to multiple baremetal hosts
2061891 - [IPI on IBMCLOUD] missing 'br-sao' region in openshift installer
2061916 - mixed ingress and egress policies can result in half-isolated pods
2061918 - Topology Sidepanel style is broken
2061919 - Egress Ip entry stays on node's primary NIC post deletion from hostsubnet
2062007 - MCC bootstrap command lacks template flag
2062126 - IPfailover pod is crashing during creation showing keepalived_script doesn't exist
2062151 - Add RBAC for 'infrastructures' to operator bundle
2062355 - kubernetes-nmstate resources and logs not included in must-gathers
2062459 - Ingress pods scheduled on the same node
2062524 - [Kamelet Sink] Topology crashes on click of Event sink node if the resource is created source to Uri over ref
2062558 - Egress IP with openshift sdn is not functional on worker node.
2062568 - CVO does not trigger new upgrade again after fail to update to unavailable payload
2062645 - configure-ovs: don't restart networking if not necessary
2062713 - Special Resource Operator(SRO) - No sro_used_nodes metric
2062849 - hw event proxy is not binding on ipv6 local address
2062920 - Project selector is too tall with only a few projects
2062998 - AWS GovCloud regions are recognized as the unknown regions
2063047 - Configuring a full-path query log file in CMO breaks Prometheus with the latest version of the operator
2063115 - ose-aws-efs-csi-driver has invalid dependency in go.mod
2063164 - metal-ipi-ovn-ipv6 Job Permafailing and Blocking OpenShift 4.11 Payloads: insights operator is not available
2063183 - DefragDialTimeout is set to low for large scale OpenShift Container Platform - Cluster
2063194 - cluster-autoscaler-default will fail when automated etcd defrag is running on large scale OpenShift Container Platform 4 - Cluster
2063321 - [OVN]After reboot egress node, lr-policy-list was not correct, some duplicate records or missed internal IPs
2063324 - MCO template output directories created with wrong mode causing render failure in unprivileged container environments
2063375 - ptp operator upgrade from 4.9 to 4.10 stuck at pending due to service account requirements not met
2063414 - on OKD 4.10, when image-registry is enabled, the /etc/hosts entry is missing on some nodes
2063699 - Builds - Builds - Logs: i18n misses.
2063708 - Builds - Builds - Logs: translation correction needed.
2063720 - Metallb EBGP neighbor stuck in active until adding ebgp-multihop (directly connected neighbors)
2063732 - Workloads - StatefulSets : I18n misses
2063747 - When building a bundle, the push command fails because it passes a redundant "IMG=" on the CLI
2063753 - User Preferences - Language - Language selection : Page refresh required to change the UI into selected Language.
2063756 - User Preferences - Applications - Insecure traffic : i18n misses
2063795 - Remove go-ovirt-client go.mod replace directive
2063829 - During an IPI install with the 4.10.4 installer on vSphere, getting "Check": platform.vsphere.network: Invalid value: "VLAN_3912": unable to find network provided"
2063831 - etcd quorum pods landing on same node
2063897 - Community tasks not shown in pipeline builder page
2063905 - PrometheusOperatorWatchErrors alert may fire shortly in case of transient errors from the API server
2063938 - Using the hard coded rest-mapper in library-go
2063955 - cannot download operator catalogs due to missing images
2063957 - User Management - Users : While Impersonating user, UI is not switching into user's set language
2064024 - SNO OCP upgrade with DU workload stuck at waiting for kube-apiserver static pod
2064170 - [Azure] Missing punctuation in the installconfig.controlPlane.platform.azure.osDisk explain
2064239 - Virtualization Overview page turns into blank page
2064256 - The Knative traffic distribution doesn't update percentage in sidebar
2064553 - UI should prefer to use the virtio-win configmap than v2v-vmware configmap for windows creation
2064596 - Fix the hubUrl docs link in pipeline quicksearch modal
2064607 - Pipeline builder makes too many (100+) API calls upfront
2064613 - [OCPonRHV]- after few days that cluster is alive we got error in storage operator
2064693 - [IPI][OSP] Openshift-install fails to find the shiftstack cloud defined in clouds.yaml in the current directory
2064702 - CVE-2022-27191 golang: crash in a golang.org/x/crypto/ssh server
2064705 - the alertmanagerconfig validation catches the wrong value for invalid field
2064744 - Errors trying to use the Debug Container feature
2064984 - Update error message for label limits
2065076 - Access monitoring Routes based on monitoring-shared-config creates wrong URL
2065160 - Possible leak of load balancer targets on AWS Machine API Provider
2065224 - Configuration for cloudFront in image-registry operator configuration is ignored & duration is corrupted
2065290 - CVE-2021-23648 sanitize-url: XSS
2065338 - VolumeSnapshot creation date sorting is broken
2065507 - oc adm upgrade should return ReleaseAccepted condition to show upgrade status.
2065510 - [AWS] failed to create cluster on ap-southeast-3
2065513 - Dev Perspective -> Project Dashboard shows Resource Quotas which are a bit misleading, and too many decimal places
2065547 - (release-4.11) Gather kube-controller-manager pod logs with garbage collector errors
2065552 - [AWS] Failed to install cluster on AWS ap-southeast-3 region due to image-registry panic error
2065577 - user with user-workload-monitoring-config-edit role can not create user-workload-monitoring-config configmap
2065597 - Cinder CSI is not configurable
2065682 - Remote write relabel config adds label __tmp_openshift_cluster_id to all metrics
2065689 - Internal Image registry with GCS backend does not redirect client
2065749 - Kubelet slowly leaking memory and pods eventually unable to start
2065785 - ip-reconciler job does not complete, halts node drain
2065804 - Console backend check for Web Terminal Operator incorrectly returns HTTP 204
2065806 - stop considering Mint mode as supported on Azure
2065840 - the cronjob object is created with a wrong api version batch/v1beta1 when created via the openshift console
2065893 - [4.11] Bootimage bump tracker
2066009 - CVE-2021-44906 minimist: prototype pollution
2066232 - e2e-aws-workers-rhel8 is failing on ansible check
2066418 - [4.11] Update channels information link is taking to a 404 error page
2066444 - The "ingress" clusteroperator's relatedObjects field has kind names instead of resource names
2066457 - Prometheus CI failure: 503 Service Unavailable
2066463 - [IBMCloud] failed to list DNS zones: Exactly one of ApiKey or RefreshToken must be specified
2066605 - coredns template block matches cluster API too loosely
2066615 - Downstream OSDK still uses upstream image for Hybrid type operator
2066619 - The GitCommit of the oc-mirror version is not correct
2066665 - [ibm-vpc-block] Unable to change default storage class
2066700 - [node-tuning-operator] - Minimize wildcard/privilege Usage in Cluster and Local Roles
2066754 - Cypress reports for core tests are not captured
2066782 - Attached disk keeps in loading status when add disk to a power off VM by non-privileged user
2066865 - Flaky test: In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
2066886 - openshift-apiserver pods never going NotReady
2066887 - Dependabot alert: Path traversal in github.com/valyala/fasthttp
2066889 - Dependabot alert: Path traversal in github.com/valyala/fasthttp
2066923 - No rule to make target 'docker-push' when building the SRO bundle
2066945 - SRO appends "arm64" instead of "aarch64" to the kernel name and it doesn't match the DTK
2067004 - CMO contains grafana image though grafana is removed
2067005 - Prometheus rule contains grafana though grafana is removed
2067062 - should update prometheus-operator resources version
2067064 - RoleBinding in Developer Console is dropping all subjects when editing
2067155 - Incorrect operator display name shown in pipelines quickstart in devconsole
2067180 - Missing i18n translations
2067298 - Console 4.10 operand form refresh
2067312 - PPT event source is lost when received by the consumer
2067384 - OCP 4.10 should be firing APIRemovedInNextEUSReleaseInUse for APIs removed in 1.25
2067456 - OCP 4.11 should be firing APIRemovedInNextEUSReleaseInUse and APIRemovedInNextReleaseInUse for APIs removed in 1.25
2067995 - Internal registries with a big number of images delay pod creation due to recursive SELinux file context relabeling
2068115 - resource tab extension fails to show up
2068148 - [4.11] /etc/redhat-release symlink is broken
2068180 - OCP UPI on AWS with STS enabled is breaking the Ingress operator
2068181 - Event source powered with kamelet type source doesn't show associated deployment in resources tab
2068490 - OLM descriptors integration test failing
2068538 - Crashloop back-off popover visual spacing defects
2068601 - Potential etcd inconsistent revision and data occurs
2068613 - ClusterRoleUpdated/ClusterRoleBindingUpdated Spamming Event Logs
2068908 - Manual blog link change needed
2069068 - reconciling Prometheus Operator Deployment failed while upgrading from 4.7.46 to 4.8.35
2069075 - [Alibaba 4.11.0-0.nightly] cluster storage component in Progressing state
2069181 - Disabling community tasks is not working
2069198 - Flaky CI test in e2e/pipeline-ci
2069307 - oc mirror hangs when processing the Red Hat 4.10 catalog
2069312 - extend rest mappings with 'job' definition
2069457 - Ingress operator has superfluous finalizer deletion logic for LoadBalancer-type services
2069577 - ConsolePlugin example proxy authorize is wrong
2069612 - Special Resource Operator (SRO) - Crash when nodeSelector does not match any nodes
2069632 - Not able to download previous container logs from console
2069643 - ConfigMaps leftovers while uninstalling SpecialResource with configmap
2069654 - Creating VMs with YAML on Openshift Virtualization UI is missing labels flavor, os and workload
2069685 - UI crashes on load if a pinned resource model does not exist
2069705 - prometheus target "serviceMonitor/openshift-metallb-system/monitor-metallb-controller/0" has a failure with "server returned HTTP status 502 Bad Gateway"
2069740 - On-prem loadbalancer ports conflict with kube node port range
2069760 - In developer perspective divider does not show up in navigation
2069904 - Sync upstream 1.18.1 downstream
2069914 - Application Launcher groupings are not case-sensitive
2069997 - [4.11] should add user containers in /etc/subuid and /etc/subgid to support run pods in user namespaces
2070000 - Add warning alerts for installing standalone k8s-nmstate
2070020 - InContext doesn't work for Event Sources
2070047 - Kuryr: Prometheus when installed on the cluster shouldn't report any alerts in firing state apart from Watchdog and AlertmanagerReceiversNotConfigured
2070160 - Copy-to-clipboard and elements cause display issues for ACM dynamic plugins
2070172 - SRO uses the chart's name as Helm release, not the SpecialResource's
2070181 - [MAPO] serverGroupName ignored
2070457 - Image vulnerability Popover overflows from the visible area
2070674 - [GCP] Routes get timed out and nonresponsive after creating 2K service routes
2070703 - some ipv6 network policy tests consistently failing
2070720 - [UI] Filter reset doesn't work on Pods/Secrets/etc pages and complete list disappears
2070731 - details switch label is not clickable on add page
2070791 - [GCP]Image registry are crash on cluster with GCP workload identity enabled
2070792 - service "openshift-marketplace/marketplace-operator-metrics" is not annotated with capability
2070805 - ClusterVersion: could not download the update
2070854 - cv.status.capabilities.enabledCapabilities doesn't show the day-2 enabled caps when there are errors on resources update
2070887 - Cv condition ImplicitlyEnabledCapabilities doesn't complain about the disabled capabilities which is previously enabled
2070888 - Cannot bind driver vfio-pci when apply sriovnodenetworkpolicy with type vfio-pci
2070929 - OVN-Kubernetes: EgressIP breaks access from a pod with EgressIP to other host networked pods on different nodes
2071019 - rebase vsphere csi driver 2.5
2071021 - vsphere driver has snapshot support missing
2071033 - conditionally relabel volumes given annotation not working - SELinux context match is wrong
2071139 - Ingress pods scheduled on the same node
2071364 - All image building tests are broken with " error: build error: attempting to convert BUILD_LOGLEVEL env var value "" to integer: strconv.Atoi: parsing "": invalid syntax
2071578 - Monitoring navigation should not be shown if monitoring is not available (CRC)
2071599 - RoleBindings are not getting updated for ClusterRole in OpenShift Web Console
2071614 - Updating EgressNetworkPolicy rejecting with error UnsupportedMediaType
2071617 - remove Kubevirt extensions in favour of dynamic plugin
2071650 - ovn-k ovn_db_cluster metrics are not exposed for SNO
2071691 - OCP Console global PatternFly overrides adds padding to breadcrumbs
2071700 - v1 events show "Generated from" message without the source/reporting component
2071715 - Shows 404 on Environment nav in Developer console
2071719 - OCP Console global PatternFly overrides link button whitespace
2071747 - Link to documentation from the overview page goes to a missing link
2071761 - Translation Keys Are Not Namespaced
2071799 - Multus CNI should exit cleanly on CNI DEL when the API server is unavailable
2071859 - ovn-kube pods spec.dnsPolicy should be Default
2071914 - cloud-network-config-controller 4.10.5: Error building cloud provider client, err: %vfailed to initialize Azure environment: autorest/azure: There is no cloud environment matching the name ""
2071998 - Cluster-version operator should share details of signature verification when it fails in 'Force: true' updates
2072106 - cluster-ingress-operator tests do not build on go 1.18
2072134 - Routes are not accessible within cluster from hostnet pods
2072139 - vsphere driver has permissions to create/update PV objects
2072154 - Secondary Scheduler operator panics
2072171 - Test "[sig-network][Feature:EgressFirewall] EgressFirewall should have no impact outside its namespace [Suite:openshift/conformance/parallel]" fails
2072195 - machine api doesn't issue client cert when AWS DNS suffix missing
2072215 - Whereabouts ip-reconciler should be opt-in and not required
2072389 - CVO exits upgrade immediately rather than waiting for etcd backup
2072439 - openshift-cloud-network-config-controller reports wrong range of IP addresses for Azure worker nodes
2072455 - make bundle overwrites supported-nic-ids_v1_configmap.yaml
2072570 - The namespace titles for operator-install-single-namespace test keep changing
2072710 - Perfscale - pods time out waiting for OVS port binding (ovn-installed)
2072766 - Cluster Network Operator stuck in CrashLoopBackOff when scheduled to same master
2072780 - OVN kube-master does not clear NetworkUnavailableCondition on GCP BYOH Windows node
2072793 - Drop "Used Filesystem" from "Virtualization -> Overview"
2072805 - Observe > Dashboards: $__range variables cause PromQL query errors
2072807 - Observe > Dashboards: Missing panel.styles attribute for table panels causes JS error
2072842 - (release-4.11) Gather namespace names with overlapping UID ranges
2072883 - sometimes monitoring dashboards charts can not be loaded successfully
2072891 - Update gcp-pd-csi-driver to 1.5.1
2072911 - panic observed in kubedescheduler operator
2072924 - periodic-ci-openshift-release-master-ci-4.11-e2e-azure-techpreview-serial
2072957 - ContainerCreateError loop leads to several thousand empty logfiles in the file system
2072998 - update aws-efs-csi-driver to the latest version
2072999 - Navigate from logs of selected Tekton task instead of last one
2073021 - [vsphere] Failed to update OS on master nodes
2073112 - Prometheus (uwm) externalLabels not showing always in alerts.
2073113 - Warning is logged to the console: W0407 Defaulting of registry auth file to "${HOME}/.docker/config.json" is deprecated.
2073176 - removing data in form does not remove data from yaml editor
2073197 - Error in Spoke/SNO agent: Source image rejected: A signature was required, but no signature exists
2073329 - Pipelines-plugin- Having different title for Pipeline Runs tab, on Pipeline Details page it's "PipelineRuns" and on Repository Details page it's "Pipeline Runs".
2073373 - Update azure-disk-csi-driver to 1.16.0
2073378 - failed egressIP assignment - cloud-network-config-controller does not delete failed cloudprivateipconfig
2073398 - machine-api-provider-openstack does not clean up OSP ports after failed server provisioning
2073436 - Update azure-file-csi-driver to v1.14.0
2073437 - Topology performance: Firehose/useK8sWatchResources cache can return unexpected data format if isList differs on multiple calls
2073452 - [sig-network] pods should successfully create sandboxes by other - failed (add)
2073473 - [OVN SCALE][ovn-northd] Unnecessary SB record no-op changes added to SB transaction.
2073522 - Update ibm-vpc-block-csi-driver to v4.2.0
2073525 - Update vpc-node-label-updater to v4.1.2
2073901 - Installation failed due to etcd operator Err:DefragControllerDegraded: failed to dial endpoint https://10.0.0.7:2379 with maintenance client: context canceled
2073937 - Invalid retention time and invalid retention size should be validated at one place and have error log in one place for UMW
2073938 - APIRemovedInNextEUSReleaseInUse alert for runtimeclasses
2073945 - APIRemovedInNextEUSReleaseInUse alert for podsecuritypolicies
2073972 - Invalid retention time and invalid retention size should be validated at one place and have error log in one place for platform monitoring
2074009 - [OVN] ovn-northd doesn't clean Chassis_Private record after scale down to 0 a machineSet
2074031 - Admins should be able to tune garbage collector aggressiveness (GOGC) for kube-apiserver if necessary
2074062 - Node Tuning Operator(NTO) - Cloud provider profile rollback doesn't work well
2074084 - CMO metrics not visible in the OCP webconsole UI
2074100 - CRD filtering according to name broken
2074210 - asia-south2, australia-southeast2, and southamerica-west1 missing from GCP regions
2074237 - oc new-app --image-stream flag behavior is unclear
2074243 - DefaultPlacement API allow empty enum value and remove default
2074447 - cluster-dashboard: CPU Utilisation iowait and steal
2074465 - PipelineRun fails in import from Git flow if "main" branch is default
2074471 - Cannot delete namespace with a LB type svc and Kuryr when ExternalCloudProvider is enabled
2074475 - [e2e][automation] kubevirt plugin cypress tests fail
2074483 - coreos-installer doesn't work on Dell machines
2074544 - e2e-metal-ipi-ovn-ipv6 failing due to recent CEO changes
2074585 - MCG standalone deployment page goes blank when the KMS option is enabled
2074606 - occm does not have permissions to annotate SVC objects
2074612 - Operator fails to install due to service name lookup failure
2074613 - nodeip-configuration container incorrectly attempts to relabel /etc/systemd/system
2074635 - Unable to start Web Terminal after deleting existing instance
2074659 - AWS installconfig ValidateForProvisioning always provides blank values to validate zone records
2074706 - Custom EC2 endpoint is not considered by AWS EBS CSI driver
2074710 - Transition to go-ovirt-client
2074756 - Namespace column provide wrong data in ClusterRole Details -> Rolebindings tab
2074767 - Metrics page show incorrect values due to metrics level config
2074807 - NodeFilesystemSpaceFillingUp alert fires even before kubelet GC kicks in
2074902 - oc debug node/nodename -- chroot /host somecommand should exit with non-zero when the sub-command failed
2075015 - etcd-guard connection refused event repeating pathologically (payload blocking)
2075024 - Metal upgrades permafailing on metal3 containers crash looping
2075050 - oc-mirror fails to calculate between two channels with different prefixes for the same version of OCP
2075091 - Symptom Detection.Undiagnosed panic detected in pod
2075117 - Developer catalog: Order dropdown (A-Z, Z-A) is miss-aligned (in a separate row)
2075149 - Trigger Translations When Extensions Are Updated
2075189 - Imports from dynamic-plugin-sdk lead to failed module resolution errors
2075459 - Set up cluster on aws with rootvolumn io2 failed due to no iops despite it being configured
2075475 - OVN-Kubernetes: egress router pod (redirect mode), access from pod on different worker-node (redirect) doesn't work
2075478 - Bump documentationBaseURL to 4.11
2075491 - nmstate operator cannot be upgraded on SNO
2075575 - Local Dev Env - Prometheus 404 Call errors spam the console
2075584 - improve clarity of build failure messages when using csi shared resources but tech preview is not enabled
2075592 - Regression - Top of the web terminal drawer is missing a stroke/dropshadow
2075621 - Cluster upgrade.[sig-mco] Machine config pools complete upgrade
2075647 - 'oc adm upgrade ...' POSTs ClusterVersion, clobbering any unrecognized spec properties
2075671 - Cluster Ingress Operator K8S API cache contains duplicate objects
2075778 - Fix failing TestGetRegistrySamples test
2075873 - Bump recommended FCOS to 35.20220327.3.0
2076193 - oc patch command for the liveness probe and readiness probe parameters of an OpenShift router deployment doesn't take effect
2076270 - [OCPonRHV] MachineSet scale down operation fails to delete the worker VMs
2076277 - [RFE] [OCPonRHV] Add storage domain ID value to Compute/ControlPlain section in the machine object
2076290 - PTP operator readme missing documentation on BC setup via PTP config
2076297 - Router process ignores shutdown signal while starting up
2076323 - OLM blocks all operator installs if an openshift-marketplace catalogsource is unavailable
2076355 - The KubeletConfigController wrongly process multiple confs for a pool after having kubeletconfig in bootstrap
2076393 - [VSphere] survey fails to list datacenters
2076521 - Nodes in the same zone are not updated in the right order
2076527 - Pipeline Builder: Make unnecessary tekton hub API calls when the user types 'too fast'
2076544 - Whitespace (padding) is missing after an PatternFly update, already in 4.10
2076553 - Project access view replace group ref with user ref when updating their Role
2076614 - Missing Events component from the SDK API
2076637 - Configure metrics for vsphere driver to be reported
2076646 - openshift-install destroy unable to delete PVC disks in GCP if cluster identifier is longer than 22 characters
2076793 - CVO exits upgrade immediately rather than waiting for etcd backup
2076831 - [ocp4.11]Mem/cpu high utilization by apiserver/etcd for cluster stayed 10 hours
2076877 - network operator tracker to switch to use flowcontrol.apiserver.k8s.io/v1beta2 instead v1beta1 to be deprecated in k8s 1.26
2076880 - OKD: add cluster domain to the uploaded vm configs so that 30-local-dns-prepender can use it
2076975 - Metric unset during static route conversion in configure-ovs.sh
2076984 - TestConfigurableRouteNoConsumingUserNoRBAC fails in CI
2077050 - OCP should default to pd-ssd disk type on GCP
2077150 - Breadcrumbs on a few screens don't have correct top margin spacing
2077160 - Update owners for openshift/cluster-etcd-operator
2077357 - [release-4.11] 200ms packet delay with OVN controller turn on
2077373 - Accessibility warning on developer perspective
2077386 - Import page shows untranslated values for the route advanced routing>security options (devconsole~Edge)
2077457 - failure in test case "[sig-network][Feature:Router] The HAProxy router should serve the correct routes when running with the haproxy config manager"
2077497 - Rebase etcd to 3.5.3 or later
2077597 - machine-api-controller is not taking the proxy configuration when it needs to reach the RHV API
2077599 - OCP should alert users if they are on vsphere version <7.0.2
2077662 - AWS Platform Provisioning Check incorrectly identifies record as part of domain of cluster
2077797 - LSO pods don't have any resource requests
2077851 - "make vendor" target is not working
2077943 - If there is a service with multiple ports, and the route uses 8080, when editing the 8080 port isn't replaced, but a random port gets replaced and 8080 still stays
2077994 - Publish RHEL CoreOS AMIs in AWS ap-southeast-3 region
2078013 - drop multipathd.socket workaround
2078375 - When using the wizard with template using data source the resulting vm use pvc source
2078396 - [OVN AWS] EgressIP was not balanced to another egress node after original node was removed egress label
2078431 - [OCPonRHV] - ERROR failed to instantiate provider "openshift/local/ovirt" to obtain schema: ERROR fork/exec
2078526 - Multicast breaks after master node reboot/sync
2078573 - SDN CNI -Fail to create nncp when vxlan is up
2078634 - CRI-O not killing Calico CNI stalled (zombie) processes.
2078698 - search box may not completely remove content
2078769 - Different not translated filter group names (incl. Secret, Pipeline, PIpelineRun)
2078778 - [4.11] oc get ValidatingWebhookConfiguration,MutatingWebhookConfiguration fails and caused "apiserver panic'd...http2: panic serving xxx.xx.xxx.21:49748: cannot deep copy int" when AllRequestBodies audit-profile is used.
2078781 - PreflightValidation does not handle multiarch images
2078866 - [BM][IPI] Installation with bonds fail - DaemonSet "openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress
2078875 - OpenShift Installer fail to remove Neutron ports
2078895 - [OCPonRHV]-"cow" unsupported value in format field in install-config.yaml
2078910 - CNO spitting out ".spec.groups[0].rules[4].runbook_url: field not declared in schema"
2078945 - Ensure only one apiserver-watcher process is active on a node.
2078954 - network-metrics-daemon makes costly global pod list calls scaling per node
2078969 - Avoid update races between old and new NTO operands during cluster upgrades
2079012 - egressIP not migrated to correct workers after deleting machineset it was assigned
2079062 - Test for console demo plugin toast notification needs to be increased for ci testing
2079197 - [RFE] alert when more than one default storage class is detected
2079216 - Partial cluster update reference doc link returns 404
2079292 - containers prometheus-operator/kube-rbac-proxy violate PodSecurity
2079315 - (release-4.11) Gather ODF config data with Insights
2079422 - Deprecated 1.25 API call
2079439 - OVN Pods Assigned Same IP Simultaneously
2079468 - Enhance the waitForIngressControllerCondition for better CI results
2079500 - okd-baremetal-install uses fcos for bootstrap but rhcos for cluster
2079610 - Operatorhub status shows errors
2079663 - change default image features in RBD storageclass
2079673 - Add flags to disable migrated code
2079685 - Storageclass creation page with "Enable encryption" is not displaying saved KMS connection details when vaulttenantsa details are available in csi-kms-details config
2079724 - cluster-etcd-operator - disable defrag-controller as there is unpredictable impact on large OpenShift Container Platform 4 - Cluster
2079788 - Operator restarts while applying the acm-ice example
2079789 - cluster drops ImplicitlyEnabledCapabilities during upgrade
2079803 - Upgrade-triggered etcd backup will be skip during serial upgrade
2079805 - Secondary scheduler operator should comply to restricted pod security level
2079818 - Developer catalog installation overlay (modal?) shows a duplicated padding
2079837 - [RFE] Hub/Spoke example with daemonset
2079844 - EFS cluster csi driver status stuck in AWSEFSDriverCredentialsRequestControllerProgressing with sts installation
2079845 - The Event Sinks catalog page now has a blank space on the left
2079869 - Builds for multiple kernel versions should be ran in parallel when possible
2079913 - [4.10] APIRemovedInNextEUSReleaseInUse alert for OVN endpointslices
2079961 - The search results accordion has no spacing between it and the side navigation bar.
2079965 - [rebase v1.24] [sig-node] PodOSRejection [NodeConformance] Kubelet should reject pod when the node OS doesn't match pod's OS [Suite:openshift/conformance/parallel] [Suite:k8s] 2080054 - TAGS arg for installer-artifacts images is not propagated to build images 2080153 - aws-load-balancer-operator-controller-manager pod stuck in ContainerCreating status 2080197 - etcd leader changes produce test churn during early stage of test 2080255 - EgressIP broken on AWS with OpenShiftSDN / latest nightly build 2080267 - [Fresh Installation] Openshift-machine-config-operator namespace is flooded with events related to clusterrole, clusterrolebinding 2080279 - CVE-2022-29810 go-getter: writes SSH credentials into logfile, exposing sensitive credentials to local uses 2080379 - Group all e2e tests as parallel or serial 2080387 - Visual connector not appear between the node if a node get created using "move connector" to a different application 2080416 - oc bash-completion problem 2080429 - CVO must ensure non-upgrade related changes are saved when desired payload fails to load 2080446 - Sync ironic images with latest bug fixes packages 2080679 - [rebase v1.24] [sig-cli] test failure 2080681 - [rebase v1.24] [sig-cluster-lifecycle] CSRs from machines that are not recognized by the cloud provider are not approved [Suite:openshift/conformance/parallel] 2080687 - [rebase v1.24] [sig-network][Feature:Router] tests are failing 2080873 - Topology graph crashes after update to 4.11 when Layout 2 (ColaForce) was selected previously 2080964 - Cluster operator special-resource-operator is always in Failing state with reason: "Reconciling simple-kmod" 2080976 - Avoid hooks config maps when hooks are empty 2081012 - [rebase v1.24] [sig-devex][Feature:OpenShiftControllerManager] TestAutomaticCreationOfPullSecrets [Suite:openshift/conformance/parallel] 2081018 - [rebase v1.24] [sig-imageregistry][Feature:Image] oc tag should work when only imagestreams api is available 2081021 - 
[rebase v1.24] [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources 2081062 - Unrevert RHCOS back to 8.6 2081067 - admin dev-console /settings/cluster should point out history may be excerpted 2081069 - [sig-network] pods should successfully create sandboxes by adding pod to network 2081081 - PreflightValidation "odd number of arguments passed as key-value pairs for logging" error 2081084 - [rebase v1.24] [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed 2081087 - [rebase v1.24] [sig-auth] ServiceAccounts should allow opting out of API token automount 2081119 -oc explain
output of default overlaySize is outdated 2081172 - MetallLB: YAML view in webconsole does not show all the available key value pairs of all the objects 2081201 - cloud-init User check for Windows VM refuses to accept capitalized usernames 2081447 - Ingress operator performs spurious updates in response to API's defaulting of router deployment's router container's ports' protocol field 2081562 - lifecycle.posStart hook does not have network connectivity. 2081685 - Typo in NNCE Conditions 2081743 - [e2e] tests failing 2081788 - MetalLB: the crds are not validated until metallb is deployed 2081821 - SpecialResourceModule CRD is not installed after deploying SRO operator using brew bundle image via OLM 2081895 - Use the managed resource (and not the manifest) for resource health checks 2081997 - disconnected insights operator remains degraded after editing pull secret 2082075 - Removing huge amount of ports takes a lot of time. 2082235 - CNO exposes a generic apiserver that apparently does nothing 2082283 - Transition to new oVirt Terraform provider 2082360 - OCP 4.10.4, CNI: SDN; Whereabouts IPAM: Duplicate IP address with bond-cni 2082380 - [4.10.z] customize wizard is crashed 2082403 - [LSO] No new build local-storage-operator-metadata-container created 2082428 - oc patch healthCheckInterval with invalid "5 s" to the ingress-controller successfully 2082441 - [UPI] aws-load-balancer-operator-controller-manager failed to get VPC ID in UPI on AWS 2082492 - [IPI IBM]Can't create image-registry-private-configuration secret with error "specified resource key credentials does not contain HMAC keys" 2082535 - [OCPonRHV]-workers are cloned when "clone: false" is specified in install-config.yaml 2082538 - apirequests limits of Cluster CAPI Operator are too low for GCP platform 2082566 - OCP dashboard fails to load when the query to Prometheus takes more than 30s to return 2082604 - [IBMCloud][x86_64] IBM VPC does not properly support RHCOS Custom Image tagging 2082667 - No 
new machines provisioned while machineset controller drained old nodes for change to machineset 2082687 - [IBM Cloud][x86_64][CCCMO] IBM x86_64 CCM using unsupported --port argument 2082763 - Cluster install stuck on the applying for operatorhub "cluster" 2083149 - "Update blocked" label incorrectly displays on new minor versions in the "Other available paths" modal 2083153 - Unable to use application credentials for Manila PVC creation on OpenStack 2083154 - Dynamic plugin sdk tsdoc generation does not render docs for parameters 2083219 - DPU network operator doesn't deal with c1... inteface names 2083237 - [vsphere-ipi] Machineset scale up process delay 2083299 - SRO does not fetch mirrored DTK images in disconnected clusters 2083445 - [FJ OCP4.11 Bug]: RAID setting during IPI cluster deployment fails if iRMC port number is specified 2083451 - Update external serivces URLs to console.redhat.com 2083459 - Make numvfs > totalvfs error message more verbose 2083466 - Failed to create clusters on AWS C2S/SC2S due to image-registry MissingEndpoint error 2083514 - Operator ignores managementState Removed 2083641 - OpenShift Console Knative Eventing ContainerSource generates wrong api version when pointed to k8s Service 2083756 - Linkify not upgradeable message on ClusterSettings page 2083770 - Release image signature manifest filename extension is yaml 2083919 - openshift4/ose-operator-registry:4.10.0 having security vulnerabilities 2083942 - Learner promotion can temporarily fail with rpc not supported for learner errors 2083964 - Sink resources dropdown is not persisted in form yaml switcher in event source creation form 2083999 - "--prune-over-size-limit" is not working as expected 2084079 - prometheus route is not updated to "path: /api" after upgrade from 4.10 to 4.11 2084081 - nmstate-operator installed cluster on POWER shows issues while adding new dhcp interface 2084124 - The Update cluster modal includes a broken link 2084215 - Resource configmap 
"openshift-machine-api/kube-rbac-proxy" is defined by 2 manifests 2084249 - panic in ovn pod from an e2e-aws-single-node-serial nightly run 2084280 - GCP API Checks Fail if non-required APIs are not enabled 2084288 - "alert/Watchdog must have no gaps or changes" failing after bump 2084292 - Access to dashboard resources is needed in dynamic plugin SDK 2084331 - Resource with multiple capabilities included unless all capabilities are disabled 2084433 - Podsecurity violation error getting logged for ingresscontroller during deployment. 2084438 - Change Ping source spec.jsonData (deprecated) field to spec.data 2084441 - [IPI-Azure]fail to check the vm capabilities in install cluster 2084459 - Topology list view crashes when switching from chart view after moving sink from knative service to uri 2084463 - 5 control plane replica tests fail on ephemeral volumes 2084539 - update azure arm templates to support customer provided vnet 2084545 - [rebase v1.24] cluster-api-operator causes all techpreview tests to fail 2084580 - [4.10] No cluster name sanity validation - cluster name with a dot (".") character 2084615 - Add to navigation option on search page is not properly aligned 2084635 - PipelineRun creation from the GUI for a Pipeline with 2 workspaces hardcode the PVC storageclass 2084732 - A special resource that was created in OCP 4.9 can't be deleted after an upgrade to 4.10 2085187 - installer-artifacts fails to build with go 1.18 2085326 - kube-state-metrics is tripping APIRemovedInNextEUSReleaseInUse 2085336 - [IPI-Azure] Fail to create the worker node which HyperVGenerations is V2 or V1 and vmNetworkingType is Accelerated 2085380 - [IPI-Azure] Incorrect error prompt validate VM image and instance HyperV gen match when install cluster 2085407 - There is no Edit link/icon for labels on Node details page 2085721 - customization controller image name is wrong 2086056 - Missing doc for OVS HW offload 2086086 - Update Cluster Sample Operator dependencies and libraries 
for OCP 4.11 2086092 - update kube to v.24 2086143 - CNO uses too much memory 2086198 - Cluster CAPI Operator creates unnecessary defaulting webhooks 2086301 - kubernetes nmstate pods are not running after creating instance 2086408 - Podsecurity violation error getting logged for externalDNS operand pods during deployment 2086417 - Pipeline created from add flow has GIT Revision as required field 2086437 - EgressQoS CRD not available 2086450 - aws-load-balancer-controller-cluster pod logged Podsecurity violation error during deployment 2086459 - oc adm inspect fails when one of resources not exist 2086461 - CNO probes MTU unnecessarily in Hypershift, making cluster startup take too long 2086465 - External identity providers should log login attempts in the audit trail 2086469 - No data about title 'API Request Duration by Verb - 99th Percentile' display on the dashboard 'API Performance' 2086483 - baremetal-runtimecfg k8s dependencies should be on a par with 1.24 rebase 2086505 - Update oauth-server images to be consistent with ART 2086519 - workloads must comply to restricted security policy 2086521 - Icons of Knative actions are not clearly visible on the context menu in the dark mode 2086542 - Cannot create service binding through drag and drop 2086544 - ovn-k master daemonset on hypershift shouldn't log token 2086546 - Service binding connector is not visible in the dark mode 2086718 - PowerVS destroy code does not work 2086728 - [hypershift] Move drain to controller 2086731 - Vertical pod autoscaler operator needs a 4.11 bump 2086734 - Update csi driver images to be consistent with ART 2086737 - cloud-provider-openstack rebase to kubernetes v1.24 2086754 - Cluster resource override operator needs a 4.11 bump 2086759 - [IPI] OCP-4.11 baremetal - boot partition is not mounted on temporary directory 2086791 - Azure: Validate UltraSSD instances in multi-zone regions 2086851 - pods with multiple external gateways may only be have ECMP routes for one gateway 2086936 
- vsphere ipi should use cores by default instead of sockets 2086958 - flaky e2e in kube-controller-manager-operator TestPodDisruptionBudgetAtLimitAlert 2086959 - flaky e2e in kube-controller-manager-operator TestLogLevel 2086962 - oc-mirror publishes metadata with --dry-run when publishing to mirror 2086964 - oc-mirror fails on differential run when mirroring a package with multiple channels specified 2086972 - oc-mirror does not error invalid metadata is passed to the describe command 2086974 - oc-mirror does not work with headsonly for operator 4.8 2087024 - The oc-mirror result mapping.txt is not correct , can?t be used byoc image mirror
command 2087026 - DTK's imagestream is missing from OCP 4.11 payload 2087037 - Cluster Autoscaler should use K8s 1.24 dependencies 2087039 - Machine API components should use K8s 1.24 dependencies 2087042 - Cloud providers components should use K8s 1.24 dependencies 2087084 - remove unintentional nic support 2087103 - "Updating to release image" from 'oc' should point out that the cluster-version operator hasn't accepted the update 2087114 - Add simple-procfs-kmod in modprobe example in README.md 2087213 - Spoke BMH stuck "inspecting" when deployed via ZTP in 4.11 OCP hub 2087271 - oc-mirror does not check for existing workspace when performing mirror2mirror synchronization 2087556 - Failed to render DPU ovnk manifests 2087579 ---keep-manifest-list=true
does not work for 'oc adm release new'
, only picks up the linux/amd64 manifest from the manifest list 2087680 - [Descheduler] Sync with sigs.k8s.io/descheduler 2087684 - KCMO should not be able to apply LowUpdateSlowReaction from Default WorkerLatencyProfile 2087685 - KASO should not be able to apply LowUpdateSlowReaction from Default WorkerLatencyProfile 2087687 - MCO does not generate event when user applies Default -> LowUpdateSlowReaction WorkerLatencyProfile 2087764 - Rewrite the registry backend will hit error 2087771 - [tracker] NetworkManager 1.36.0 loses DHCP lease and doesn't try again 2087772 - Bindable badge causes some layout issues with the side panel of bindable operator backed services 2087942 - CNO references images that are divergent from ART 2087944 - KafkaSink Node visualized incorrectly 2087983 - remove etcd_perf before restore 2087993 - PreflightValidation many "msg":"TODO: preflight checks" in the operator log 2088130 - oc-mirror init does not allow for automated testing 2088161 - Match dockerfile image name with the name used in the release repo 2088248 - Create HANA VM does not use values from customized HANA templates 2088304 - ose-console: enable source containers for open source requirements 2088428 - clusteroperator/baremetal stays in progressing: Applying metal3 resources state on a fresh install 2088431 - AvoidBuggyIPs field of addresspool should be removed 2088483 - oc adm catalog mirror returns 0 even if there are errors 2088489 - Topology list does not allow selecting an application group anymore (again) 2088533 - CRDs for openshift.io should have subresource.status fails on sharedconfigmaps.sharedresource and sharedsecrets.sharedresource 2088535 - MetalLB: Enable debug log level for downstream CI 2088541 - Default CatalogSources in openshift-marketplace namespace keeps throwing pod security admission warnings: would violate PodSecurity "restricted:v1.24"
2088561 - BMH unable to start inspection: File name too long 2088634 - oc-mirror does not fail when catalog is invalid 2088660 - Nutanix IPI installation inside container failed 2088663 - Better to change the default value of --max-per-registry to 6 2089163 - NMState CRD out of sync with code 2089191 - should remove grafana from cluster-monitoring-config configmap in hypershift cluster 2089224 - openshift-monitoring/cluster-monitoring-config configmap always revert to default setting 2089254 - CAPI operator: Rotate token secret if its older than 30 minutes 2089276 - origin tests for egressIP and azure fail 2089295 - [Nutanix]machine stuck in Deleting phase when delete a machineset whose replicas>=2 and machine is Provisioning phase on Nutanix 2089309 - [OCP 4.11] Ironic inspector image fails to clean disks that are part of a multipath setup if they are passive paths 2089334 - All cloud providers should use service account credentials 2089344 - Failed to deploy simple-kmod 2089350 - Rebase sdn to 1.24 2089387 - LSO not taking mpath. ignoring device 2089392 - 120 node baremetal upgrade from 4.9.29 --> 4.10.13 crashloops on machine-approver 2089396 - oc-mirror does not show pruned image plan 2089405 - New topology package shows gray build icons instead of green/red icons for builds and pipelines 2089419 - do not block 4.10 to 4.11 upgrades if an existing CSI driver is found. Instead, warn about presence of third party CSI driver 2089488 - Special resources are missing the managementState field 2089563 - Update Power VS MAPI to use api's from openshift/api repo 2089574 - UWM prometheus-operator pod can't start up due to no master node in hypershift cluster 2089675 - Could not move Serverless Service without Revision (or while starting?) 2089681 - [Hypershift] EgressIP doesn't work in hypershift guest cluster 2089682 - Installer expects all nutanix subnets to have a cluster reference which is not the case for e.g. 
overlay networks 2089687 - alert message of MCDDrainError needs to be updated for new drain controller 2089696 - CR reconciliation is stuck in daemonset lifecycle 2089716 - [4.11][reliability]one worker node became NotReady on which ovnkube-node pod's memory increased sharply 2089719 - acm-simple-kmod fails to build 2089720 - [Hypershift] ICSP doesn't work for the guest cluster 2089743 - acm-ice fails to deploy: helm chart does not appear to be a gzipped archive 2089773 - Pipeline status filter and status colors doesn't work correctly with non-english languages 2089775 - keepalived can keep ingress VIP on wrong node under certain circumstances 2089805 - Config duration metrics aren't exposed 2089827 - MetalLB CI - backward compatible tests are failing due to the order of delete 2089909 - PTP e2e testing not working on SNO cluster 2089918 - oc-mirror skip-missing still returns 404 errors when images do not exist 2089930 - Bump OVN to 22.06 2089933 - Pods do not post readiness status on termination 2089968 - Multus CNI daemonset should use hostPath mounts with type: directory 2089973 - bump libs to k8s 1.24 for OCP 4.11 2089996 - Unnecessary yarn install runs in e2e tests 2090017 - Enable source containers to meet open source requirements 2090049 - destroying GCP cluster which has a compute node without infra id in name would fail to delete 2 k8s firewall-rules and VPC network 2090092 - Will hit error if specify the channel not the latest 2090151 - [RHEL scale up] increase the wait time so that the node has enough time to get ready 2090178 - VM SSH command generated by UI points at api VIP 2090182 - [Nutanix]Create a machineset with invalid image, machine stuck in "Provisioning" phase 2090236 - Only reconcile annotations and status for clusters 2090266 - oc adm release extract is failing on mutli arch image 2090268 - [AWS EFS] Operator not getting installed successfully on Hypershift Guest cluster 2090336 - Multus logging should be disabled prior to release 2090343 - 
Multus debug logging should be enabled temporarily for debugging podsandbox creation failures. 2090358 - Initiating drain log message is displayed before the drain actually starts 2090359 - Nutanix mapi-controller: misleading error message when the failure is caused by wrong credentials 2090405 - [tracker] weird port mapping with asymmetric traffic [rhel-8.6.0.z] 2090430 - gofmt code 2090436 - It takes 30min-60min to update the machine count in custom MachineConfigPools (MCPs) when a node is removed from the pool 2090437 - Bump CNO to k8s 1.24 2090465 - golang version mismatch 2090487 - Change default SNO Networking Type and disallow OpenShiftSDN a supported networking Type 2090537 - failure in ovndb migration when db is not ready in HA mode 2090549 - dpu-network-operator shall be able to run on amd64 arch platform 2090621 - Metal3 plugin does not work properly with updated NodeMaintenance CRD 2090627 - Git commit and branch are empty in MetalLB log 2090692 - Bump to latest 1.24 k8s release 2090730 - must-gather should include multus logs. 
2090731 - nmstate deploys two instances of webhook on a single-node cluster 2090751 - oc image mirror skip-missing flag does not skip images 2090755 - MetalLB: BGPAdvertisement validation allows duplicate entries for ip pool selector, ip address pools, node selector and bgp peers 2090774 - Add Readme to plugin directory 2090794 - MachineConfigPool cannot apply a configuration after fixing the pods that caused a drain alert 2090809 - gm.ClockClass invalid syntax parse error in linux ptp daemon logs 2090816 - OCP 4.8 Baremetal IPI installation failure: "Bootstrap failed to complete: timed out waiting for the condition" 2090819 - oc-mirror does not catch invalid registry input when a namespace is specified 2090827 - Rebase CoreDNS to 1.9.2 and k8s 1.24 2090829 - Bump OpenShift router to k8s 1.24 2090838 - Flaky test: ignore flapping host interface 'tunbr' 2090843 - addLogicalPort() performance/scale optimizations 2090895 - Dynamic plugin nav extension "startsWith" property does not work 2090929 - [etcd] cluster-backup.sh script has a conflict to use the '/etc/kubernetes/static-pod-certs' folder if a custom API certificate is defined 2090993 - [AI Day2] Worker node overview page crashes in Openshift console with TypeError 2091029 - Cancel rollout action only appears when rollout is completed 2091030 - Some BM may fail booting with default bootMode strategy 2091033 - [Descheduler]: provide ability to override included/excluded namespaces 2091087 - ODC Helm backend Owners file needs updates 2091106 - Dependabot alert: Unhandled exception in gopkg.in/yaml.v3 2091142 - Dependabot alert: Unhandled exception in gopkg.in/yaml.v3 2091167 - IPsec runtime enabling not work in hypershift 2091218 - Update Dev Console Helm backend to use helm 3.9.0 2091433 - Update AWS instance types 2091542 - Error Loading/404 not found page shown after clicking "Current namespace only" 2091547 - Internet connection test with proxy permanently fails 2091567 - oVirt CSI driver should use latest 
go-ovirt-client 2091595 - Alertmanager configuration can't use OpsGenie's entity field when AlertmanagerConfig is enabled 2091599 - PTP Dual Nic | Extend Events 4.11 - Up/Down master interface affects all the other interfaces in the same NIC according to the events and metrics 2091603 - WebSocket connection restarts when switching tabs in WebTerminal 2091613 - simple-kmod fails to build due to missing KVC 2091634 - OVS 2.15 stops handling traffic once ovs-dpctl(2.17.2) is used against it 2091730 - MCO e2e tests are failing with "No token found in openshift-monitoring secrets" 2091746 - "Oh no! Something went wrong" shown after user creates MCP without 'spec' 2091770 - CVO gets stuck downloading an upgrade, with the version pod complaining about invalid options 2091854 - clusteroperator status filter doesn't match all values in Status column 2091901 - Log stream paused right after updating log lines in Web Console in OCP4.10 2091902 - unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server has received too many requests and has asked us to try again later 2091990 - wrong external-ids for ovn-controller lflow-cache-limit-kb 2092003 - PR 3162 | BZ 2084450 - invalid URL schema for AWS causes tests to perma fail and break the cloud-network-config-controller 2092041 - Bump cluster-dns-operator to k8s 1.24 2092042 - Bump cluster-ingress-operator to k8s 1.24 2092047 - Kube 1.24 rebase for cloud-network-config-controller 2092137 - Search doesn't show all entries when name filter is cleared 2092296 - Change Default MachineCIDR of Power VS Platform from 10.x to 192.168.0.0/16 2092390 - [RDR] [UI] Multiple instances of Object Bucket, Object Bucket Claims and 'Overview' tab is present under Storage section on the Hub cluster when navigated back from the Managed cluster using the Hybrid console dropdown 2092395 - etcdHighNumberOfFailedGRPCRequests alerts with wrong results 2092408 - Wrong icon is used in the virtualization overview permissions card 
2092414 - In virtualization overview "running vm per templates" template list can be improved 2092442 - Minimum time between drain retries is not the expected one 2092464 - marketplace catalog defaults to v4.10 2092473 - libovsdb performance backports 2092495 - ovn: use up to 4 northd threads in non-SNO clusters 2092502 - [azure-file-csi-driver] Stop shipping a NFS StorageClass 2092509 - Invalid memory address error if non existing caBundle is configured in DNS-over-TLS using ForwardPlugins 2092572 - acm-simple-kmod chart should create the namespace on the spoke cluster 2092579 - Don't retry pod deletion if objects are not existing 2092650 - [BM IPI with Provisioning Network] Worker nodes are not provisioned: ironic-agent is stuck before writing into disks 2092703 - Incorrect mount propagation information in container status 2092815 - can't delete the unwanted image from registry by oc-mirror 2092851 - [Descheduler]: allow to customize the LowNodeUtilization strategy thresholds 2092867 - make repository name unique in acm-ice/acm-simple-kmod examples 2092880 - etcdHighNumberOfLeaderChanges returns incorrect number of leadership changes 2092887 - oc-mirror list releases command uses filter-options flag instead of filter-by-os 2092889 - Incorrect updating of EgressACLs using direction "from-lport" 2092918 - CVE-2022-30321 go-getter: unsafe download (issue 1 of 3) 2092923 - CVE-2022-30322 go-getter: unsafe download (issue 2 of 3) 2092925 - CVE-2022-30323 go-getter: unsafe download (issue 3 of 3) 2092928 - CVE-2022-26945 go-getter: command injection vulnerability 2092937 - WebScale: OVN-k8s forwarding to external-gw over the secondary interfaces failing 2092966 - [OCP 4.11] [azure] /etc/udev/rules.d/66-azure-storage.rules missing from initramfs 2093044 - Azure machine-api-provider-azure Availability Set Name Length Limit 2093047 - Dynamic Plugins: Generated API markdown duplicates 'checkAccess'
and 'useAccessReview'
doc 2093126 - [4.11] Bootimage bump tracker 2093236 - DNS operator stopped reconciling after 4.10 to 4.11 upgrade | 4.11 nightly to 4.11 nightly upgrade 2093288 - Default catalogs fails liveness/readiness probes 2093357 - Upgrading sno spoke with acm-ice, causes the sno to get unreachable 2093368 - Installer orphans FIPs created for LoadBalancer Services on 'cluster destroy'
2093396 - Remove node-tainting for too-small MTU 2093445 - ManagementState reconciliation breaks SR 2093454 - Router proxy protocol doesn't work with dual-stack (IPv4 and IPv6) clusters 2093462 - Ingress Operator isn't reconciling the ingress cluster operator object 2093586 - Topology: Ctrl+space opens the quick search modal, but doesn't close it again 2093593 - Import from Devfile shows configuration options that shouldn't be there 2093597 - Import: Advanced option sentence is split into two parts and headlines have no padding 2093600 - Project access tab should apply new permissions before it deletes old ones 2093601 - Project access page doesn't allow the user to update the settings twice (without manually reloading the content) 2093783 - Should bump cluster-kube-descheduler-operator to kubernetes version V1.24 2093797 - 'oc registry login' with serviceaccount function needs update 2093819 - An etcd member for a new machine was never added to the cluster 2093930 - Gather console helm install totals metric 2093957 - Oc-mirror writes duplicate metadata to registry backend 2093986 - Podsecurity violation error getting logged for pod-identity-webhook 2093992 - Cluster version operator acknowledges upgrade failing on periodic-ci-openshift-release-master-nightly-4.11-e2e-metal-ipi-upgrade-ovn-ipv6 2094023 - Add Git Flow - Template Labels for Deployment show as DeploymentConfig 2094024 - bump oauth-apiserver deps to include 1.23.1 k8s that fixes etcd blips 2094039 - egressIP panics with nil pointer dereference 2094055 - Bump coreos-installer for s390x Secure Execution 2094071 - No runbook created for SouthboundStale alert 2094088 - Columns in NBDB may never be updated by OVNK 2094104 - Demo dynamic plugin image tests should be skipped when testing console-operator 2094152 - Alerts in the virtualization overview status card aren't filtered 2094196 - Add default and validating webhooks for Power VS MAPI 2094227 - Topology: Create Service Binding should not be the last option (even 
under delete) 2094239 - custom pool Nodes with 0 nodes are always populated in progress bar 2094303 - If og is configured with sa, operator installation will fail. 2094335 - [Nutanix] - debug logs are enabled by default in machine-controller 2094342 - apirequests limits of Cluster CAPI Operator are too low for Azure platform 2094438 - Make AWS URL parsing more lenient for GetNodeEgressIPConfiguration 2094525 - Allow automatic upgrades for efs operator 2094532 - ovn-windows CI jobs are broken 2094675 - PTP Dual Nic | Extend Events 4.11 - when phc2sys is killed we get a notification that the ptp4l physical master moved to free run 2094694 - [Nutanix] No cluster name sanity validation - cluster name with a dot (".") character 2094704 - Verbose log activated on kube-rbac-proxy in deployment prometheus-k8s 2094801 - Kuryr controller keeps restarting when handling IPs with leading zeros 2094806 - Machine API oVirt component should use K8s 1.24 dependencies 2094816 - Kuryr controller restarts when over quota 2094833 - Repository overview page does not show default PipelineRun template for developer user 2094857 - CloudShellTerminal loops indefinitely if DevWorkspace CR goes into failed state 2094864 - Rebase CAPG to latest changes 2094866 - oc-mirror does not always delete all manifests associated with an image during pruning 2094896 - Run 'openshift-install agent create image' has segfault exception if cluster-manifests directory missing 2094902 - Fix installer cross-compiling 2094932 - MGMT-10403 Ingress should enable single-node cluster expansion on upgraded clusters 2095049 - managed-csi StorageClass does not create PVs 2095071 - Backend tests fail after devfile registry update 2095083 - Observe > Dashboards: Graphs may change a lot on automatic refresh 2095110 - [ovn] northd container termination script must use bash 2095113 - [ovnkube] bump to openvswitch2.17-2.17.0-22.el8fdp 2095226 - Added changes to verify cloud connection and DHCP services quota of a powervs 
instance 2095229 - ingress-operator pod in CrashLoopBackOff in 4.11 after upgrade starting in 4.6 due to go panic 2095231 - Kafka Sink sidebar in topology is empty 2095247 - Event sink form doesn't show channel as sink until app is refreshed 2095248 - [vSphere-CSI-Driver] does not report volume count limits correctly caused pod with multi volumes maybe schedule to not satisfied volume count node 2095256 - Samples Owner needs to be Updated 2095264 - ovs-configuration.service fails with Error: Failed to modify connection 'ovs-if-br-ex': failed to update connection: error writing to file '/etc/NetworkManager/systemConnectionsMerged/ovs-if-br-ex.nmconnection' 2095362 - oVirt CSI driver operator should use latest go-ovirt-client 2095574 - e2e-agnostic CI job fails 2095687 - Debug Container shown for build logs and on click ui breaks 2095703 - machinedeletionhooks doesn't work in vsphere cluster and BM cluster 2095716 - New PSA component for Pod Security Standards enforcement is refusing openshift-operators ns 2095756 - CNO panics with concurrent map read/write 2095772 - Memory requests for ovnkube-master containers are over-sized 2095917 - Nutanix set osDisk with diskSizeGB rather than diskSizeMiB 2095941 - DNS Traffic not kept local to zone or node when Calico SDN utilized 2096053 - Builder Image icons in Git Import flow are hard to see in Dark mode 2096226 - crio fails to bind to tentative IP, causing service failure since RHOCS was rebased on RHEL 8.6 2096315 - NodeClockNotSynchronising alert's severity should be critical 2096350 - Web console doesn't display webhook errors for upgrades 2096352 - Collect whole journal in gather 2096380 - acm-simple-kmod references deprecated KVC example 2096392 - Topology node icons are not properly visible in Dark mode 2096394 - Add page Card items background color does not match with column background color in Dark mode 2096413 - br-ex not created due to default bond interface having a different mac address than expected 2096496 - 
FIPS issue on OCP SNO with RT Kernel via performance profile 2096605 - [vsphere] no validation checking for diskType 2096691 - [Alibaba 4.11] Specifying ResourceGroup id in install-config.yaml, new PVs are still getting created in default ResourceGroups 2096855 - 'oc adm release new'
failed with error when use an existing multi-arch release image as input 2096905 - Openshift installer should not use the prism client embedded in nutanix terraform provider 2096908 - Dark theme issue in pipeline builder, Helm rollback form, and Git import 2097000 - KafkaConnections disappear from Topology after creating KafkaSink in Topology 2097043 - No clean way to specify operand issues to KEDA OLM operator 2097047 - MetalLB: matchExpressions used in CR like L2Advertisement, BGPAdvertisement, BGPPeers allow duplicate entries 2097067 - ClusterVersion history pruner does not always retain initial completed update entry 2097153 - poor performance on API call to vCenter ListTags with thousands of tags 2097186 - PSa autolabeling in 4.11 env upgraded from 4.10 does not work due to missing RBAC objects 2097239 - Change Lower CPU limits for Power VS cloud 2097246 - Kuryr: verify and unit jobs failing due to upstream OpenStack dropping py36 support 2097260 - openshift-install create manifests failed for Power VS platform 2097276 - MetalLB CI deploys the operator via manifests and not using the csv 2097282 - chore: update external-provisioner to the latest upstream release 2097283 - chore: update external-snapshotter to the latest upstream release 2097284 - chore: update external-attacher to the latest upstream release 2097286 - chore: update node-driver-registrar to the latest upstream release 2097334 - oc plugin help shows 'kubectl' 2097346 - Monitoring must-gather doesn't seem to be working anymore in 4.11 2097400 - Shared Resource CSI Driver needs additional permissions for validation webhook 2097454 - Placeholder bug for OCP 4.11.0 metadata release 2097503 - chore: rebase against latest external-resizer 2097555 - IngressControllersNotUpgradeable: load balancer service has been modified; changes must be reverted before upgrading 2097607 - Add Power VS support to Webhooks tests in actuator e2e test 2097685 - Ironic-agent can't restart because of existing container 
2097716 - settings under httpConfig is dropped with AlertmanagerConfig v1beta1 2097810 - Required Network tools missing for Testing e2e PTP 2097832 - clean up unused IPv6DualStackNoUpgrade feature gate 2097940 - openshift-install destroy cluster traps if vpcRegion not specified 2097954 - 4.11 installation failed at monitoring and network clusteroperators with error "conmon: option parsing failed: Unknown option --log-global-size-max" making all jobs failing 2098172 - oc-mirror does not validate the registry in the storage config 2098175 - invalid license in python-dataclasses-0.8-2.el8 spec 2098177 - python-pint-0.10.1-2.el8 has unused Patch0 in spec file 2098242 - typo in SRO specialresourcemodule 2098243 - Add error check to Platform create for Power VS 2098392 - [OCP 4.11] Ironic cannot match "wwn" rootDeviceHint for a multipath device 2098508 - Control-plane-machine-set-operator report panic 2098610 - No need to check the push permission with 'manifests-only' option 2099293 - oVirt cluster API provider should use latest go-ovirt-client 2099330 - Edit application grouping is shown to user with view only access in a cluster 2099340 - CAPI e2e tests for AWS are missing 2099357 - ovn-kubernetes needs explicit RBAC coordination leases for 1.24 bump 2099358 - Dark mode+Topology update: Unexpected selected+hover border and background colors for app groups 2099528 - Layout issue: No spacing in delete modals 2099561 - Prometheus returns HTTP 500 error on /favicon.ico 2099582 - Format and update Repository overview content 2099611 - Failures on etcd-operator watch channels 2099637 - Should print error when using '--keep-manifest-list=false' for a manifest-list image 2099654 - Topology performance: Endless rerender loop when showing a Http EventSink (KameletBinding) 2099668 - KubeControllerManager should degrade when GC stops working 2099695 - Update CAPG after rebase 2099751 - specialresourcemodule stacktrace while looping over build status 2099755 - EgressIP node's mgmtIP 
reachability configuration option 2099763 - Update icons for event sources and sinks in topology, Add page, and context menu 2099811 - UDP Packet loss in OpenShift using IPv6 [upcall] 2099821 - exporting a pointer for the loop variable 2099875 - The speaker won't start if there's another component on the host listening on 8080 2099899 - oc-mirror looks for layers in the wrong repository when searching for release images during publishing 2099928 - [FJ OCP4.11 Bug]: Add unit tests to image_customization_test file 2099968 - [Azure-File-CSI] failed to provision volume in ARO cluster 2100001 - Sync upstream v1.22.0 downstream 2100007 - Run bundle-upgrade failed from the traditional File-Based Catalog installed operator 2100033 - OCP 4.11 IPI - Some csr remain "Pending" post deployment 2100038 - failure to update special-resource-lifecycle table during update Event 2100079 - SDN needs explicit RBAC coordination leases for 1.24 bump 2100138 - release info --bugs has no differentiator between Jira and Bugzilla 2100155 - kube-apiserver-operator should raise an alert when there is a Pod Security admission violation 2100159 - Dark theme: Build icon for pending status is not inverted in topology sidebar 2100323 - SQLite-based catsrc cannot be ready due to "Error: open ./db-xxxx: permission denied" 2100347 - KASO retains old config values when switching from Medium/Default to empty worker latency profile 2100356 - Remove Condition tab and create option from console as it is deprecated in OSP-1.8 2100439 - [gce-pd] GCE PD in-tree storage plugin tests not running 2100496 - [OCPonRHV]-oVirt API returns affinity groups without a description field 2100507 - Remove redundant log lines from obj_retry.go 2100536 - Update API to allow EgressIP node reachability check 2100601 - Update CNO to allow EgressIP node reachability check 2100643 - [Migration] [GCP]OVN can not rollback to SDN 2100644 - openshift-ansible FTBFS on RHEL8 2100669 - Telemetry should not log the full path if it 
contains a username 2100749 - [OCP 4.11] multipath support needs multipath modules 2100825 - Update machine-api-powervs go modules to latest version 2100841 - tiny openshift-install usability fix for setting KUBECONFIG 2101460 - An etcd member for a new machine was never added to the cluster 2101498 - Revert Bug 2082599: add upper bound to number of failed attempts 2102086 - The base image is still 4.10 for operator-sdk 1.22 2102302 - Dummy bug for 4.10 backports 2102362 - Valid regions should be allowed in GCP install config 2102500 - Kubernetes NMState pods can not evict due to PDB on an SNO cluster 2102639 - Drain happens before other image-registry pod is ready to service requests, causing disruption 2102782 - topolvm-controller get into CrashLoopBackOff few minutes after install 2102834 - [cloud-credential-operator]container has runAsNonRoot and image will run as root 2102947 - [VPA] recommender is logging errors for pods with init containers 2103053 - [4.11] Backport Prow CI improvements from master 2103075 - Listing secrets in all namespaces with a specific labelSelector does not work properly 2103080 - br-ex not created due to default bond interface having a different mac address than expected 2103177 - disabling ipv6 router advertisements using "all" does not disable it on secondary interfaces 2103728 - Carry HAProxy patch 'BUG/MEDIUM: h2: match absolute-path not path-absolute for :path' 2103749 - MachineConfigPool is not getting updated 2104282 - heterogeneous arch: oc adm extract encodes arch specific release payload pullspec rather than the manifestlisted pullspec 2104432 - [dpu-network-operator] Updating images to be consistent with ART 2104552 - kube-controller-manager operator 4.11.0-rc.0 degraded on disabled monitoring stack 2104561 - 4.10 to 4.11 update: Degraded node: unexpected on-disk state: mode mismatch for file: "/etc/crio/crio.conf.d/01-ctrcfg-pidsLimit"; expected: -rw-r--r--/420/0644; received: ----------/0/0 2104589 - must-gather namespace 
should have "privileged" warn and audit pod security labels besides enforce 2104701 - In CI 4.10 HAProxy must-gather takes longer than 10 minutes 2104717 - NetworkPolicies: ovnkube-master pods crashing due to panic: "invalid memory address or nil pointer dereference" 2104727 - Bootstrap node should honor http proxy 2104906 - Uninstall fails with Observed a panic: runtime.boundsError 2104951 - Web console doesn't display webhook errors for upgrades 2104991 - Completed pods may not be correctly cleaned up 2105101 - NodeIP is used instead of EgressIP if egressPod is recreated within 60 seconds 2105106 - co/node-tuning: Waiting for 15/72 Profiles to be applied 2105146 - Degraded=True noise with: UpgradeBackupControllerDegraded: unable to retrieve cluster version, no completed update was found in cluster version status history 2105167 - BuildConfig throws error when using a label with a / in it 2105334 - vmware-vsphere-csi-driver-controller can't use host port error on e2e-vsphere-serial 2105382 - Add a validation webhook for Nutanix machine provider spec in Machine API Operator 2105468 - The ccoctl does not seem to know how to leverage the VM's service account to talk to GCP APIs. 2105937 - telemeter golangci-lint outdated blocking ART PRs that update to Go1.18 2106051 - Unable to deploy acm-ice using latest SRO 4.11 build 2106058 - vSphere defaults to SecureBoot on; breaks installation of out-of-tree drivers [4.11.0] 2106062 - [4.11] Bootimage bump tracker 2106116 - IngressController spec.tuningOptions.healthCheckInterval validation allows invalid values such as "0abc" 2106163 - Samples ImageStreams vs. registry.redhat.io: unsupported: V2 schema 1 manifest digests are no longer supported for image pulls 2106313 - bond-cni: backport bond-cni GA items to 4.11 2106543 - Typo in must-gather release-4.10 2106594 - crud/other-routes.spec.ts Cypress test failing at a high rate in CI 2106723 - [4.11] Upgrade from 4.11.0-rc0 -> 4.11.0-rc.1 failed. 
rpm-ostree status shows No space left on device 2106855 - [4.11.z] externalTrafficPolicy=Local is not working in local gateway mode if ovnkube-node is restarted 2107493 - ReplicaSet prometheus-operator-admission-webhook has timed out progressing 2107501 - metallb greenwave tests failure 2107690 - Driver Container builds fail with "error determining starting point for build: no FROM statement found" 2108175 - etcd backup seems to not be triggered in 4.10.18-->4.10.20 upgrade 2108617 - [oc adm release] extraction of the installer against a manifestlisted payload referenced by tag leads to a bad release image reference 2108686 - rpm-ostreed: start limit hit easily 2110505 - [Upgrade]deployment openshift-machine-api/machine-api-operator has a replica failure FailedCreate 2110715 - openshift-controller-manager(-operator) namespace should clear run-level annotations 2111055 - dummy bug for 4.10.z bz2110938
References:
https://access.redhat.com/security/cve/CVE-2018-25009
https://access.redhat.com/security/cve/CVE-2018-25010
https://access.redhat.com/security/cve/CVE-2018-25012
https://access.redhat.com/security/cve/CVE-2018-25013
https://access.redhat.com/security/cve/CVE-2018-25014
https://access.redhat.com/security/cve/CVE-2018-25032
https://access.redhat.com/security/cve/CVE-2019-5827
https://access.redhat.com/security/cve/CVE-2019-13750
https://access.redhat.com/security/cve/CVE-2019-13751
https://access.redhat.com/security/cve/CVE-2019-17594
https://access.redhat.com/security/cve/CVE-2019-17595
https://access.redhat.com/security/cve/CVE-2019-18218
https://access.redhat.com/security/cve/CVE-2019-19603
https://access.redhat.com/security/cve/CVE-2019-20838
https://access.redhat.com/security/cve/CVE-2020-13435
https://access.redhat.com/security/cve/CVE-2020-14155
https://access.redhat.com/security/cve/CVE-2020-17541
https://access.redhat.com/security/cve/CVE-2020-19131
https://access.redhat.com/security/cve/CVE-2020-24370
https://access.redhat.com/security/cve/CVE-2020-28493
https://access.redhat.com/security/cve/CVE-2020-35492
https://access.redhat.com/security/cve/CVE-2020-36330
https://access.redhat.com/security/cve/CVE-2020-36331
https://access.redhat.com/security/cve/CVE-2020-36332
https://access.redhat.com/security/cve/CVE-2021-3481
https://access.redhat.com/security/cve/CVE-2021-3580
https://access.redhat.com/security/cve/CVE-2021-3634
https://access.redhat.com/security/cve/CVE-2021-3672
https://access.redhat.com/security/cve/CVE-2021-3695
https://access.redhat.com/security/cve/CVE-2021-3696
https://access.redhat.com/security/cve/CVE-2021-3697
https://access.redhat.com/security/cve/CVE-2021-3737
https://access.redhat.com/security/cve/CVE-2021-4115
https://access.redhat.com/security/cve/CVE-2021-4156
https://access.redhat.com/security/cve/CVE-2021-4189
https://access.redhat.com/security/cve/CVE-2021-20095
https://access.redhat.com/security/cve/CVE-2021-20231
https://access.redhat.com/security/cve/CVE-2021-20232
https://access.redhat.com/security/cve/CVE-2021-23177
https://access.redhat.com/security/cve/CVE-2021-23566
https://access.redhat.com/security/cve/CVE-2021-23648
https://access.redhat.com/security/cve/CVE-2021-25219
https://access.redhat.com/security/cve/CVE-2021-31535
https://access.redhat.com/security/cve/CVE-2021-31566
https://access.redhat.com/security/cve/CVE-2021-36084
https://access.redhat.com/security/cve/CVE-2021-36085
https://access.redhat.com/security/cve/CVE-2021-36086
https://access.redhat.com/security/cve/CVE-2021-36087
https://access.redhat.com/security/cve/CVE-2021-38185
https://access.redhat.com/security/cve/CVE-2021-38593
https://access.redhat.com/security/cve/CVE-2021-40528
https://access.redhat.com/security/cve/CVE-2021-41190
https://access.redhat.com/security/cve/CVE-2021-41617
https://access.redhat.com/security/cve/CVE-2021-42771
https://access.redhat.com/security/cve/CVE-2021-43527
https://access.redhat.com/security/cve/CVE-2021-43818
https://access.redhat.com/security/cve/CVE-2021-44225
https://access.redhat.com/security/cve/CVE-2021-44906
https://access.redhat.com/security/cve/CVE-2022-0235
https://access.redhat.com/security/cve/CVE-2022-0778
https://access.redhat.com/security/cve/CVE-2022-1012
https://access.redhat.com/security/cve/CVE-2022-1215
https://access.redhat.com/security/cve/CVE-2022-1271
https://access.redhat.com/security/cve/CVE-2022-1292
https://access.redhat.com/security/cve/CVE-2022-1586
https://access.redhat.com/security/cve/CVE-2022-1621
https://access.redhat.com/security/cve/CVE-2022-1629
https://access.redhat.com/security/cve/CVE-2022-1706
https://access.redhat.com/security/cve/CVE-2022-1729
https://access.redhat.com/security/cve/CVE-2022-2068
https://access.redhat.com/security/cve/CVE-2022-2097
https://access.redhat.com/security/cve/CVE-2022-21698
https://access.redhat.com/security/cve/CVE-2022-22576
https://access.redhat.com/security/cve/CVE-2022-23772
https://access.redhat.com/security/cve/CVE-2022-23773
https://access.redhat.com/security/cve/CVE-2022-23806
https://access.redhat.com/security/cve/CVE-2022-24407
https://access.redhat.com/security/cve/CVE-2022-24675
https://access.redhat.com/security/cve/CVE-2022-24903
https://access.redhat.com/security/cve/CVE-2022-24921
https://access.redhat.com/security/cve/CVE-2022-25313
https://access.redhat.com/security/cve/CVE-2022-25314
https://access.redhat.com/security/cve/CVE-2022-26691
https://access.redhat.com/security/cve/CVE-2022-26945
https://access.redhat.com/security/cve/CVE-2022-27191
https://access.redhat.com/security/cve/CVE-2022-27774
https://access.redhat.com/security/cve/CVE-2022-27776
https://access.redhat.com/security/cve/CVE-2022-27782
https://access.redhat.com/security/cve/CVE-2022-28327
https://access.redhat.com/security/cve/CVE-2022-28733
https://access.redhat.com/security/cve/CVE-2022-28734
https://access.redhat.com/security/cve/CVE-2022-28735
https://access.redhat.com/security/cve/CVE-2022-28736
https://access.redhat.com/security/cve/CVE-2022-28737
https://access.redhat.com/security/cve/CVE-2022-29162
https://access.redhat.com/security/cve/CVE-2022-29810
https://access.redhat.com/security/cve/CVE-2022-29824
https://access.redhat.com/security/cve/CVE-2022-30321
https://access.redhat.com/security/cve/CVE-2022-30322
https://access.redhat.com/security/cve/CVE-2022-30323
https://access.redhat.com/security/cve/CVE-2022-32250
https://access.redhat.com/security/updates/classification/#important
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2022 Red Hat, Inc.
--
RHSA-announce mailing list
RHSA-announce@redhat.com
https://listman.redhat.com/mailman/listinfo/rhsa-announce
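The database record that follows carries both CVSS v2 and v3.1 vector strings for CVE-2018-25014, alongside the NVD base score of 9.8 (CRITICAL). As a sanity check, that v3.1 score can be recomputed from the vector string alone. The sketch below implements the CVSS v3.1 base-score equations for the Scope:Unchanged case only, with metric weights taken from the CVSS v3.1 specification; it is a minimal illustration, not a full scorer:

```python
import math

# CVSS v3.1 metric weights (CVSS v3.1 specification, Scope: Unchanged).
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2}   # Attack Vector
AC = {"L": 0.77, "H": 0.44}                        # Attack Complexity
PR = {"N": 0.85, "L": 0.62, "H": 0.27}             # Privileges Required
UI = {"N": 0.85, "R": 0.62}                        # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}             # Confidentiality/Integrity/Availability

def roundup(x):
    # CVSS "Roundup": smallest number with one decimal place >= x.
    return math.ceil(x * 10) / 10

def base_score(vector):
    # Parse e.g. "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H"
    m = dict(part.split(":") for part in vector.split("/")[1:])
    assert m["S"] == "U", "this sketch handles Scope:Unchanged only"
    iss = 1 - (1 - CIA[m["C"]]) * (1 - CIA[m["I"]]) * (1 - CIA[m["A"]])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[m["AV"]] * AC[m["AC"]] * PR[m["PR"]] * UI[m["UI"]]
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

print(base_score("CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H"))  # 9.8
```

Running it on the vector stored in this record reproduces the 9.8 CRITICAL rating, which is a quick way to cross-check imported CVSS data against its vector string.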
{ "@context": { "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#", "affected_products": { "@id": "https://www.variotdbs.pl/ref/affected_products" }, "configurations": { "@id": "https://www.variotdbs.pl/ref/configurations" }, "credits": { "@id": "https://www.variotdbs.pl/ref/credits" }, "cvss": { "@id": "https://www.variotdbs.pl/ref/cvss/" }, "description": { "@id": "https://www.variotdbs.pl/ref/description/" }, "exploit_availability": { "@id": "https://www.variotdbs.pl/ref/exploit_availability/" }, "external_ids": { "@id": "https://www.variotdbs.pl/ref/external_ids/" }, "iot": { "@id": "https://www.variotdbs.pl/ref/iot/" }, "iot_taxonomy": { "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/" }, "patch": { "@id": "https://www.variotdbs.pl/ref/patch/" }, "problemtype_data": { "@id": "https://www.variotdbs.pl/ref/problemtype_data/" }, "references": { "@id": "https://www.variotdbs.pl/ref/references/" }, "sources": { "@id": "https://www.variotdbs.pl/ref/sources/" }, "sources_release_date": { "@id": "https://www.variotdbs.pl/ref/sources_release_date/" }, "sources_update_date": { "@id": "https://www.variotdbs.pl/ref/sources_update_date/" }, "threat_type": { "@id": "https://www.variotdbs.pl/ref/threat_type/" }, "title": { "@id": "https://www.variotdbs.pl/ref/title/" }, "type": { "@id": "https://www.variotdbs.pl/ref/type/" } }, "@id": "https://www.variotdbs.pl/vuln/VAR-202105-1469", "affected_products": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/affected_products#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "model": "enterprise linux", "scope": "eq", "trust": 1.0, "vendor": "redhat", "version": "8.0" }, { "model": "enterprise linux", "scope": "eq", "trust": 1.0, "vendor": "redhat", "version": "7.0" }, { "model": "libwebp", "scope": "lt", "trust": 1.0, "vendor": "webmproject", "version": "1.0.1" }, { 
"model": "ios", "scope": null, "trust": 0.8, "vendor": "\u30a2\u30c3\u30d7\u30eb", "version": null }, { "model": "ipados", "scope": null, "trust": 0.8, "vendor": "\u30a2\u30c3\u30d7\u30eb", "version": null }, { "model": "libwebp", "scope": null, "trust": 0.8, "vendor": "the webm", "version": null }, { "model": "red hat enterprise linux", "scope": null, "trust": 0.8, "vendor": "\u30ec\u30c3\u30c9\u30cf\u30c3\u30c8", "version": null }, { "model": "gnu/linux", "scope": null, "trust": 0.8, "vendor": "debian", "version": null }, { "model": "ontap select deploy administration utility", "scope": null, "trust": 0.8, "vendor": "netapp", "version": null } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2018-016583" }, { "db": "NVD", "id": "CVE-2018-25014" } ] }, "configurations": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/configurations#", "children": { "@container": "@list" }, "cpe_match": { "@container": "@list" }, "data": { "@container": "@list" }, "nodes": { "@container": "@list" } }, "data": [ { "CVE_data_version": "4.0", "nodes": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:webmproject:libwebp:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "1.0.1", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux:7.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux:8.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" } ] } ], "sources": [ { "db": "NVD", "id": "CVE-2018-25014" } ] }, "credits": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/credits#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Red Hat", "sources": [ { "db": "PACKETSTORM", "id": "163028" }, { "db": "PACKETSTORM", "id": "165287" }, { "db": "PACKETSTORM", "id": "164842" }, { "db": "PACKETSTORM", "id": "164967" }, { "db": "PACKETSTORM", "id": "168042" } ], 
"trust": 0.5 }, "cve": "CVE-2018-25014", "cvss": { "@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2" }, "cvssV3": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, "severity": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": "https://www.variotdbs.pl/ref/cvss/severity" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "cvssV2": [ { "acInsufInfo": false, "accessComplexity": "LOW", "accessVector": "NETWORK", "authentication": "NONE", "author": "NVD", "availabilityImpact": "PARTIAL", "baseScore": 7.5, "confidentialityImpact": "PARTIAL", "exploitabilityScore": 10.0, "impactScore": 6.4, "integrityImpact": "PARTIAL", "obtainAllPrivilege": false, "obtainOtherPrivilege": false, "obtainUserPrivilege": false, "severity": "HIGH", "trust": 1.0, "userInteractionRequired": false, "vectorString": "AV:N/AC:L/Au:N/C:P/I:P/A:P", "version": "2.0" }, { "acInsufInfo": null, "accessComplexity": "Low", "accessVector": "Network", "authentication": "None", "author": "NVD", "availabilityImpact": "Partial", "baseScore": 7.5, "confidentialityImpact": "Partial", "exploitabilityScore": null, "id": "CVE-2018-25014", "impactScore": null, "integrityImpact": "Partial", "obtainAllPrivilege": null, "obtainOtherPrivilege": null, "obtainUserPrivilege": null, "severity": "High", "trust": 0.9, "userInteractionRequired": null, "vectorString": "AV:N/AC:L/Au:N/C:P/I:P/A:P", "version": "2.0" }, { "accessComplexity": "LOW", "accessVector": "NETWORK", "authentication": "NONE", "author": "VULHUB", "availabilityImpact": "PARTIAL", "baseScore": 7.5, "confidentialityImpact": "PARTIAL", "exploitabilityScore": 10.0, "id": 
"VHN-391906", "impactScore": 6.4, "integrityImpact": "PARTIAL", "severity": "HIGH", "trust": 0.1, "vectorString": "AV:N/AC:L/AU:N/C:P/I:P/A:P", "version": "2.0" } ], "cvssV3": [ { "attackComplexity": "LOW", "attackVector": "NETWORK", "author": "NVD", "availabilityImpact": "HIGH", "baseScore": 9.8, "baseSeverity": "CRITICAL", "confidentialityImpact": "HIGH", "exploitabilityScore": 3.9, "impactScore": 5.9, "integrityImpact": "HIGH", "privilegesRequired": "NONE", "scope": "UNCHANGED", "trust": 1.0, "userInteraction": "NONE", "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H", "version": "3.1" }, { "attackComplexity": "Low", "attackVector": "Network", "author": "NVD", "availabilityImpact": "High", "baseScore": 9.8, "baseSeverity": "Critical", "confidentialityImpact": "High", "exploitabilityScore": null, "id": "CVE-2018-25014", "impactScore": null, "integrityImpact": "High", "privilegesRequired": "None", "scope": "Unchanged", "trust": 0.8, "userInteraction": "None", "vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H", "version": "3.0" } ], "severity": [ { "author": "NVD", "id": "CVE-2018-25014", "trust": 1.8, "value": "CRITICAL" }, { "author": "CNNVD", "id": "CNNVD-202105-1379", "trust": 0.6, "value": "CRITICAL" }, { "author": "VULHUB", "id": "VHN-391906", "trust": 0.1, "value": "HIGH" }, { "author": "VULMON", "id": "CVE-2018-25014", "trust": 0.1, "value": "HIGH" } ] } ], "sources": [ { "db": "VULHUB", "id": "VHN-391906" }, { "db": "VULMON", "id": "CVE-2018-25014" }, { "db": "JVNDB", "id": "JVNDB-2018-016583" }, { "db": "CNNVD", "id": "CNNVD-202105-1379" }, { "db": "NVD", "id": "CVE-2018-25014" } ] }, "description": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "A use of uninitialized value was found in libwebp in versions before 1.0.1 in ReadSymbol(). 
libwebp There is a vulnerability in the use of uninitialized resources.Information is obtained, information is tampered with, and service is disrupted (DoS) It may be put into a state. Versions of libwebp prior to 1.0.1 have security vulnerabilities. The vulnerability stems from the use of a separate variable in the ReadSymbol function. The biggest threats to this vulnerability are data confidentiality and integrity and system availability. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\nAPPLE-SA-2021-07-21-1 iOS 14.7 and iPadOS 14.7\n\niOS 14.7 and iPadOS 14.7 addresses the following issues. \nInformation about the security content is also available at\nhttps://support.apple.com/HT212601. \n\niOS 14.7 released July 19, 2021; iPadOS 14.7 released July 21, 2021\n\nActionKit\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A shortcut may be able to bypass Internet permission\nrequirements\nDescription: An input validation issue was addressed with improved\ninput validation. \nCVE-2021-30763: Zachary Keffaber (@QuickUpdate5)\n\nAudio\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A local attacker may be able to cause unexpected application\ntermination or arbitrary code execution\nDescription: This issue was addressed with improved checks. \nCVE-2021-30781: tr3e\n\nAVEVideoEncoder\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: An application may be able to execute arbitrary code with\nkernel privileges\nDescription: A memory corruption issue was addressed with improved\nstate management. 
\nCVE-2021-30748: George Nosenko\n\nCoreAudio\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing a maliciously crafted audio file may lead to\narbitrary code execution\nDescription: A memory corruption issue was addressed with improved\nstate management. \nCVE-2021-30775: JunDong Xie of Ant Security Light-Year Lab\n\nCoreAudio\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Playing a malicious audio file may lead to an unexpected\napplication termination\nDescription: A logic issue was addressed with improved validation. \nCVE-2021-30776: JunDong Xie of Ant Security Light-Year Lab\n\nCoreGraphics\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Opening a maliciously crafted PDF file may lead to an\nunexpected application termination or arbitrary code execution\nDescription: A race condition was addressed with improved state\nhandling. \nCVE-2021-30786: ryuzaki\n\nCoreText\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing a maliciously crafted font file may lead to\narbitrary code execution\nDescription: An out-of-bounds read was addressed with improved input\nvalidation. 
\nCVE-2021-30789: Mickey Jin (@patch1t) of Trend Micro, Sunglin of\nKnownsec 404 team\n\nCrash Reporter\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A malicious application may be able to gain root privileges\nDescription: A logic issue was addressed with improved validation. \nCVE-2021-30774: Yizhuo Wang of Group of Software Security In\nProgress (G.O.S.S.I.P) at Shanghai Jiao Tong University\n\nCVMS\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A malicious application may be able to gain root privileges\nDescription: An out-of-bounds write issue was addressed with improved\nbounds checking. \nCVE-2021-30780: Tim Michaud(@TimGMichaud) of Zoom Video\nCommunications\n\ndyld\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A sandboxed process may be able to circumvent sandbox\nrestrictions\nDescription: A logic issue was addressed with improved validation. \nCVE-2021-30768: Linus Henze (pinauten.de)\n\nFind My\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A malicious application may be able to access Find My data\nDescription: A permissions issue was addressed with improved\nvalidation. 
\nCVE-2021-30804: Csaba Fitzl (@theevilbit) of Offensive Security\n\nFontParser\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing a maliciously crafted font file may lead to\narbitrary code execution\nDescription: An integer overflow was addressed through improved input\nvalidation. \nCVE-2021-30760: Sunglin of Knownsec 404 team\n\nFontParser\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing a maliciously crafted tiff file may lead to a\ndenial-of-service or potentially disclose memory contents\nDescription: This issue was addressed with improved checks. \nCVE-2021-30788: tr3e working with Trend Micro Zero Day Initiative\n\nFontParser\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing a maliciously crafted font file may lead to\narbitrary code execution\nDescription: A stack overflow was addressed with improved input\nvalidation. \nCVE-2021-30759: hjy79425575 working with Trend Micro Zero Day\nInitiative\n\nIdentity Service\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A malicious application may be able to bypass code signing\nchecks\nDescription: An issue in code signature validation was addressed with\nimproved checks. 
\nCVE-2021-30773: Linus Henze (pinauten.de)\n\nImage Processing\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: A use after free issue was addressed with improved\nmemory management. \nCVE-2021-30802: Matthew Denton of Google Chrome Security\n\nImageIO\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing a maliciously crafted image may lead to arbitrary\ncode execution\nDescription: This issue was addressed with improved checks. \nCVE-2021-30779: Jzhu, Ye Zhang(@co0py_Cat) of Baidu Security\n\nImageIO\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing a maliciously crafted image may lead to arbitrary\ncode execution\nDescription: A buffer overflow was addressed with improved bounds\nchecking. \nCVE-2021-30785: CFF of Topsec Alpha Team, Mickey Jin (@patch1t) of\nTrend Micro\n\nKernel\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A malicious attacker with arbitrary read and write capability\nmay be able to bypass Pointer Authentication\nDescription: A logic issue was addressed with improved state\nmanagement. 
\nCVE-2021-30769: Linus Henze (pinauten.de)\n\nKernel\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: An attacker that has already achieved kernel code execution\nmay be able to bypass kernel memory mitigations\nDescription: A logic issue was addressed with improved validation. \nCVE-2021-30770: Linus Henze (pinauten.de)\n\nlibxml2\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A remote attacker may be able to cause arbitrary code\nexecution\nDescription: This issue was addressed with improved checks. \nCVE-2021-3518\n\nMeasure\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Multiple issues in libwebp\nDescription: Multiple issues were addressed by updating to version\n1.2.0. \nCVE-2018-25010\nCVE-2018-25011\nCVE-2018-25014\nCVE-2020-36328\nCVE-2020-36329\nCVE-2020-36330\nCVE-2020-36331\n\nModel I/O\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing a maliciously crafted image may lead to a denial\nof service\nDescription: A logic issue was addressed with improved validation. \nCVE-2021-30796: Mickey Jin (@patch1t) of Trend Micro\n\nModel I/O\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing a maliciously crafted image may lead to arbitrary\ncode execution\nDescription: An out-of-bounds write was addressed with improved input\nvalidation. 
\nCVE-2021-30792: Anonymous working with Trend Micro Zero Day\nInitiative\n\nModel I/O\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing a maliciously crafted file may disclose user\ninformation\nDescription: An out-of-bounds read was addressed with improved bounds\nchecking. \nCVE-2021-30791: Anonymous working with Trend Micro Zero Day\nInitiative\n\nTCC\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A malicious application may be able to bypass certain Privacy\npreferences\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2021-30798: Mickey Jin (@patch1t) of Trend Micro\n\nWebKit\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: A type confusion issue was addressed with improved state\nhandling. \nCVE-2021-30758: Christoph Guttandin of Media Codings\n\nWebKit\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: A use after free issue was addressed with improved\nmemory management. 
\nCVE-2021-30795: Sergei Glazunov of Google Project Zero\n\nWebKit\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing maliciously crafted web content may lead to code\nexecution\nDescription: This issue was addressed with improved checks. \nCVE-2021-30797: Ivan Fratric of Google Project Zero\n\nWebKit\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: Multiple memory corruption issues were addressed with\nimproved memory handling. \nCVE-2021-30799: Sergei Glazunov of Google Project Zero\n\nWi-Fi\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Joining a malicious Wi-Fi network may result in a denial of\nservice or arbitrary code execution\nDescription: This issue was addressed with improved checks. \nCVE-2021-30800: vm_call, Nozhdar Abdulkhaleq Shukri\n\nAdditional recognition\n\nAssets\nWe would like to acknowledge Cees Elzinga for their assistance. \n\nCoreText\nWe would like to acknowledge Mickey Jin (@patch1t) of Trend Micro for\ntheir assistance. \n\nSafari\nWe would like to acknowledge an anonymous researcher for their\nassistance. \n\nSandbox\nWe would like to acknowledge Csaba Fitzl (@theevilbit) of Offensive\nSecurity for their assistance. \n\nInstallation note:\n\nThis update is available through iTunes and Software Update on your\niOS device, and will not appear in your computer\u0027s Software Update\napplication, or in the Apple Downloads site. 
Make sure you have an\nInternet connection and have installed the latest version of iTunes\nfrom https://www.apple.com/itunes/\n\niTunes and Software Update on the device will automatically check\nApple\u0027s update server on its weekly schedule. When an update is\ndetected, it is downloaded and the option to be installed is\npresented to the user when the iOS device is docked. We recommend\napplying the update immediately if possible. Selecting Don\u0027t Install\nwill present the option the next time you connect your iOS device. \nThe automatic update process may take up to a week depending on the\nday that iTunes or the device checks for updates. You may manually\nobtain the update via the Check for Updates button within iTunes, or\nthe Software Update on your device. \n\nTo check that the iPhone, iPod touch, or iPad has been updated:\n* Navigate to Settings\n* Select General\n* Select About\n* The version after applying this update will be \"14.7\"\n\nInformation will also be posted to the Apple Security Updates\nweb site: https://support.apple.com/kb/HT201222\n\nThis message is signed with Apple\u0027s Product Security PGP key,\nand details are available at:\nhttps://www.apple.com/support/security/pgp/\n\n-----BEGIN PGP 
SIGNATURE-----\n\niQIzBAEBCAAdFiEEbURczHs1TP07VIfuZcsbuWJ6jjAFAmD4r8YACgkQZcsbuWJ6\njjB5LBAAkEy25fNpo8rg42bsyJwWsSQQxPN79JFxQ6L8tqdsM+MZk86dUKtsRQ47\nmxarMf4uBwiIOtrGSCGHLIxXAzLqPY47NDhO+ls0dVxGMETkoR/287AeLnw2ITh3\nDM0H/pco4hRhPh8neYTMjNPMAgkepx+r7IqbaHWapn42nRC4/2VkEtVGltVDLs3L\nK0UQP0cjy2w9KvRF33H3uKNCaCTJrVkDBLKWC7rPPpomwp3bfmbQHjs0ixV5Y8l5\n3MfNmCuhIt34zAjVELvbE/PUXgkmsECbXHNZOct7ZLAbceneVKtSmynDtoEN0ajM\nJiJ6j+FCtdfB3xHk3cHqB6sQZm7fDxdK3z91MZvSZwwmdhJeHD/TxcItRlHNOYA1\nFSi0Q954DpIqz3Fs4DGE7Vwz0g5+o5qup8cnw9oLXBdqZwWANuLsQlHlioPbcDhl\nr1DmwtghmDYFUeSMnzHu/iuRepEju+BRMS3ybCm5j+I3kyvAV8pyvqNNRLfJn+w+\nWl/lwXTtXbgsNPR7WJCBJffxB0gOGZaIG1blSGCY89t2if0vD95R5sRsrnaxuqWc\nqmtRdBfbmjxk/G+6t1sd4wFglTNovHiLIHXh17cwdIWMB35yFs7VA35833/rF4Oo\njOF1D12o58uAewxAsK+cTixe7I9U5Awkad2Jz19V3qHnRWGqtVg\\x8e1h\n-----END PGP SIGNATURE-----\n\n\n. Relevant releases/architectures:\n\nRed Hat Enterprise Linux Server (v. 7) - ppc64, ppc64le, s390x, x86_64\nRed Hat Enterprise Linux Server Optional (v. 7) - noarch\nRed Hat Enterprise Linux Workstation (v. 7) - x86_64\nRed Hat Enterprise Linux Workstation Optional (v. 7) - noarch\n\n3. Description:\n\nThe Qt Image Formats in an add-on module for the core Qt Gui library that\nprovides support for additional image formats including MNG, TGA, TIFF,\nWBMP, and WebP. \n\nSecurity Fix(es):\n\n* libwebp: heap-based buffer overflow in PutLE16() (CVE-2018-25011)\n\n* libwebp: use of uninitialized value in ReadSymbol() (CVE-2018-25014)\n\n* libwebp: heap-based buffer overflow in WebPDecode*Into functions\n(CVE-2020-36328)\n\n* libwebp: use-after-free in EmitFancyRGB() in dec/io_dec.c\n(CVE-2020-36329)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. 
Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n1956829 - CVE-2020-36328 libwebp: heap-based buffer overflow in WebPDecode*Into functions\n1956843 - CVE-2020-36329 libwebp: use-after-free in EmitFancyRGB() in dec/io_dec.c\n1956919 - CVE-2018-25011 libwebp: heap-based buffer overflow in PutLE16()\n1956927 - CVE-2018-25014 libwebp: use of uninitialized value in ReadSymbol()\n\n6. Package List:\n\nRed Hat Enterprise Linux Server (v. 7):\n\nSource:\nqt5-qtimageformats-5.9.7-2.el7_9.src.rpm\n\nppc64:\nqt5-qtimageformats-5.9.7-2.el7_9.ppc.rpm\nqt5-qtimageformats-5.9.7-2.el7_9.ppc64.rpm\nqt5-qtimageformats-debuginfo-5.9.7-2.el7_9.ppc.rpm\nqt5-qtimageformats-debuginfo-5.9.7-2.el7_9.ppc64.rpm\n\nppc64le:\nqt5-qtimageformats-5.9.7-2.el7_9.ppc64le.rpm\nqt5-qtimageformats-debuginfo-5.9.7-2.el7_9.ppc64le.rpm\n\ns390x:\nqt5-qtimageformats-5.9.7-2.el7_9.s390.rpm\nqt5-qtimageformats-5.9.7-2.el7_9.s390x.rpm\nqt5-qtimageformats-debuginfo-5.9.7-2.el7_9.s390.rpm\nqt5-qtimageformats-debuginfo-5.9.7-2.el7_9.s390x.rpm\n\nx86_64:\nqt5-qtimageformats-5.9.7-2.el7_9.i686.rpm\nqt5-qtimageformats-5.9.7-2.el7_9.x86_64.rpm\nqt5-qtimageformats-debuginfo-5.9.7-2.el7_9.i686.rpm\nqt5-qtimageformats-debuginfo-5.9.7-2.el7_9.x86_64.rpm\n\nRed Hat Enterprise Linux Server Optional (v. 7):\n\nnoarch:\nqt5-qtimageformats-doc-5.9.7-2.el7_9.noarch.rpm\n\nRed Hat Enterprise Linux Workstation (v. 7):\n\nSource:\nqt5-qtimageformats-5.9.7-2.el7_9.src.rpm\n\nx86_64:\nqt5-qtimageformats-5.9.7-2.el7_9.i686.rpm\nqt5-qtimageformats-5.9.7-2.el7_9.x86_64.rpm\nqt5-qtimageformats-debuginfo-5.9.7-2.el7_9.i686.rpm\nqt5-qtimageformats-debuginfo-5.9.7-2.el7_9.x86_64.rpm\n\nRed Hat Enterprise Linux Workstation Optional (v. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1944888 - CVE-2021-21409 netty: Request smuggling via content-length header\n2004133 - CVE-2021-37136 netty-codec: Bzip2Decoder doesn\u0027t allow setting size restrictions for decompressed data\n2004135 - CVE-2021-37137 netty-codec: SnappyFrameDecoder doesn\u0027t restrict chunk length and may buffer skippable chunks in an unnecessary way\n2030932 - CVE-2021-44228 log4j-core: Remote code execution in Log4j 2.x when logs contain an attacker-controlled string value\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nLOG-1775 - [release-5.2] Syslog output is serializing json incorrectly\nLOG-1824 - [release-5.2] Rejected by Elasticsearch and unexpected json-parsing\nLOG-1963 - [release-5.2] CLO panic: runtime error: slice bounds out of range [:-1]\nLOG-1970 - Applying cluster state is causing elasticsearch to hit an issue and become unusable\n\n6. 8) - aarch64, ppc64le, s390x, x86_64\n\n3. Description:\n\nThe libwebp packages provide a library and tools for the WebP graphics\nformat. WebP is an image format with a lossy compression of digital\nphotographic images. WebP consists of a codec based on the VP8 format, and\na container based on the Resource Interchange File Format (RIFF). \nWebmasters, web developers and browser developers can use WebP to compress,\narchive, and distribute digital images more efficiently. 
Bugs fixed (https://bugzilla.redhat.com/):

1956853 - CVE-2020-36330 libwebp: out-of-bounds read in ChunkVerifyAndAssign() in mux/muxread.c
1956856 - CVE-2020-36331 libwebp: out-of-bounds read in ChunkAssignData() in mux/muxinternal.c
1956868 - CVE-2020-36332 libwebp: excessive memory allocation when reading a file
1956917 - CVE-2018-25009 libwebp: out-of-bounds read in WebPMuxCreateInternal
1956918 - CVE-2018-25010 libwebp: out-of-bounds read in ApplyFilter()
1956922 - CVE-2018-25012 libwebp: out-of-bounds read in WebPMuxCreateInternal()
1956926 - CVE-2018-25013 libwebp: out-of-bounds read in ShiftBytes()
1956927 - CVE-2018-25014 libwebp: use of uninitialized value in ReadSymbol()

6. Bugs fixed (https://bugzilla.redhat.com/):

1963232 - CVE-2021-33194 golang: x/net/html: infinite loop in ParseFragment

5. JIRA issues fixed (https://issues.jboss.org/):

LOG-1168 - Disable hostname verification in syslog TLS settings
LOG-1235 - Using HTTPS without a secret does not translate into the correct 'scheme' value in Fluentd
LOG-1375 - ssl_ca_cert should be optional
LOG-1378 - CLO should support sasl_plaintext(Password over http)
LOG-1392 - In fluentd config, flush_interval can't be set with flush_mode=immediate
LOG-1494 - Syslog output is serializing json incorrectly
LOG-1555 - Fluentd logs emit transaction failed: error_class=NoMethodError while forwarding to external syslog server
LOG-1575 - Rejected by Elasticsearch and unexpected json-parsing
LOG-1735 - Regression introducing flush_at_shutdown
LOG-1774 - The collector logs should be excluded in fluent.conf
LOG-1776 - fluentd total_limit_size sets value beyond available space
LOG-1822 - OpenShift Alerting Rules Style-Guide Compliance
LOG-1859 - CLO Should not error and exit early on missing ca-bundle when cluster wide proxy is not enabled
LOG-1862 - Unsupported kafka parameters when enabled Kafka SASL
LOG-1903 - Fix the Display of ClusterLogging type in OLM
LOG-1911 - CLF API changes to Opt-in to multiline error detection
LOG-1918 - Alert `FluentdNodeDown` always firing
LOG-1939 - Opt-in multiline detection breaks cloudwatch forwarding

6. -----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

====================================================================
Red Hat Security Advisory

Synopsis:     Important: OpenShift Container Platform 4.11.0 bug fix and security update
Advisory ID:  RHSA-2022:5069-01
Product:      Red Hat OpenShift Enterprise
Advisory URL: https://access.redhat.com/errata/RHSA-2022:5069
Issue date:   2022-08-10
CVE Names:    CVE-2018-25009 CVE-2018-25010 CVE-2018-25012
              CVE-2018-25013 CVE-2018-25014 CVE-2018-25032
              CVE-2019-5827 CVE-2019-13750 CVE-2019-13751
              CVE-2019-17594 CVE-2019-17595 CVE-2019-18218
              CVE-2019-19603 CVE-2019-20838 CVE-2020-13435
              CVE-2020-14155 CVE-2020-17541 CVE-2020-19131
              CVE-2020-24370 CVE-2020-28493 CVE-2020-35492
              CVE-2020-36330 CVE-2020-36331 CVE-2020-36332
              CVE-2021-3481 CVE-2021-3580 CVE-2021-3634
              CVE-2021-3672 CVE-2021-3695 CVE-2021-3696
              CVE-2021-3697 CVE-2021-3737 CVE-2021-4115
              CVE-2021-4156 CVE-2021-4189 CVE-2021-20095
              CVE-2021-20231 CVE-2021-20232 CVE-2021-23177
              CVE-2021-23566 CVE-2021-23648 CVE-2021-25219
              CVE-2021-31535 CVE-2021-31566 CVE-2021-36084
              CVE-2021-36085 CVE-2021-36086 CVE-2021-36087
              CVE-2021-38185 CVE-2021-38593 CVE-2021-40528
              CVE-2021-41190 CVE-2021-41617 CVE-2021-42771
              CVE-2021-43527 CVE-2021-43818 CVE-2021-44225
              CVE-2021-44906 CVE-2022-0235 CVE-2022-0778
              CVE-2022-1012 CVE-2022-1215 CVE-2022-1271
              CVE-2022-1292 CVE-2022-1586 CVE-2022-1621
              CVE-2022-1629 CVE-2022-1706 CVE-2022-1729
              CVE-2022-2068 CVE-2022-2097 CVE-2022-21698
              CVE-2022-22576 CVE-2022-23772 CVE-2022-23773
              CVE-2022-23806 CVE-2022-24407 CVE-2022-24675
              CVE-2022-24903 CVE-2022-24921 CVE-2022-25313
              CVE-2022-25314 CVE-2022-26691 CVE-2022-26945
              CVE-2022-27191 CVE-2022-27774 CVE-2022-27776
              CVE-2022-27782 CVE-2022-28327 CVE-2022-28733
              CVE-2022-28734 CVE-2022-28735 CVE-2022-28736
              CVE-2022-28737 CVE-2022-29162 CVE-2022-29810
              CVE-2022-29824 CVE-2022-30321 CVE-2022-30322
              CVE-2022-30323 CVE-2022-32250
====================================================================
1. Summary:

Red Hat OpenShift Container Platform release 4.11.0 is now available with
updates to packages and images that fix several bugs and add enhancements.

This release includes a security update for Red Hat OpenShift Container
Platform 4.11.

Red Hat Product Security has rated this update as having a security impact
of Important. A Common Vulnerability Scoring System (CVSS) base score,
which gives a detailed severity rating, is available for each vulnerability
from the CVE link(s) in the References section.

2. Description:

Red Hat OpenShift Container Platform is Red Hat's cloud computing
Kubernetes application platform solution designed for on-premise or private
cloud deployments.

This advisory contains the container images for Red Hat OpenShift Container
Platform 4.11.0. See the following advisory for the RPM packages for this
release:

https://access.redhat.com/errata/RHSA-2022:5068

Space precludes documenting all of the container images in this advisory.
See the following Release Notes documentation, which will be updated
shortly for this release, for details about these changes:

https://docs.openshift.com/container-platform/4.11/release_notes/ocp-4-11-release-notes.html

Security Fix(es):

* go-getter: command injection vulnerability (CVE-2022-26945)
* go-getter: unsafe download (issue 1 of 3) (CVE-2022-30321)
* go-getter: unsafe download (issue 2 of 3) (CVE-2022-30322)
* go-getter: unsafe download (issue 3 of 3) (CVE-2022-30323)
* nanoid: Information disclosure via valueOf() function (CVE-2021-23566)
* sanitize-url: XSS (CVE-2021-23648)
* minimist: prototype pollution (CVE-2021-44906)
* node-fetch: exposure of sensitive information to an unauthorized actor
(CVE-2022-0235)
* prometheus/client_golang: Denial of service using
InstrumentHandlerCounter (CVE-2022-21698)
* golang: crash in a golang.org/x/crypto/ssh server (CVE-2022-27191)
* go-getter: writes SSH credentials into logfile, exposing sensitive
credentials to local users (CVE-2022-29810)
* opencontainers: OCI manifest and index parsing confusion (CVE-2021-41190)

For more details about the security issue(s), including the impact, a CVSS
score, acknowledgments, and other related information, refer to the CVE
page(s) listed in the References section.
You may download the oc tool and use it to inspect release image metadata
as follows:

(For x86_64 architecture)

$ oc adm release info
quay.io/openshift-release-dev/ocp-release:4.11.0-x86_64

The image digest is
sha256:300bce8246cf880e792e106607925de0a404484637627edf5f517375517d54a4

(For aarch64 architecture)

$ oc adm release info
quay.io/openshift-release-dev/ocp-release:4.11.0-aarch64

The image digest is
sha256:29fa8419da2afdb64b5475d2b43dad8cc9205e566db3968c5738e7a91cf96dfe

(For s390x architecture)

$ oc adm release info
quay.io/openshift-release-dev/ocp-release:4.11.0-s390x

The image digest is
sha256:015d6180238b4024d11dfef6751143619a0458eccfb589f2058ceb1a6359dd46

(For ppc64le architecture)

$ oc adm release info
quay.io/openshift-release-dev/ocp-release:4.11.0-ppc64le

The image digest is
sha256:5052f8d5597c6656ca9b6bfd3de521504c79917aa80feb915d3c8546241f86ca

All OpenShift Container Platform 4.11 users are advised to upgrade to these
updated packages and images when they are available in the appropriate
release channel. To check for available updates, use the OpenShift Console
or the CLI oc command. Instructions for upgrading a cluster are available
at
https://docs.openshift.com/container-platform/4.11/updating/updating-cluster-cli.html

3. Solution:

For OpenShift Container Platform 4.11 see the following documentation,
which will be updated shortly for this release, for important instructions
on how to upgrade your cluster and fully apply this asynchronous errata
update:

https://docs.openshift.com/container-platform/4.11/release_notes/ocp-4-11-release-notes.html

Details on how to access this content are available at
https://docs.openshift.com/container-platform/4.11/updating/updating-cluster-cli.html

4.
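The digests quoted above can be used to pin the release image to the exact build this advisory covers, rather than relying on a mutable tag. A minimal shell sketch (not part of the advisory; the digest value is copied from the x86_64 entry above) that checks whether an image reference is pinned by digest:

```shell
# Sketch only: confirm a release image reference is pinned by digest.
# EXPECTED_DIGEST is the x86_64 digest quoted in this advisory.
EXPECTED_DIGEST="sha256:300bce8246cf880e792e106607925de0a404484637627edf5f517375517d54a4"
IMAGE="quay.io/openshift-release-dev/ocp-release@${EXPECTED_DIGEST}"

case "$IMAGE" in
  *@"$EXPECTED_DIGEST") echo "image pinned to expected digest" ;;
  *@sha256:*)           echo "image pinned, but digest differs" ;;
  *)                    echo "image uses a mutable tag" ;;
esac
```

Pulling by `repository@sha256:...` reference makes the registry return exactly the advisory build even if the `4.11.0-x86_64` tag is later repointed.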
Bugs fixed (https://bugzilla.redhat.com/):

1817075 - MCC & MCO don't free leader leases during shut down -> 10 minutes of leader election timeouts
1822752 - cluster-version operator stops applying manifests when blocked by a precondition check
1823143 - oc adm release extract --command, --tools doesn't pull from localregistry when given a localregistry/image
1858418 - [OCPonRHV] OpenShift installer fails when Blank template is missing in oVirt/RHV
1859153 - [AWS] An IAM error occurred occasionally during the installation phase: Invalid IAM Instance Profile name
1896181 - [ovirt] install fails: due to terraform error "Cannot run VM. VM is being updated" on vm resource
1898265 - [OCP 4.5][AWS] Installation failed: error updating LB Target Group
1902307 - [vSphere] cloud labels management via cloud provider makes nodes not ready
1905850 - `oc adm policy who-can` failed to check the `operatorcondition/status` resource
1916279 - [OCPonRHV] Sometimes terraform installation fails on -failed to fetch Cluster(another terraform bug)
1917898 - [ovirt] install fails: due to terraform error "Tag not matched: expect <fault> but got <html>" on vm resource
1918005 - [vsphere] If there are multiple port groups with the same name installation fails
1918417 - IPv6 errors after exiting crictl
1918690 - Should update the KCM resource-graph timely with the latest configure
1919980 - oVirt installer fails due to terraform error "Failed to wait for Templte(...) to become ok"
1921182 - InspectFailed: kubelet Failed to inspect image: rpc error: code = DeadlineExceeded desc = context deadline exceeded
1923536 - Image pullthrough does not pass 429 errors back to capable clients
1926975 - [aws-c2s] kube-apiserver crashloops due to missing cloud config
1928932 - deploy/route_crd.yaml in openshift/router uses deprecated v1beta1 CRD API
1932812 - Installer uses the terraform-provider in the Installer's directory if it exists
1934304 - MemoryPressure Top Pod Consumers seems to be 2x expected value
1943937 - CatalogSource incorrect parsing validation
1944264 - [ovn] CNO should gracefully terminate OVN databases
1944851 - List of ingress routes not cleaned up when routers no longer exist - take 2
1945329 - In k8s 1.21 bump conntrack 'should drop INVALID conntrack entries' tests are disabled
1948556 - Cannot read property 'apiGroup' of undefined error viewing operator CSV
1949827 - Kubelet bound to incorrect IPs, referring to incorrect NICs in 4.5.x
1957012 - Deleting the KubeDescheduler CR does not remove the corresponding deployment or configmap
1957668 - oc login does not show link to console
1958198 - authentication operator takes too long to pick up a configuration change
1958512 - No 1.25 shown in REMOVEDINRELEASE for apis audited with k8s.io/removed-release 1.25 and k8s.io/deprecated true
1961233 - Add CI test coverage for DNS availability during upgrades
1961844 - baremetal ClusterOperator installed by CVO does not have relatedObjects
1965468 - [OSP] Delete volume snapshots based on cluster ID in their metadata
1965934 - can not get new result with "Refresh off" if click "Run queries" again
1965969 - [aws] the public hosted zone id is not correct in the destroy log, while destroying a cluster which is using BYO private hosted zone.
1968253 - GCP CSI driver can provision volume with access mode ROX
1969794 - [OSP] Document how to use image registry PVC backend with custom availability zones
1975543 - [OLM] Remove stale cruft installed by CVO in earlier releases
1976111 - [tracker] multipathd.socket is missing start conditions
1976782 - Openshift registry starts to segfault after S3 storage configuration
1977100 - Pod failed to start with message "set CPU load balancing: readdirent /proc/sys/kernel/sched_domain/cpu66/domain0: no such file or directory"
1978303 - KAS pod logs show: [SHOULD NOT HAPPEN] ...failed to convert new object...CertificateSigningRequest) to smd typed: .status.conditions: duplicate entries for key [type="Approved"]
1978798 - [Network Operator] Upgrade: The configuration to enable network policy ACL logging is missing on the cluster upgraded from 4.7->4.8
1979671 - Warning annotation for pods with cpu requests or limits on single-node OpenShift cluster without workload partitioning
1982737 - OLM does not warn on invalid CSV
1983056 - IP conflict while recreating Pod with fixed name
1984785 - LSO CSV does not contain disconnected annotation
1989610 - Unsupported data types should not be rendered on operand details page
1990125 - co/image-registry is degrade because ImagePrunerDegraded: Job has reached the specified backoff limit
1990384 - 502 error on "Observe -> Alerting" UI after disabled local alertmanager
1992553 - all the alert rules' annotations "summary" and "description" should comply with the OpenShift alerting guidelines
1994117 - Some hardcodes are detected at the code level in orphaned code
1994820 - machine controller doesn't send vCPU quota failed messages to cluster install logs
1995953 - Ingresscontroller change the replicas to scaleup first time will be rolling update for all the ingress pods
1996544 - AWS region ap-northeast-3 is missing in installer prompt
1996638 - Helm operator manager container restart when CR is creating&deleting
1997120 - test_recreate_pod_in_namespace fails - Timed out waiting for namespace
1997142 - OperatorHub: Filtering the OperatorHub catalog is extremely slow
1997704 - [osp][octavia lb] given loadBalancerIP is ignored when creating a LoadBalancer type svc
1999325 - FailedMount MountVolume.SetUp failed for volume "kube-api-access" : object "openshift-kube-scheduler"/"kube-root-ca.crt" not registered
1999529 - Must gather fails to gather logs for all the namespace if server doesn't have volumesnapshotclasses resource
1999891 - must-gather collects backup data even when Pods fails to be created
2000653 - Add hypershift namespace to exclude namespaces list in descheduler configmap
2002009 - IPI Baremetal, qemu-convert takes to long to save image into drive on slow/large disks
2002602 - Storageclass creation page goes blank when "Enable encryption" is clicked if there is a syntax error in the configmap
2002868 - Node exporter not able to scrape OVS metrics
2005321 - Web Terminal is not opened on Stage of DevSandbox when terminal instance is not created yet
2005694 - Removing proxy object takes up to 10 minutes for the changes to propagate to the MCO
2006067 - Objects are not valid as a React child
2006201 - ovirt-csi-driver-node pods are crashing intermittently
2007246 - Openshift Container Platform - Ingress Controller does not set allowPrivilegeEscalation in the router deployment
2007340 - Accessibility issues on topology - list view
2007611 - TLS issues with the internal registry and AWS S3 bucket
2007647 - oc adm release info --changes-from does not show changes in repos that squash-merge
2008486 - Double scroll bar shows up on dragging the task quick search to the bottom
2009345 - Overview page does not load from openshift console for some set of users after upgrading to 4.7.19
2009352 - Add image-registry usage metrics to telemeter
2009845 - Respect overrides changes during installation
2010361 - OpenShift Alerting Rules Style-Guide Compliance
2010364 - OpenShift Alerting Rules Style-Guide Compliance
2010393 - [sig-arch][Late] clients should not use APIs that are removed in upcoming releases [Suite:openshift/conformance/parallel]
2011525 - Rate-limit incoming BFD to prevent ovn-controller DoS
2011895 - Details about cloud errors are missing from PV/PVC errors
2012111 - LSO still try to find localvolumeset which is already deleted
2012969 - need to figure out why osupdatedstart to reboot is zero seconds
2013144 - Developer catalog category links could not be open in a new tab (sharing and open a deep link works fine)
2013461 - Import deployment from Git with s2i expose always port 8080 (Service and Pod template, not Route) if another Route port is selected by the user
2013734 - unable to label downloads route in openshift-console namespace
2013822 - ensure that the `container-tools` content comes from the RHAOS plashets
2014161 - PipelineRun logs are delayed and stuck on a high log volume
2014240 - Image registry uses ICSPs only when source exactly matches image
2014420 - Topology page is crashed
2014640 - Cannot change storage class of boot disk when cloning from template
2015023 - Operator objects are re-created even after deleting it
2015042 - Adding a template from the catalog creates a secret that is not owned by the TemplateInstance
2015356 - Different status shows on VM list page and details page
2015375 - PVC creation for ODF/IBM Flashsystem shows incorrect types
2015459 - [azure][openstack]When image registry configure an invalid proxy, registry pods are CrashLoopBackOff
2015800 - [IBM]Shouldn't change status.storage.bucket and status.storage.resourceKeyCRN when update sepc.stroage,ibmcos with invalid value
2016425 - Adoption controller generating invalid metadata.Labels for an already adopted Subscription resource
2016534 - externalIP does not work when egressIP is also present
2017001 - Topology context menu for Serverless components always open downwards
2018188 - VRRP ID conflict between keepalived-ipfailover and cluster VIPs
2018517 - [sig-arch] events should not repeat pathologically expand_less failures - s390x CI
2019532 - Logger object in LSO does not log source location accurately
2019564 - User settings resources (ConfigMap, Role, RB) should be deleted when a user is deleted
2020483 - Parameter $__auto_interval_period is in Period drop-down list
2020622 - e2e-aws-upi and e2e-azure-upi jobs are not working
2021041 - [vsphere] Not found TagCategory when destroying ipi cluster
2021446 - openshift-ingress-canary is not reporting DEGRADED state, even though the canary route is not available and accessible
2022253 - Web terminal view is broken
2022507 - Pods stuck in OutOfpods state after running cluster-density
2022611 - Remove BlockPools(no use case) and Object(redundat with Overview) tab on the storagesystem page for NooBaa only and remove BlockPools tab for External mode deployment
2022745 - Cluster reader is not able to list NodeNetwork* objects
2023295 - Must-gather tool gathering data from custom namespaces.
2023691 - ClusterIP internalTrafficPolicy does not work for ovn-kubernetes
2024427 - oc completion zsh doesn't auto complete
2024708 - The form for creating operational CRs is badly rendering filed names ("obsoleteCPUs" -> "Obsolete CP Us")
2024821 - [Azure-File-CSI] need more clear info when requesting pvc with volumeMode Block
2024938 - CVE-2021-41190 opencontainers: OCI manifest and index parsing confusion
2025624 - Ingress router metrics endpoint serving old certificates after certificate rotation
2026356 - [IPI on Azure] The bootstrap machine type should be same as master
2026461 - Completed pods in Openshift cluster not releasing IP addresses and results in err: range is full unless manually deleted
2027603 - [UI] Dropdown doesn't close on it's own after arbiter zone selection on 'Capacity and nodes' page
2027613 - Users can't silence alerts from the dev console
2028493 - OVN-migration failed - ovnkube-node: error waiting for node readiness: timed out waiting for the condition
2028532 - noobaa-pg-db-0 pod stuck in Init:0/2
2028821 - Misspelled label in ODF management UI - MCG performance view
2029438 - Bootstrap node cannot resolve api-int because NetworkManager replaces resolv.conf
2029470 - Recover from suddenly appearing old operand revision WAS: kube-scheduler-operator test failure: Node's not achieving new revision
2029797 - Uncaught exception: ResizeObserver loop limit exceeded
2029835 - CSI migration for vSphere: Inline-volume tests failing
2030034 - prometheusrules.openshift.io: dial tcp: lookup prometheus-operator.openshift-monitoring.svc on 172.30.0.10:53: no such host
2030530 - VM created via customize wizard has single quotation marks surrounding its password
2030733 - wrong IP selected to connect to the nodes when ExternalCloudProvider enabled
2030776 - e2e-operator always uses quay master images during presubmit tests
2032559 - CNO allows migration to dual-stack in unsupported configurations
2032717 - Unable to download ignition after coreos-installer install --copy-network
2032924 - PVs are not being cleaned up after PVC deletion
2033482 - [vsphere] two variables in tf are undeclared and get warning message during installation
2033575 - monitoring targets are down after the cluster run for more than 1 day
2033711 - IBM VPC operator needs e2e csi tests for ibmcloud
2033862 - MachineSet is not scaling up due to an OpenStack error trying to create multiple ports with the same MAC address
2034147 - OpenShift VMware IPI Installation fails with Resource customization when corespersocket is unset and vCPU count is not a multiple of 4
2034296 - Kubelet and Crio fails to start during upgrde to 4.7.37
2034411 - [Egress Router] No NAT rules for ipv6 source and destination created in ip6tables-save
2034688 - Allow Prometheus/Thanos to return 401 or 403 when the request isn't authenticated
2034958 - [sig-network] Conntrack should be able to preserve UDP traffic when initial unready endpoints get ready
2035005 - MCD is not always removing in progress taint after a successful update
2035334 - [RFE] [OCPonRHV] Provision machines with preallocated disks
2035899 - Operator-sdk run bundle doesn't support arm64 env
2036202 - Bump podman to >= 3.3.0 so that setup of multiple credentials for a single registry which can be distinguished by their path will work
2036594 - [MAPO] Machine goes to failed state due to a momentary error of the cluster etcd
2036948 - SR-IOV Network Device Plugin should handle offloaded VF instead of supporting only PF
2037190 - dns operator status flaps between True/False/False and True/True/(False|True) after updating dnses.operator.openshift.io/default
2037447 - Ingress Operator is not closing TCP connections.
2037513 - I/O metrics from the Kubernetes/Compute Resources/Cluster Dashboard show as no datapoints found
2037542 - Pipeline Builder footer is not sticky and yaml tab doesn't use full height
2037610 - typo for the Terminated message from thanos-querier pod description info
2037620 - Upgrade playbook should quit directly when trying to upgrade RHEL-7 workers to 4.10
2037625 - AppliedClusterResourceQuotas can not be shown on project overview
2037626 - unable to fetch ignition file when scaleup rhel worker nodes on cluster enabled Tang disk encryption
2037628 - Add test id to kms flows for automation
2037721 - PodDisruptionBudgetAtLimit alert fired in SNO cluster
2037762 - Wrong ServiceMonitor definition is causing failure during Prometheus configuration reload and preventing changes from being applied
2037841 - [RFE] use /dev/ptp_hyperv on Azure/AzureStack
2038115 - Namespace and application bar is not sticky anymore
2038244 - Import from git ignore the given servername and could not validate On-Premises GitHub and BitBucket installations
2038405 - openshift-e2e-aws-workers-rhel-workflow in CI step registry broken
2038774 - IBM-Cloud OVN IPsec fails, IKE UDP ports and ESP protocol not in security group
2039135 - the error message is not clear when using "opm index prune" to prune a file-based index image
2039161 - Note about token for encrypted PVCs should be removed when only cluster wide encryption checkbox is selected
2039253 - ovnkube-node crashes on duplicate endpoints
2039256 - Domain validation fails when TLD contains a digit.
2039277 - Topology list view items are not highlighted on keyboard navigation
2039462 - Application tab in User Preferences dropdown menus are too wide.
2039477 - validation icon is missing from Import from git
2039589 - The toolbox command always ignores [command] the first time
2039647 - Some developer perspective links are not deep-linked causes developer to sometimes delete/modify resources in the wrong project
2040180 - Bug when adding a new table panel to a dashboard for OCP UI with only one value column
2040195 - Ignition fails to enable systemd units with backslash-escaped characters in their names
2040277 - ThanosRuleNoEvaluationFor10Intervals alert description is wrong
2040488 - OpenShift-Ansible BYOH Unit Tests are Broken
2040635 - CPU Utilisation is negative number for "Kubernetes / Compute Resources / Cluster" dashboard
2040654 - 'oc adm must-gather -- some_script' should exit with same non-zero code as the failed 'some_script' exits
2040779 - Nodeport svc not accessible when the backend pod is on a window node
2040933 - OCP 4.10 nightly build will fail to install if multiple NICs are defined on KVM nodes
2041133 - 'oc explain route.status.ingress.conditions' shows type 'Currently only Ready' but actually is 'Admitted'
2041454 - Garbage values accepted for `--reference-policy` in `oc import-image` without any error
2041616 - Ingress operator tries to manage DNS of additional ingresscontrollers that are not under clusters basedomain, which can't work
2041769 - Pipeline Metrics page not showing data for normal user
2041774 - Failing git detection should not recommend Devfiles as import strategy
2041814 - The KubeletConfigController wrongly process multiple confs for a pool
2041940 - Namespace pre-population not happening till a Pod is created
2042027 - Incorrect feedback for "oc label pods --all"
2042348 - Volume ID is missing in output message when expanding volume which is not mounted.
2042446 - CSIWithOldVSphereHWVersion alert recurring despite upgrade to vmx-15
2042501 - use lease for leader election
2042587 - ocm-operator: Improve reconciliation of CA ConfigMaps
2042652 - Unable to deploy hw-event-proxy operator
2042838 - The status of container is not consistent on Container details and pod details page
2042852 - Topology toolbars are unaligned to other toolbars
2042999 - A pod cannot reach kubernetes.default.svc.cluster.local cluster IP
2043035 - Wrong error code provided when request contains invalid argument
2043068 - <x> available of <y> text disappears in Utilization item if x is 0
2043080 - openshift-installer intermittent failure on AWS with Error: InvalidVpcID.NotFound: The vpc ID 'vpc-123456789' does not exist
2043094 - ovnkube-node not deleting stale conntrack entries when endpoints go away
2043118 - Host should transition through Preparing when HostFirmwareSettings changed
2043132 - Add a metric when vsphere csi storageclass creation fails
2043314 - `oc debug node` does not meet compliance requirement
2043336 - Creating multi SriovNetworkNodePolicy cause the worker always be draining
2043428 - Address Alibaba CSI driver operator review comments
2043533 - Update ironic, inspector, and ironic-python-agent to latest bugfix release
2043672 - [MAPO] root volumes not working
2044140 - When 'oc adm upgrade --to-image ...' rejects an update as not recommended, it should mention --allow-explicit-upgrade
2044207 - [KMS] The data in the text box does not get cleared on switching the authentication method
2044227 - Test Managed cluster should only include cluster daemonsets that have maxUnavailable update of 10 or 33 percent fails
2044412 - Topology list misses separator lines and hover effect let the list jump 1px
2044421 - Topology list does not allow selecting an application group anymore
2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor
2044803 - Unify button text style on VM tabs
2044824 - Failing test in periodics: [sig-network] Services should respect internalTrafficPolicy=Local Pod and Node, to Pod (hostNetwork: true) [Feature:ServiceInternalTrafficPolicy] [Skipped:Network/OVNKubernetes] [Suite:openshift/conformance/parallel] [Suite:k8s]
2045065 - Scheduled pod has nodeName changed
2045073 - Bump golang and build images for local-storage-operator
2045087 - Failed to apply sriov policy on intel nics
2045551 - Remove enabled FeatureGates from TechPreviewNoUpgrade
2045559 - API_VIP moved when kube-api container on another master node was stopped
2045577 - [ocp 4.9 | ovn-kubernetes] ovsdb_idl|WARN|transaction error: {"details":"cannot delete Datapath_Binding row 29e48972-xxxx because of 2 remaining reference(s)","error":"referential integrity violation
2045872 - SNO: cluster-policy-controller failed to start due to missing serving-cert/tls.crt
2045880 - CVE-2022-21698 prometheus/client_golang: Denial of service using InstrumentHandlerCounter
2046133 - [MAPO]IPI proxy installation failed
2046156 - Network policy: preview of affected pods for non-admin shows empty popup
2046157 - Still uses pod-security.admission.config.k8s.io/v1alpha1 in admission plugin config
2046191 - Opeartor pod is missing correct qosClass and priorityClass
2046277 - openshift-installer intermittent failure on AWS with "Error: Provider produced inconsistent result after apply" when creating the module.vpc.aws_subnet.private_subnet[0] resource
2046319 - oc debug cronjob command failed with error "unable to extract pod template from type *v1.CronJob".
2046435 - Better Devfile Import Strategy support in the 'Import from Git' flow
2046496 - Awkward wrapping of project toolbar on mobile
2046497 - Re-enable TestMetricsEndpoint test case in console operator e2e tests
2046498 - "All Projects" and "all applications" use different casing on topology page
2046591 - Auto-update boot source is not available while create new template from it
2046594 - "Requested template could not be found" while creating VM from user-created template
2046598 - Auto-update boot source size unit is byte on customize wizard
2046601 - Cannot create VM from template
2046618 - Start last run action should contain current user name in the started-by annotation of the PLR
2046662 - Should upgrade the go version to be 1.17 for example go operator memcached-operator
2047197 - Sould upgrade the operator_sdk.util version to "0.4.0" for the "osdk_metric" module
2047257 - [CP MIGRATION] Node drain failure during control plane node migration
2047277 - Storage status is missing from status card of virtualization overview
2047308 - Remove metrics and events for master port offsets
2047310 - Running VMs per template card needs empty state when no VMs exist
2047320 - New route annotation to show another URL or hide topology URL decorator doesn't work for Knative Services
2047335 - 'oc get project' caused 'Observed a panic: cannot deep copy core.NamespacePhase' when AllRequestBodies is used
2047362 - Removing prometheus UI access breaks origin test
2047445 - ovs-configure mis-detecting the ipv6 status on IPv4 only cluster causing Deployment failure
2047670 - Installer should pre-check that the hosted zone is not associated with the VPC and throw the error message.
2047702 - Issue described on bug #2013528 reproduced: mapi_current_pending_csr is always set to 1 on OpenShift Container Platform 4.8
2047710 - [OVN] ovn-dbchecker CrashLoopBackOff and sbdb jsonrpc unix socket receive error
2047732 - [IBM]Volume is not deleted after destroy cluster
2047741 - openshift-installer intermittent failure on AWS with "Error: Provider produced inconsistent result after apply" when creating the module.masters.aws_network_interface.master[1] resource
2047790 - [sig-network][Feature:Router] The HAProxy router should override the route host for overridden domains with a custom value [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2047799 - release-openshift-ocp-installer-e2e-aws-upi-4.9
2047870 - Prevent redundant queries of BIOS settings in HostFirmwareController
2047895 - Fix architecture naming in oc adm release mirror for aarch64
2047911 - e2e: Mock CSI tests fail on IBM ROKS clusters
2047913 - [sig-network][Feature:Router] The HAProxy router should override the route host for overridden domains with a custom value [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2047925 - [FJ OCP4.10 Bug]: IRONIC_KERNEL_PARAMS does not contain coreos_kernel_params during iPXE boot
2047935 - [4.11] Bootimage bump tracker
2047998 - [alicloud] CCM deploys alibaba-cloud-controller-manager from quay.io/openshift/origin-*
2048059 - Service Level Agreement (SLA) always show 'Unknown'
2048067 - [IPI on Alibabacloud] "Platform Provisioning Check" tells '"ap-southeast-6": enhanced NAT gateway is not supported', which seems false
2048186 - Image registry operator panics when finalizes config deletion
2048214 - Can not push images to image-registry when enabling KMS encryption in AlibabaCloud
2048219 - MetalLB: User should not be allowed add same bgp advertisement twice in BGP address pool
2048221 - Capitalization of titles in the VM details page is inconsistent.
2048222 - [AWS GovCloud] Cluster can not be installed on AWS GovCloud regions via terminal interactive UI.
2048276 - Cypress E2E tests fail due to a typo in test-cypress.sh
2048333 - prometheus-adapter becomes inaccessible during rollout
2048352 - [OVN] node does not recover after NetworkManager restart, NotReady and unreachable
2048442 - [KMS] UI does not have option to specify kube auth path and namespace for cluster wide encryption
2048451 - Custom serviceEndpoints in install-config are reported to be unreachable when environment uses a proxy
2048538 - Network policies are not implemented or updated by OVN-Kubernetes
2048541 - incorrect rbac check for install operator quick starts
2048563 - Leader election conventions for cluster topology
2048575 - IP reconciler cron job failing on single node
2048686 - Check MAC address provided on the install-config.yaml file
2048687 - All bare metal jobs are failing now due to End of Life of centos 8
2048793 - Many Conformance tests are failing in OCP 4.10 with Kuryr
2048803 - CRI-O seccomp profile out of date
2048824 - [IBMCloud] ibm-vpc-block-csi-node does not specify an update strategy, only resource requests, or priority class
2048841 - [ovn] Missing lr-policy-list and snat rules for egressip when new pods are added
2048955 - Alibaba Disk CSI Driver does not have CI
2049073 - AWS EFS CSI driver should use the trusted CA bundle when cluster proxy is configured
2049078 - Bond CNI: Failed to attach Bond NAD to pod
2049108 - openshift-installer intermittent failure on AWS with 'Error: Error waiting for NAT Gateway (nat-xxxxx) to become available'
2049117 - e2e-metal-ipi-serial-ovn-ipv6 is failing frequently
2049133 - oc adm catalog mirror throws 'missing signature key' error when using file://local/index
2049142 - Missing "app" label
2049169 - oVirt CSI driver should use the trusted CA bundle when cluster proxy is configured
2049234 - ImagePull fails with error "unable to pull manifest from example.com/busy.box:v5 invalid reference format"
2049410 - external-dns-operator creates provider section, even when not requested
2049483 - Sidepanel for Connectors/workloads in topology shows invalid tabs
2049613 - MTU migration on SDN IPv4 causes API alerts
2049671 - system:serviceaccount:openshift-cluster-csi-drivers:aws-ebs-csi-driver-operator trying to GET and DELETE /api/v1/namespaces/openshift-cluster-csi-drivers/configmaps/kube-cloud-config which does not exist
2049687 - superfluous apirequestcount entries in audit log
2049775 - cloud-provider-config change not applied when ExternalCloudProvider enabled
2049787 - (dummy bug) ovn-kubernetes ExternalTrafficPolicy still SNATs
2049832 - ContainerCreateError when trying to launch large (>500) numbers of pods across nodes
2049872 - cluster storage operator AWS credentialsrequest lacks KMS privileges
2049889 - oc new-app --search nodejs warns about access to sample content on quay.io
2050005 - Plugin module IDs can clash with console module IDs causing runtime errors
2050011 - Observe > Metrics page: Timespan text input and dropdown do not align
2050120 - Missing metrics in kube-state-metrics
2050146 - Installation on PSI fails with: 'openstack platform does not have the required standard-attr-tag network extension'
2050173 - [aws-ebs-csi-driver] Merge upstream changes since v1.2.0
2050180 - [aws-efs-csi-driver] Merge upstream changes since v1.3.2
2050300 - panic in cluster-storage-operator while updating status
2050332 - Malformed ClusterClaim lifetimes cause the clusterclaims-controller to silently fail to reconcile all clusterclaims
2050335 - azure-disk failed to mount with error special device does not exist
2050345 - alert data for burn budget needs to be updated to prevent regression
2050407 - revert "force cert rotation every couple days for development" in 4.11
2050409 - ip-reconcile job is failing consistently
2050452 - Update osType and hardware version used by RHCOS OVA to indicate it is a RHEL 8 guest
2050466 - machine config update with invalid container runtime config should be more robust
2050637 - Blog Link not re-directing to the intented website in the last modal in the Dev Console Onboarding Tour
2050698 - After upgrading the cluster the console still show 0 of N, 0% progress for worker nodes
2050707 - up test for prometheus pod look to far in the past
2050767 - Vsphere upi tries to access vsphere during manifests generation phase
2050853 - CVE-2021-23566 nanoid: Information disclosure via valueOf() function
2050882 - Crio appears to be coredumping in some scenarios
2050902 - not all resources created during import have common labels
2050946 - Cluster-version operator fails to notice TechPreviewNoUpgrade featureSet change after initialization-lookup error
2051320 - Need to build ose-aws-efs-csi-driver-operator-bundle-container image for 4.11
2051333 - [aws] records in public hosted zone and BYO private hosted zone were not deleted.
2051377 - Unable to switch vfio-pci to netdevice in policy
2051378 - Template wizard is crashed when there are no templates existing
2051423 - migrate loadbalancers from amphora to ovn not working
2051457 - [RFE] PDB for cloud-controller-manager to avoid going too many replicas down
2051470 - prometheus: Add validations for relabel configs
2051558 - RoleBinding in project without subject is causing "Project access" page to fail
2051578 - Sort is broken for the Status and Version columns on the Cluster Settings > ClusterOperators page
2051583 - sriov must-gather image doesn't work
2051593 - Summary Interval Hardcoded in PTP Operator if Set in the Global Body Instead of Command Line
2051611 - Remove Check which enforces summary_interval must match logSyncInterval
2051642 - Remove "Tech-Preview" Label for the Web Terminal GA release
2051657 - Remove 'Tech preview' from minnimal deployment Storage System creation
2051718 - MetaLLB: Validation Webhook: BGPPeer hold time is allowed to be set to less than 3s
2051722 - MetalLB: BGPPeer object does not have ability to set ebgpMultiHop
2051881 - [vSphere CSI driver Operator] RWX volumes counts metrics `vsphere_rwx_volumes_total` not valid
2051954 - Allow changing of policyAuditConfig ratelimit post-deployment
2051969 - Need to build local-storage-operator-metadata-container image for 4.11
2051985 - An APIRequestCount without dots in the name can cause a panic
2052016 - MetalLB: Webhook Validation: Two BGPPeers instances can have different router ID set.
2052034 - Can't start correct debug pod using pod definition yaml in OCP 4.8
2052055 - Whereabouts should implement client-go 1.22+
2052056 - Static pod installer should throttle creating new revisions
2052071 - local storage operator metrics target down after upgrade
2052095 - Infinite OAuth redirect loop post-upgrade to 4.10.0-rc.1
2052270 - FSyncControllerDegraded has "treshold" -> "threshold" typos
2052309 - [IBM Cloud] ibm-vpc-block-csi-controller does not specify an update strategy, priority class, or only resource requests
2052332 - Probe failures and pod restarts during 4.7 to 4.8 upgrade
2052393 - Failed to scaleup RHEL machine against OVN cluster due to jq tool is required by configure-ovs.sh
2052398 - 4.9 to 4.10 upgrade fails for ovnkube-masters
2052415 - Pod density test causing problems when using kube-burner
2052513 - Failing webhooks will block an upgrade to 4.10 mid-way through the upgrade.
2052578 - Create new app from a private git repository using 'oc new app' with basic auth does not work.
2052595 - Remove dev preview badge from IBM FlashSystem deployment windows
2052618 - Node reboot causes duplicate persistent volumes
2052671 - Add Sprint 214 translations
2052674 - Remove extra spaces
2052700 - kube-controller-manger should use configmap lease
2052701 - kube-scheduler should use configmap lease
2052814 - go fmt fails in OSM after migration to go 1.17
2052840 - IMAGE_BUILDER=docker make test-e2e-operator-ocp runs with podman instead of docker
2052953 - Observe dashboard always opens for last viewed workload instead of the selected one
2052956 - Installing virtualization operator duplicates the first action on workloads in topology
2052975 - High cpu load on Juniper Qfx5120 Network switches after upgrade to Openshift 4.8.26
2052986 - Console crashes when Mid cycle hook in Recreate strategy(edit deployment/deploymentConfig) selects Lifecycle strategy as "Tags the current image as an image stream tag if the deployment succeeds"
2053006 - [ibm]Operator storage PROGRESSING and DEGRADED is true during fresh install for ocp4.11
2053104 - [vSphere CSI driver Operator] hw_version_total metric update wrong value after upgrade nodes hardware version from `vmx-13` to `vmx-15`
2053112 - nncp status is unknown when nnce is Progressing
2053118 - nncp Available condition reason should be exposed in `oc get`
2053168 - Ensure the core dynamic plugin SDK package has correct types and code
2053205 - ci-openshift-cluster-network-operator-master-e2e-agnostic-upgrade is failing most of the time
2053304 - Debug terminal no longer works in admin console
2053312 - requestheader IDP test doesn't wait for cleanup, causing high failure rates
2053334 - rhel worker scaleup playbook failed because missing some dependency of podman
2053343 - Cluster Autoscaler not scaling down nodes which seem to qualify for scale-down
2053491 - nmstate interprets interface names as float64 and subsequently crashes on state update
2053501 - Git import detection does not happen for private repositories
2053582 - inability to detect static lifecycle failure
2053596 - [IBM Cloud] Storage IOPS limitations and lack of IPI ETCD deployment options trigger leader election during cluster initialization
2053609 - LoadBalancer SCTP service leaves stale conntrack entry that causes issues if service is recreated
2053622 - PDB warning alert when CR replica count is set to zero
2053685 - Topology performance: Immutable .toJSON consumes a lot of CPU time when rendering a large topology graph (~100 nodes)
2053721 - When using RootDeviceHint rotational setting the host can fail to provision
2053922 - [OCP 4.8][OVN] pod interface: error while waiting on OVS.Interface.external-ids
2054095 - [release-4.11] Gather images.conifg.openshift.io cluster resource definiition
2054197 - The ProjectHelmChartRepositrory schema has merged but has not been initialized in the cluster yet
2054200 - Custom created services in openshift-ingress removed even though the services are not of type LoadBalancer
2054238 - console-master-e2e-gcp-console is broken
2054254 - vSphere test failure: [Serial] [sig-auth][Feature:OAuthServer] [RequestHeaders] [IdP] test RequestHeaders IdP [Suite:openshift/conformance/serial]
2054285 - Services other than knative service also shows as KSVC in add subscription/trigger modal
2054319 - must-gather | gather_metallb_logs can't detect metallb pod
2054351 - Rrestart of ptp4l/phc2sys on change of PTPConfig generates more than one times, socket error in event frame work
2054385 - redhat-operatori ndex image build failed with AMQ brew build - amq-interconnect-operator-metadata-container-1.10.13
2054564 - DPU network operator 4.10 branch need to sync with master
2054630 - cancel create silence from kebab menu of alerts page will navigated to the previous page
2054693 - Error deploying HorizontalPodAutoscaler with oc new-app command in OpenShift 4
2054701 - [MAPO] Events are not created for MAPO machines
2054705 - [tracker] nf_reinject calls nf_queue_entry_free on an already freed entry->state
2054735 - Bad link in CNV console
2054770 - IPI baremetal deployment metal3 pod crashes when using capital letters in hosts bootMACAddress
2054787 - SRO controller goes to CrashLoopBackOff status when the pull-secret does not have the correct permissions
2054950 - A large number is showing on disk size field
2055305 - Thanos Querier high CPU and memory usage till OOM
2055386 - MetalLB changes the shared external IP of a service upon updating the externalTrafficPolicy definition
2055433 - Unable to create br-ex as gateway is not found
2055470 - Ingresscontroller LB scope change behaviour differs for different values of aws-load-balancer-internal annotation
2055492 - The default YAML on vm wizard is not latest
2055601 - installer did not destroy *.app dns recored in a IPI on ASH install
2055702 - Enable Serverless tests in CI
2055723 - CCM operator doesn't deploy resources after enabling TechPreviewNoUpgrade feature set.
2055729 - NodePerfCheck fires and stays active on momentary high latency
2055814 - Custom dynamic exntension point causes runtime and compile time error
2055861 - cronjob collect-profiles failed leads node reach to OutOfpods status
2055980 - [dynamic SDK][internal] console plugin SDK does not support table actions
2056454 - Implement preallocated disks for oVirt in the cluster API provider
2056460 - Implement preallocated disks for oVirt in the OCP installer
2056496 - If image does not exists for builder image then upload jar form crashes
2056519 - unable to install IPI PRIVATE OpenShift cluster in Azure due to organization policies
2056607 - Running kubernetes-nmstate handler e2e tests stuck on OVN clusters
2056752 - Better to named the oc-mirror version info with more information like the `oc version --client`
2056802 - "enforcedLabelLimit|enforcedLabelNameLengthLimit|enforcedLabelValueLengthLimit" do not take effect
2056841 - [UI] [DR] Web console update is available pop-up is seen multiple times on Hub cluster where ODF operator is not installed and unnecessarily it pop-up on the Managed cluster as well where ODF operator is installed
2056893 - incorrect warning for --to-image in oc adm upgrade help
2056967 - MetalLB: speaker metrics is not updated when deleting a service
2057025 - Resource requests for the init-config-reloader container of prometheus-k8s-* pods are too high
2057054 - SDK: k8s methods resolves into Response instead of the Resource
2057079 - [cluster-csi-snapshot-controller-operator] CI failure: events should not repeat pathologically
2057101 - oc commands working with images print an incorrect and inappropriate warning
2057160 - configure-ovs selects wrong interface on reboot
2057183 - OperatorHub: Missing "valid subscriptions" filter
2057251 - response code for Pod count graph changed from 422 to 200 periodically for about 30 minutes if pod is rescheduled
2057358 - [Secondary Scheduler] - cannot build bundle index image using the secondary scheduler operator bundle
2057387 - [Secondary Scheduler] - olm.skiprange, com.redhat.openshift.versions is incorrect and no minkubeversion
2057403 - CMO logs show forbidden: User "system:serviceaccount:openshift-monitoring:cluster-monitoring-operator" cannot get resource "replicasets" in API group "apps" in the namespace "openshift-monitoring"
2057495 - Alibaba Disk CSI driver does not provision small PVCs
2057558 - Marketplace operator polls too frequently for cluster operator status changes
2057633 - oc rsync reports misleading error when container is not found
2057642 - ClusterOperator status.conditions[].reason "etcd disk metrics exceeded..." should be a CamelCase slug
2057644 - FSyncControllerDegraded latches True, even after fsync latency recovers on all members
2057696 - Removing console still blocks OCP install from completing
2057762 - ingress operator should report Upgradeable False to remind user before upgrade to 4.10 when Non-SAN certs are used
2057832 - expr for record rule: "cluster:telemetry_selected_series:count" is improper
2057967 - KubeJobCompletion does not account for possible job states
2057990 - Add extra debug information to image signature workflow test
2057994 - SRIOV-CNI failed to load netconf: LoadConf(): failed to get VF information
2058030 - On OCP 4.10+ using OVNK8s on BM IPI, nodes register as localhost.localdomain
2058217 - [vsphere-problem-detector-operator] 'vsphere_rwx_volumes_total' metric name make confused
2058225 - openshift_csi_share_* metrics are not found from telemeter server
2058282 - Websockets stop updating during cluster upgrades
2058291 - CI builds should have correct version of Kube without needing to push tags everytime
2058368 - Openshift OVN-K got restarted mutilple times with the error " ovsdb-server/memory-trim-on-compaction on'' failed: exit status 1 and " ovndbchecker.go:118] unable to turn on memory trimming for SB DB, stderr " , cluster unavailable
2058370 - e2e-aws-driver-toolkit CI job is failing
2058421 - 4.9.23-s390x-machine-os-content manifest invalid when mirroring content for disconnected install
2058424 - ConsolePlugin proxy always passes Authorization header even if `authorize` property is omitted or false
2058623 - Bootstrap server dropdown menu in Create Event Source- KafkaSource form is empty even if it's created
2058626 - Multiple Azure upstream kube fsgroupchangepolicy tests are permafailing expecting gid "1000" but geting "root"
2058671 - whereabouts IPAM CNI ip-reconciler cronjob specification requires hostnetwork, api-int lb usage & proper backoff
2058692 - [Secondary Scheduler] Creating secondaryscheduler instance fails with error "key failed with : secondaryschedulers.operator.openshift.io "secondary-scheduler" not found"
2059187 - [Secondary Scheduler] - key failed with : serviceaccounts "secondary-scheduler" is forbidden
2059212 - [tracker] Backport https://github.com/util-linux/util-linux/commit/eab90ef8d4f66394285e0cff1dfc0a27242c05aa
2059213 - ART cannot build installer images due to missing terraform binaries for some architectures
2059338 - A fully upgraded 4.10 cluster defaults to HW-13 hardware version even if HW-15 is default (and supported)
2059490 - The operator image in CSV file of the ART DPU network operator bundle is incorrect
2059567 - vMedia based IPI installation of OpenShift fails on Nokia servers due to issues with virtual media attachment and boot source override
2059586 - (release-4.11) Insights operator doesn't reconcile clusteroperator status condition messages
2059654 - Dynamic demo plugin proxy example out of date
2059674 - Demo plugin fails to build
2059716 - cloud-controller-manager flaps operator version during 4.9 -> 4.10 update
2059791 - [vSphere CSI driver Operator] didn't update 'vsphere_csi_driver_error' metric value when fixed the error manually
2059840 - [LSO]Could not gather logs for pod diskmaker-discovery and diskmaker-manager
2059943 - MetalLB: Move CI config files to metallb repo from dev-scripts repo
2060037 - Configure logging level of FRR containers
2060083 - CMO doesn't react to changes in clusteroperator console
2060091 - CMO produces invalid alertmanager statefulset if console cluster .status.consoleURL is unset
2060133 - [OVN RHEL upgrade] could not find IP addresses: failed to lookup link br-ex: Link not found
2060147 - RHEL8 Workers Need to Ensure libseccomp is up to date at install time
2060159 - LGW: External->Service of type ETP=Cluster doesn't go to the node
2060329 - Detect unsupported amount of workloads before rendering a lazy or crashing topology
2060334 - Azure VNET lookup fails when the NIC subnet is in a different resource group
2060361 - Unable to enumerate NICs due to missing the 'primary' field due to security restrictions
2060406 - Test 'operators should not create watch channels very often' fails
2060492 - Update PtpConfigSlave source-crs to use network_transport L2 instead of UDPv4
2060509 - Incorrect installation of ibmcloud vpc csi driver in IBM Cloud ROKS 4.10
2060532 - LSO e2e tests are run against default image and namespace
2060534 - openshift-apiserver pod in crashloop due to unable to reach kubernetes svc ip
2060549 - ErrorAddingLogicalPort: duplicate IP found in ECMP Pod route cache!
2060553 - service domain can't be resolved when networkpolicy is used in OCP 4.10-rc
2060583 - Remove Console internal-kubevirt plugin SDK package
2060605 - Broken access to public images: Unable to connect to the server: no basic auth credentials
2060617 - IBMCloud destroy DNS regex not strict enough
2060687 - Azure Ci: SubscriptionDoesNotSupportZone - does not support availability zones at location 'westus'
2060697 - [AWS] partitionNumber cannot work for specifying Partition number
2060714 - [DOCS] Change source_labels to sourceLabels in "Configuring remote write storage" section
2060837 - [oc-mirror] Catalog merging error when two or more bundles does not have a set Replace field
2060894 - Preceding/Trailing Whitespaces In Form Elements on the add page
2060924 - Console white-screens while using debug terminal
2060968 - Installation failing due to ironic-agent.service not starting properly
2060970 - Bump recommended FCOS to 35.20220213.3.0
2061002 - Conntrack entry is not removed for LoadBalancer IP
2061301 - Traffic Splitting Dialog is Confusing With Only One Revision
2061303 - Cachito request failure with vendor directory is out of sync with go.mod/go.sum
2061304 - workload info gatherer - don't serialize empty images map
2061333 - White screen for Pipeline builder page
2061447 - [GSS] local pv's are in terminating state
2061496 - etcd RecentBackup=Unknown ControllerStarted contains no message string
2061527 - [IBMCloud] infrastructure asset missing CloudProviderType
2061544 - AzureStack is hard-coded to use Standard_LRS for the disk type
2061549 - AzureStack install with internal publishing does not create api DNS record
2061611 - [upstream] The marker of KubeBuilder doesn't work if it is close to the code
2061732 - Cinder CSI crashes when API is not available
2061755 - Missing breadcrumb on the resource creation page
2061833 - A single worker can be assigned to multiple baremetal hosts
2061891 - [IPI on IBMCLOUD] missing 'br-sao' region in openshift installer
2061916 - mixed ingress and egress policies can result in half-isolated pods
2061918 - Topology Sidepanel style is broken
2061919 - Egress Ip entry stays on node's primary NIC post deletion from hostsubnet
2062007 - MCC bootstrap command lacks template flag
2062126 - IPfailover pod is crashing during creation showing keepalived_script doesn't exist
2062151 - Add RBAC for 'infrastructures' to operator bundle
2062355 - kubernetes-nmstate resources and logs not included in must-gathers
2062459 - Ingress pods scheduled on the same node
2062524 - [Kamelet Sink] Topology crashes on click of Event sink node if the resource is created source to Uri over ref
2062558 - Egress IP with openshift sdn in not functional on worker node.
2062568 - CVO does not trigger new upgrade again after fail to update to unavailable payload
2062645 - configure-ovs: don't restart networking if not necessary
2062713 - Special Resource Operator(SRO) - No sro_used_nodes metric
2062849 - hw event proxy is not binding on ipv6 local address
2062920 - Project selector is too tall with only a few projects
2062998 - AWS GovCloud regions are recognized as the unknown regions
2063047 - Configuring a full-path query log file in CMO breaks Prometheus with the latest version of the operator
2063115 - ose-aws-efs-csi-driver has invalid dependency in go.mod
2063164 - metal-ipi-ovn-ipv6 Job Permafailing and Blocking OpenShift 4.11 Payloads: insights operator is not available
2063183 - DefragDialTimeout is set to low for large scale OpenShift Container Platform - Cluster
2063194 - cluster-autoscaler-default will fail when automated etcd defrag is running on large scale OpenShift Container Platform 4 - Cluster
2063321 - [OVN]After reboot egress node, lr-policy-list was not correct, some duplicate records or missed internal IPs
2063324 - MCO template output directories created with wrong mode causing render failure in unprivileged container environments
2063375 - ptp operator upgrade from 4.9 to 4.10 stuck at pending due to service account requirements not met
2063414 - on OKD 4.10, when image-registry is enabled, the /etc/hosts entry is missing on some nodes
2063699 - Builds - Builds - Logs: i18n misses.
2063708 - Builds - Builds - Logs: translation correction needed.
2063720 - Metallb EBGP neighbor stuck in active until adding ebgp-multihop (directly connected neighbors)
2063732 - Workloads - StatefulSets : I18n misses
2063747 - When building a bundle, the push command fails because is passes a redundant "IMG=" on the the CLI
2063753 - User Preferences - Language - Language selection : Page refresh rquired to change the UI into selected Language.
2063756 - User Preferences - Applications - Insecure traffic : i18n misses
2063795 - Remove go-ovirt-client go.mod replace directive
2063829 - During an IPI install with the 4.10.4 installer on vSphere, getting "Check": platform.vsphere.network: Invalid value: "VLAN_3912": unable to find network provided"
2063831 - etcd quorum pods landing on same node
2063897 - Community tasks not shown in pipeline builder page
2063905 - PrometheusOperatorWatchErrors alert may fire shortly in case of transient errors from the API server
2063938 - sing the hard coded rest-mapper in library-go
2063955 - cannot download operator catalogs due to missing images
2063957 - User Management - Users : While Impersonating user, UI is not switching into user's set language
2064024 - SNO OCP upgrade with DU workload stuck at waiting for kube-apiserver static pod
2064170 - [Azure] Missing punctuation in the installconfig.controlPlane.platform.azure.osDisk explain
2064239 - Virtualization Overview page turns into blank page
2064256 - The Knative traffic distribution doesn't update percentage in sidebar
2064553 - UI should prefer to use the virtio-win configmap than v2v-vmware configmap for windows creation
2064596 - Fix the hubUrl docs link in pipeline quicksearch modal
2064607 - Pipeline builder makes too many (100+) API calls upfront
2064613 - [OCPonRHV]- after few days that cluster is alive we got error in storage operator
2064693 - [IPI][OSP] Openshift-install fails to find the shiftstack cloud defined in clouds.yaml in the current directory
2064702 - CVE-2022-27191 golang: crash in a golang.org/x/crypto/ssh server
2064705 - the alertmanagerconfig validation catches the wrong value for invalid field
2064744 - Errors trying to use the Debug Container feature
2064984 - Update error message for label limits
2065076 - Access monitoring Routes based on monitoring-shared-config creates wrong URL
2065160 - Possible leak of load balancer targets on AWS Machine API Provider
2065224 - Configuration for cloudFront in image-registry operator configuration is ignored & duration is corrupted
2065290 - CVE-2021-23648 sanitize-url: XSS
2065338 - VolumeSnapshot creation date sorting is broken
2065507 - `oc adm upgrade` should return ReleaseAccepted condition to show upgrade status.
2065510 - [AWS] failed to create cluster on ap-southeast-3
2065513 - Dev Perspective -> Project Dashboard shows Resource Quotas which are a bit misleading, and too many decimal places
2065547 - (release-4.11) Gather kube-controller-manager pod logs with garbage collector errors
2065552 - [AWS] Failed to install cluster on AWS ap-southeast-3 region due to image-registry panic error
2065577 - user with user-workload-monitoring-config-edit role can not create user-workload-monitoring-config configmap
2065597 - Cinder CSI is not configurable
2065682 - Remote write relabel config adds label __tmp_openshift_cluster_id__ to all metrics
2065689 - Internal Image registry with GCS backend does not redirect client
2065749 - Kubelet slowly leaking memory and pods eventually unable to start
2065785 - ip-reconciler job does not complete, halts node drain
2065804 - Console backend check for Web Terminal Operator incorrectly returns HTTP 204
2065806 - stop considering Mint mode as supported on Azure
2065840 - the cronjob object is created with a wrong api version batch/v1beta1 when created via the openshift console
2065893 - [4.11] Bootimage bump tracker
2066009 - CVE-2021-44906 minimist: prototype pollution
2066232 - e2e-aws-workers-rhel8 is failing on ansible check
2066418 - [4.11] Update channels information link is taking to a 404 error page
2066444 - The "ingress" clusteroperator's relatedObjects field has kind names instead of resource names
2066457 - Prometheus CI failure: 503 Service Unavailable
2066463 - [IBMCloud] failed to list DNS zones: Exactly one of ApiKey or RefreshToken must be specified
2066605 - coredns template block matches cluster API to loose
2066615 - Downstream OSDK still use upstream image for Hybird type operator
2066619 - The GitCommit of the `oc-mirror version` is not correct
2066665 - [ibm-vpc-block] Unable to change default storage class
2066700 - [node-tuning-operator] - Minimize wildcard/privilege Usage in Cluster and Local Roles
2066754 - Cypress reports for core tests are not captured
2066782 - Attached disk keeps in loading status when add disk to a power off VM by non-privileged user
2066865 - Flaky test: In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
2066886 - openshift-apiserver pods never going NotReady
2066887 - Dependabot alert: Path traversal in github.com/valyala/fasthttp
2066889 - Dependabot alert: Path traversal in github.com/valyala/fasthttp
2066923 - No rule to make target 'docker-push' when building the SRO bundle
2066945 - SRO appends "arm64" instead of "aarch64" to the kernel name and it doesn't match the DTK
2067004 - CMO contains grafana image though grafana is removed
2067005 - Prometheus rule contains grafana though grafana is removed
2067062 - should update prometheus-operator resources version
2067064 - RoleBinding in Developer Console is dropping all subjects when editing
2067155 - Incorrect operator display name shown in pipelines quickstart in devconsole
2067180 - Missing i18n translations
2067298 - Console 4.10 operand form refresh
2067312 - PPT event source is lost when received by the consumer
2067384 - OCP 4.10 should be firing APIRemovedInNextEUSReleaseInUse for APIs removed in 1.25
2067456 - OCP 4.11 should be firing APIRemovedInNextEUSReleaseInUse and APIRemovedInNextReleaseInUse for APIs removed in 1.25
2067995 - Internal registries with a big number of images delay pod creation due to recursive SELinux file context relabeling
2068115 - resource tab extension fails to show up
2068148 - [4.11] /etc/redhat-release symlink is broken
2068180 - OCP UPI on AWS with STS enabled is breaking the Ingress operator
2068181 - Event source powered with kamelet type source doesn't show associated deployment in resources tab
2068490 - OLM descriptors integration test failing
2068538 - Crashloop back-off popover visual spacing defects
2068601 - Potential etcd inconsistent revision and data occurs
2068613 - ClusterRoleUpdated/ClusterRoleBindingUpdated Spamming Event Logs
2068908 - Manual blog link change needed
2069068 - reconciling Prometheus Operator Deployment failed while upgrading from 4.7.46 to 4.8.35
2069075 - [Alibaba 4.11.0-0.nightly] cluster storage component in Progressing state
2069181 - Disabling community tasks is not working
2069198 - Flaky CI test in e2e/pipeline-ci
2069307 - oc mirror hangs when processing the Red Hat 4.10 catalog
2069312 - extend rest mappings with 'job' definition
2069457 - Ingress operator has superfluous finalizer deletion logic for LoadBalancer-type services
2069577 - ConsolePlugin example proxy authorize is wrong
2069612 - Special Resource Operator (SRO) - Crash when nodeSelector does not match any nodes
2069632 - Not able to download previous container logs from console
2069643 - ConfigMaps leftovers while uninstalling SpecialResource with configmap
2069654 - Creating VMs with YAML on Openshift Virtualization UI is missing labels `flavor`, `os` and `workload`
2069685 - UI crashes on load if a pinned resource model does not exist
2069705 - prometheus target "serviceMonitor/openshift-metallb-system/monitor-metallb-controller/0" has a failure with "server returned HTTP status 502 Bad Gateway"
2069740 - On-prem loadbalancer ports conflict with kube node port range
2069760 - In developer perspective divider does not show up in navigation
2069904 - Sync upstream 1.18.1 downstream
2069914 - Application Launcher groupings are not case-sensitive
2069997 - [4.11] should add user containers in /etc/subuid and /etc/subgid to support run pods in user namespaces
2070000 - Add warning alerts for installing standalone k8s-nmstate
2070020 - InContext doesn't work for Event Sources
2070047 - Kuryr: Prometheus when installed on the cluster shouldn't report any alerts in firing state apart from Watchdog and AlertmanagerReceiversNotConfigured
2070160 - Copy-to-clipboard and <pre> elements cause display issues for ACM dynamic plugins
2070172 - SRO uses the chart's name as Helm release, not the SpecialResource's
2070181 - [MAPO] serverGroupName ignored
2070457 - Image vulnerability Popover overflows from the visible area
2070674 - [GCP] Routes get timed out and nonresponsive after creating 2K service routes
2070703 - some ipv6 network policy tests consistently failing
2070720 - [UI] Filter reset doesn't work on Pods/Secrets/etc pages and complete list disappears
2070731 - details switch label is not clickable on add page
2070791 - [GCP]Image registry are crash on cluster with GCP workload identity enabled
2070792 - service "openshift-marketplace/marketplace-operator-metrics" is not annotated with capability
2070805 - ClusterVersion: could not download the update
2070854 - cv.status.capabilities.enabledCapabilities doesn't show the day-2 enabled caps when there are errors on resources update
2070887 - Cv condition ImplicitlyEnabledCapabilities doesn't complain about the disabled capabilities which is previously enabled
2070888 - Cannot bind driver vfio-pci when apply sriovnodenetworkpolicy with type vfio-pci
2070929 - OVN-Kubernetes: EgressIP breaks access from a pod with EgressIP to other host networked pods on different nodes
2071019 - rebase vsphere csi driver 2.5
2071021 - vsphere driver has snapshot support missing
2071033 - conditionally relabel volumes given annotation not working - SELinux context match is wrong
2071139 - Ingress pods scheduled on the same node
2071364 - All image building tests are broken with " error: build error: attempting to convert BUILD_LOGLEVEL env var value "" to integer: strconv.Atoi: parsing "": invalid syntax
2071578 - Monitoring navigation should not be shown if monitoring is not available (CRC)
2071599 - RoleBidings are not getting updated for ClusterRole in OpenShift Web Console
2071614 - Updating EgressNetworkPolicy rejecting with error UnsupportedMediaType
2071617 - remove Kubevirt extensions in favour of dynamic plugin
2071650 - ovn-k ovn_db_cluster metrics are not exposed for SNO
2071691 - OCP Console global PatternFly overrides adds padding to breadcrumbs
2071700 - v1 events show "Generated from" message without the source/reporting component
2071715 - Shows 404 on Environment nav in Developer console
2071719 - OCP Console global PatternFly overrides link button whitespace
2071747 - Link to documentation from the overview page goes to a missing link
2071761 - Translation Keys Are Not Namespaced
2071799 - Multus CNI should exit cleanly on CNI DEL when the API server is unavailable
2071859 - ovn-kube pods spec.dnsPolicy should be Default
2071914 - cloud-network-config-controller 4.10.5: Error building cloud provider client, err: %vfailed to initialize Azure environment: autorest/azure: There is no cloud environment matching the name ""
2071998 - Cluster-version operator should share details of signature verification when it fails in 'Force: true' updates
2072106 - cluster-ingress-operator tests do not build on go 1.18
2072134 - Routes are not accessible within cluster from hostnet pods
2072139 - vsphere driver has permissions to create/update PV objects
2072154 - Secondary Scheduler operator panics
2072171 - Test "[sig-network][Feature:EgressFirewall] EgressFirewall should have no impact outside its namespace [Suite:openshift/conformance/parallel]" fails
2072195 - machine api doesn't issue client cert when AWS DNS suffix missing
2072215 - Whereabouts ip-reconciler should be opt-in and not required
2072389 - CVO exits upgrade immediately rather than waiting for etcd backup
2072439 - openshift-cloud-network-config-controller reports wrong range of IP addresses for Azure worker nodes
2072455 - make bundle overwrites supported-nic-ids_v1_configmap.yaml
2072570
- The namespace titles for operator-install-single-namespace test keep changing\n2072710 - Perfscale - pods time out waiting for OVS port binding (ovn-installed)\n2072766 - Cluster Network Operator stuck in CrashLoopBackOff when scheduled to same master\n2072780 - OVN kube-master does not clear NetworkUnavailableCondition on GCP BYOH Windows node\n2072793 - Drop \"Used Filesystem\" from \"Virtualization -\u003e Overview\"\n2072805 - Observe \u003e Dashboards: $__range variables cause PromQL query errors\n2072807 - Observe \u003e Dashboards: Missing `panel.styles` attribute for table panels causes JS error\n2072842 - (release-4.11) Gather namespace names with overlapping UID ranges\n2072883 - sometimes monitoring dashboards charts can not be loaded successfully\n2072891 - Update gcp-pd-csi-driver to 1.5.1;\n2072911 - panic observed in kubedescheduler operator\n2072924 - periodic-ci-openshift-release-master-ci-4.11-e2e-azure-techpreview-serial\n2072957 - ContainerCreateError loop leads to several thousand empty logfiles in the file system\n2072998 - update aws-efs-csi-driver to the latest version\n2072999 - Navigate from logs of selected Tekton task instead of last one\n2073021 - [vsphere] Failed to update OS on master nodes\n2073112 - Prometheus (uwm) externalLabels not showing always in alerts. \n2073113 - Warning is logged to the console: W0407 Defaulting of registry auth file to \"${HOME}/.docker/config.json\" is deprecated. \n2073176 - removing data in form does not remove data from yaml editor\n2073197 - Error in Spoke/SNO agent: Source image rejected: A signature was required, but no signature exists\n2073329 - Pipelines-plugin- Having different title for Pipeline Runs tab, on Pipeline Details page it\u0027s \"PipelineRuns\" and on Repository Details page it\u0027s \"Pipeline Runs\". 
\n2073373 - Update azure-disk-csi-driver to 1.16.0\n2073378 - failed egressIP assignment - cloud-network-config-controller does not delete failed cloudprivateipconfig\n2073398 - machine-api-provider-openstack does not clean up OSP ports after failed server provisioning\n2073436 - Update azure-file-csi-driver to v1.14.0\n2073437 - Topology performance: Firehose/useK8sWatchResources cache can return unexpected data format if isList differs on multiple calls\n2073452 - [sig-network] pods should successfully create sandboxes by other - failed (add)\n2073473 - [OVN SCALE][ovn-northd] Unnecessary SB record no-op changes added to SB transaction. \n2073522 - Update ibm-vpc-block-csi-driver to v4.2.0\n2073525 - Update vpc-node-label-updater to v4.1.2\n2073901 - Installation failed due to etcd operator Err:DefragControllerDegraded: failed to dial endpoint https://10.0.0.7:2379 with maintenance client: context canceled\n2073937 - Invalid retention time and invalid retention size should be validated at one place and have error log in one place for UMW\n2073938 - APIRemovedInNextEUSReleaseInUse alert for runtimeclasses\n2073945 - APIRemovedInNextEUSReleaseInUse alert for podsecuritypolicies\n2073972 - Invalid retention time and invalid retention size should be validated at one place and have error log in one place for platform monitoring\n2074009 - [OVN] ovn-northd doesn\u0027t clean Chassis_Private record after scale down to 0 a machineSet\n2074031 - Admins should be able to tune garbage collector aggressiveness (GOGC) for kube-apiserver if necessary\n2074062 - Node Tuning Operator(NTO) - Cloud provider profile rollback doesn\u0027t work well\n2074084 - CMO metrics not visible in the OCP webconsole UI\n2074100 - CRD filtering according to name broken\n2074210 - asia-south2, australia-southeast2, and southamerica-west1Missing from GCP regions\n2074237 - oc new-app --image-stream flag behavior is unclear\n2074243 - DefaultPlacement API allow empty enum value and remove 
default\n2074447 - cluster-dashboard: CPU Utilisation iowait and steal\n2074465 - PipelineRun fails in import from Git flow if \"main\" branch is default\n2074471 - Cannot delete namespace with a LB type svc and Kuryr when ExternalCloudProvider is enabled\n2074475 - [e2e][automation] kubevirt plugin cypress tests fail\n2074483 - coreos-installer doesnt work on Dell machines\n2074544 - e2e-metal-ipi-ovn-ipv6 failing due to recent CEO changes\n2074585 - MCG standalone deployment page goes blank when the KMS option is enabled\n2074606 - occm does not have permissions to annotate SVC objects\n2074612 - Operator fails to install due to service name lookup failure\n2074613 - nodeip-configuration container incorrectly attempts to relabel /etc/systemd/system\n2074635 - Unable to start Web Terminal after deleting existing instance\n2074659 - AWS installconfig ValidateForProvisioning always provides blank values to validate zone records\n2074706 - Custom EC2 endpoint is not considered by AWS EBS CSI driver\n2074710 - Transition to go-ovirt-client\n2074756 - Namespace column provide wrong data in ClusterRole Details -\u003e Rolebindings tab\n2074767 - Metrics page show incorrect values due to metrics level config\n2074807 - NodeFilesystemSpaceFillingUp alert fires even before kubelet GC kicks in\n2074902 - `oc debug node/nodename ? 
chroot /host somecommand` should exit with non-zero when the sub-command failed\n2075015 - etcd-guard connection refused event repeating pathologically (payload blocking)\n2075024 - Metal upgrades permafailing on metal3 containers crash looping\n2075050 - oc-mirror fails to calculate between two channels with different prefixes for the same version of OCP\n2075091 - Symptom Detection.Undiagnosed panic detected in pod\n2075117 - Developer catalog: Order dropdown (A-Z, Z-A) is miss-aligned (in a separate row)\n2075149 - Trigger Translations When Extensions Are Updated\n2075189 - Imports from dynamic-plugin-sdk lead to failed module resolution errors\n2075459 - Set up cluster on aws with rootvolumn io2 failed due to no iops despite it being configured\n2075475 - OVN-Kubernetes: egress router pod (redirect mode), access from pod on different worker-node (redirect) doesn\u0027t work\n2075478 - Bump documentationBaseURL to 4.11\n2075491 - nmstate operator cannot be upgraded on SNO\n2075575 - Local Dev Env - Prometheus 404 Call errors spam the console\n2075584 - improve clarity of build failure messages when using csi shared resources but tech preview is not enabled\n2075592 - Regression - Top of the web terminal drawer is missing a stroke/dropshadow\n2075621 - Cluster upgrade.[sig-mco] Machine config pools complete upgrade\n2075647 - \u0027oc adm upgrade ...\u0027 POSTs ClusterVersion, clobbering any unrecognized spec properties\n2075671 - Cluster Ingress Operator K8S API cache contains duplicate objects\n2075778 - Fix failing TestGetRegistrySamples test\n2075873 - Bump recommended FCOS to 35.20220327.3.0\n2076193 - oc patch command for the liveness probe and readiness probe parameters of an OpenShift router deployment doesn\u0027t take effect\n2076270 - [OCPonRHV] MachineSet scale down operation fails to delete the worker VMs\n2076277 - [RFE] [OCPonRHV] Add storage domain ID valueto Compute/ControlPlain section in the machine object\n2076290 - PTP operator readme 
missing documentation on BC setup via PTP config\n2076297 - Router process ignores shutdown signal while starting up\n2076323 - OLM blocks all operator installs if an openshift-marketplace catalogsource is unavailable\n2076355 - The KubeletConfigController wrongly process multiple confs for a pool after having kubeletconfig in bootstrap\n2076393 - [VSphere] survey fails to list datacenters\n2076521 - Nodes in the same zone are not updated in the right order\n2076527 - Pipeline Builder: Make unnecessary tekton hub API calls when the user types \u0027too fast\u0027\n2076544 - Whitespace (padding) is missing after an PatternFly update, already in 4.10\n2076553 - Project access view replace group ref with user ref when updating their Role\n2076614 - Missing Events component from the SDK API\n2076637 - Configure metrics for vsphere driver to be reported\n2076646 - openshift-install destroy unable to delete PVC disks in GCP if cluster identifier is longer than 22 characters\n2076793 - CVO exits upgrade immediately rather than waiting for etcd backup\n2076831 - [ocp4.11]Mem/cpu high utilization by apiserver/etcd for cluster stayed 10 hours\n2076877 - network operator tracker to switch to use flowcontrol.apiserver.k8s.io/v1beta2 instead v1beta1 to be deprecated in k8s 1.26\n2076880 - OKD: add cluster domain to the uploaded vm configs so that 30-local-dns-prepender can use it\n2076975 - Metric unset during static route conversion in configure-ovs.sh\n2076984 - TestConfigurableRouteNoConsumingUserNoRBAC fails in CI\n2077050 - OCP should default to pd-ssd disk type on GCP\n2077150 - Breadcrumbs on a few screens don\u0027t have correct top margin spacing\n2077160 - Update owners for openshift/cluster-etcd-operator\n2077357 - [release-4.11] 200ms packet delay with OVN controller turn on\n2077373 - Accessibility warning on developer perspective\n2077386 - Import page shows untranslated values for the route advanced routing\u003esecurity options (devconsole~Edge)\n2077457 - 
failure in test case \"[sig-network][Feature:Router] The HAProxy router should serve the correct routes when running with the haproxy config manager\"\n2077497 - Rebase etcd to 3.5.3 or later\n2077597 - machine-api-controller is not taking the proxy configuration when it needs to reach the RHV API\n2077599 - OCP should alert users if they are on vsphere version \u003c7.0.2\n2077662 - AWS Platform Provisioning Check incorrectly identifies record as part of domain of cluster\n2077797 - LSO pods don\u0027t have any resource requests\n2077851 - \"make vendor\" target is not working\n2077943 - If there is a service with multiple ports, and the route uses 8080, when editing the 8080 port isn\u0027t replaced, but a random port gets replaced and 8080 still stays\n2077994 - Publish RHEL CoreOS AMIs in AWS ap-southeast-3 region\n2078013 - drop multipathd.socket workaround\n2078375 - When using the wizard with template using data source the resulting vm use pvc source\n2078396 - [OVN AWS] EgressIP was not balanced to another egress node after original node was removed egress label\n2078431 - [OCPonRHV] - ERROR failed to instantiate provider \"openshift/local/ovirt\" to obtain schema: ERROR fork/exec\n2078526 - Multicast breaks after master node reboot/sync\n2078573 - SDN CNI -Fail to create nncp when vxlan is up\n2078634 - CRI-O not killing Calico CNI stalled (zombie) processes. \n2078698 - search box may not completely remove content\n2078769 - Different not translated filter group names (incl. Secret, Pipeline, PIpelineRun)\n2078778 - [4.11] oc get ValidatingWebhookConfiguration,MutatingWebhookConfiguration fails and caused ?apiserver panic\u0027d...http2: panic serving xxx.xx.xxx.21:49748: cannot deep copy int? when AllRequestBodies audit-profile is used. 
\n2078781 - PreflightValidation does not handle multiarch images\n2078866 - [BM][IPI] Installation with bonds fail - DaemonSet \"openshift-ovn-kubernetes/ovnkube-node\" rollout is not making progress\n2078875 - OpenShift Installer fail to remove Neutron ports\n2078895 - [OCPonRHV]-\"cow\" unsupported value in format field in install-config.yaml\n2078910 - CNO spitting out \".spec.groups[0].rules[4].runbook_url: field not declared in schema\"\n2078945 - Ensure only one apiserver-watcher process is active on a node. \n2078954 - network-metrics-daemon makes costly global pod list calls scaling per node\n2078969 - Avoid update races between old and new NTO operands during cluster upgrades\n2079012 - egressIP not migrated to correct workers after deleting machineset it was assigned\n2079062 - Test for console demo plugin toast notification needs to be increased for ci testing\n2079197 - [RFE] alert when more than one default storage class is detected\n2079216 - Partial cluster update reference doc link returns 404\n2079292 - containers prometheus-operator/kube-rbac-proxy violate PodSecurity\n2079315 - (release-4.11) Gather ODF config data with Insights\n2079422 - Deprecated 1.25 API call\n2079439 - OVN Pods Assigned Same IP Simultaneously\n2079468 - Enhance the waitForIngressControllerCondition for better CI results\n2079500 - okd-baremetal-install uses fcos for bootstrap but rhcos for cluster\n2079610 - Opeatorhub status shows errors\n2079663 - change default image features in RBD storageclass\n2079673 - Add flags to disable migrated code\n2079685 - Storageclass creation page with \"Enable encryption\" is not displaying saved KMS connection details when vaulttenantsa details are available in csi-kms-details config\n2079724 - cluster-etcd-operator - disable defrag-controller as there is unpredictable impact on large OpenShift Container Platform 4 - Cluster\n2079788 - Operator restarts while applying the acm-ice example\n2079789 - cluster drops 
ImplicitlyEnabledCapabilities during upgrade\n2079803 - Upgrade-triggered etcd backup will be skip during serial upgrade\n2079805 - Secondary scheduler operator should comply to restricted pod security level\n2079818 - Developer catalog installation overlay (modal?) shows a duplicated padding\n2079837 - [RFE] Hub/Spoke example with daemonset\n2079844 - EFS cluster csi driver status stuck in AWSEFSDriverCredentialsRequestControllerProgressing with sts installation\n2079845 - The Event Sinks catalog page now has a blank space on the left\n2079869 - Builds for multiple kernel versions should be ran in parallel when possible\n2079913 - [4.10] APIRemovedInNextEUSReleaseInUse alert for OVN endpointslices\n2079961 - The search results accordion has no spacing between it and the side navigation bar. \n2079965 - [rebase v1.24] [sig-node] PodOSRejection [NodeConformance] Kubelet should reject pod when the node OS doesn\u0027t match pod\u0027s OS [Suite:openshift/conformance/parallel] [Suite:k8s]\n2080054 - TAGS arg for installer-artifacts images is not propagated to build images\n2080153 - aws-load-balancer-operator-controller-manager pod stuck in ContainerCreating status\n2080197 - etcd leader changes produce test churn during early stage of test\n2080255 - EgressIP broken on AWS with OpenShiftSDN / latest nightly build\n2080267 - [Fresh Installation] Openshift-machine-config-operator namespace is flooded with events related to clusterrole, clusterrolebinding\n2080279 - CVE-2022-29810 go-getter: writes SSH credentials into logfile, exposing sensitive credentials to local uses\n2080379 - Group all e2e tests as parallel or serial\n2080387 - Visual connector not appear between the node if a node get created using \"move connector\" to a different application\n2080416 - oc bash-completion problem\n2080429 - CVO must ensure non-upgrade related changes are saved when desired payload fails to load\n2080446 - Sync ironic images with latest bug fixes packages\n2080679 - [rebase 
v1.24] [sig-cli] test failure\n2080681 - [rebase v1.24] [sig-cluster-lifecycle] CSRs from machines that are not recognized by the cloud provider are not approved [Suite:openshift/conformance/parallel]\n2080687 - [rebase v1.24] [sig-network][Feature:Router] tests are failing\n2080873 - Topology graph crashes after update to 4.11 when Layout 2 (ColaForce) was selected previously\n2080964 - Cluster operator special-resource-operator is always in Failing state with reason: \"Reconciling simple-kmod\"\n2080976 - Avoid hooks config maps when hooks are empty\n2081012 - [rebase v1.24] [sig-devex][Feature:OpenShiftControllerManager] TestAutomaticCreationOfPullSecrets [Suite:openshift/conformance/parallel]\n2081018 - [rebase v1.24] [sig-imageregistry][Feature:Image] oc tag should work when only imagestreams api is available\n2081021 - [rebase v1.24] [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources\n2081062 - Unrevert RHCOS back to 8.6\n2081067 - admin dev-console /settings/cluster should point out history may be excerpted\n2081069 - [sig-network] pods should successfully create sandboxes by adding pod to network\n2081081 - PreflightValidation \"odd number of arguments passed as key-value pairs for logging\" error\n2081084 - [rebase v1.24] [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed\n2081087 - [rebase v1.24] [sig-auth] ServiceAccounts should allow opting out of API token automount\n2081119 - `oc explain` output of default overlaySize is outdated\n2081172 - MetallLB: YAML view in webconsole does not show all the available key value pairs of all the objects\n2081201 - cloud-init User check for Windows VM refuses to accept capitalized usernames\n2081447 - Ingress operator performs spurious updates in response to API\u0027s defaulting of router deployment\u0027s router container\u0027s ports\u0027 protocol field\n2081562 - lifecycle.posStart hook does not have 
network connectivity. \n2081685 - Typo in NNCE Conditions\n2081743 - [e2e] tests failing\n2081788 - MetalLB: the crds are not validated until metallb is deployed\n2081821 - SpecialResourceModule CRD is not installed after deploying SRO operator using brew bundle image via OLM\n2081895 - Use the managed resource (and not the manifest) for resource health checks\n2081997 - disconnected insights operator remains degraded after editing pull secret\n2082075 - Removing huge amount of ports takes a lot of time. \n2082235 - CNO exposes a generic apiserver that apparently does nothing\n2082283 - Transition to new oVirt Terraform provider\n2082360 - OCP 4.10.4, CNI: SDN; Whereabouts IPAM: Duplicate IP address with bond-cni\n2082380 - [4.10.z] customize wizard is crashed\n2082403 - [LSO] No new build local-storage-operator-metadata-container created\n2082428 - oc patch healthCheckInterval with invalid \"5 s\" to the ingress-controller successfully\n2082441 - [UPI] aws-load-balancer-operator-controller-manager failed to get VPC ID in UPI on AWS\n2082492 - [IPI IBM]Can\u0027t create image-registry-private-configuration secret with error \"specified resource key credentials does not contain HMAC keys\"\n2082535 - [OCPonRHV]-workers are cloned when \"clone: false\" is specified in install-config.yaml\n2082538 - apirequests limits of Cluster CAPI Operator are too low for GCP platform\n2082566 - OCP dashboard fails to load when the query to Prometheus takes more than 30s to return\n2082604 - [IBMCloud][x86_64] IBM VPC does not properly support RHCOS Custom Image tagging\n2082667 - No new machines provisioned while machineset controller drained old nodes for change to machineset\n2082687 - [IBM Cloud][x86_64][CCCMO] IBM x86_64 CCM using unsupported --port argument\n2082763 - Cluster install stuck on the applying for operatorhub \"cluster\"\n2083149 - \"Update blocked\" label incorrectly displays on new minor versions in the \"Other available paths\" modal\n2083153 - Unable to use 
application credentials for Manila PVC creation on OpenStack\n2083154 - Dynamic plugin sdk tsdoc generation does not render docs for parameters\n2083219 - DPU network operator doesn\u0027t deal with c1... inteface names\n2083237 - [vsphere-ipi] Machineset scale up process delay\n2083299 - SRO does not fetch mirrored DTK images in disconnected clusters\n2083445 - [FJ OCP4.11 Bug]: RAID setting during IPI cluster deployment fails if iRMC port number is specified\n2083451 - Update external serivces URLs to console.redhat.com\n2083459 - Make numvfs \u003e totalvfs error message more verbose\n2083466 - Failed to create clusters on AWS C2S/SC2S due to image-registry MissingEndpoint error\n2083514 - Operator ignores managementState Removed\n2083641 - OpenShift Console Knative Eventing ContainerSource generates wrong api version when pointed to k8s Service\n2083756 - Linkify not upgradeable message on ClusterSettings page\n2083770 - Release image signature manifest filename extension is yaml\n2083919 - openshift4/ose-operator-registry:4.10.0 having security vulnerabilities\n2083942 - Learner promotion can temporarily fail with rpc not supported for learner errors\n2083964 - Sink resources dropdown is not persisted in form yaml switcher in event source creation form\n2083999 - \"--prune-over-size-limit\" is not working as expected\n2084079 - prometheus route is not updated to \"path: /api\" after upgrade from 4.10 to 4.11\n2084081 - nmstate-operator installed cluster on POWER shows issues while adding new dhcp interface\n2084124 - The Update cluster modal includes a broken link\n2084215 - Resource configmap \"openshift-machine-api/kube-rbac-proxy\" is defined by 2 manifests\n2084249 - panic in ovn pod from an e2e-aws-single-node-serial nightly run\n2084280 - GCP API Checks Fail if non-required APIs are not enabled\n2084288 - \"alert/Watchdog must have no gaps or changes\" failing after bump\n2084292 - Access to dashboard resources is needed in dynamic plugin SDK\n2084331 - 
Resource with multiple capabilities included unless all capabilities are disabled\n2084433 - Podsecurity violation error getting logged for ingresscontroller during deployment. \n2084438 - Change Ping source spec.jsonData (deprecated) field to spec.data\n2084441 - [IPI-Azure]fail to check the vm capabilities in install cluster\n2084459 - Topology list view crashes when switching from chart view after moving sink from knative service to uri\n2084463 - 5 control plane replica tests fail on ephemeral volumes\n2084539 - update azure arm templates to support customer provided vnet\n2084545 - [rebase v1.24] cluster-api-operator causes all techpreview tests to fail\n2084580 - [4.10] No cluster name sanity validation - cluster name with a dot (\".\") character\n2084615 - Add to navigation option on search page is not properly aligned\n2084635 - PipelineRun creation from the GUI for a Pipeline with 2 workspaces hardcode the PVC storageclass\n2084732 - A special resource that was created in OCP 4.9 can\u0027t be deleted after an upgrade to 4.10\n2085187 - installer-artifacts fails to build with go 1.18\n2085326 - kube-state-metrics is tripping APIRemovedInNextEUSReleaseInUse\n2085336 - [IPI-Azure] Fail to create the worker node which HyperVGenerations is V2 or V1 and vmNetworkingType is Accelerated\n2085380 - [IPI-Azure] Incorrect error prompt validate VM image and instance HyperV gen match when install cluster\n2085407 - There is no Edit link/icon for labels on Node details page\n2085721 - customization controller image name is wrong\n2086056 - Missing doc for OVS HW offload\n2086086 - Update Cluster Sample Operator dependencies and libraries for OCP 4.11\n2086092 - update kube to v.24\n2086143 - CNO uses too much memory\n2086198 - Cluster CAPI Operator creates unnecessary defaulting webhooks\n2086301 - kubernetes nmstate pods are not running after creating instance\n2086408 - Podsecurity violation error getting logged for externalDNS operand pods during deployment\n2086417 
- Pipeline created from add flow has GIT Revision as required field\n2086437 - EgressQoS CRD not available\n2086450 - aws-load-balancer-controller-cluster pod logged Podsecurity violation error during deployment\n2086459 - oc adm inspect fails when one of resources not exist\n2086461 - CNO probes MTU unnecessarily in Hypershift, making cluster startup take too long\n2086465 - External identity providers should log login attempts in the audit trail\n2086469 - No data about title \u0027API Request Duration by Verb - 99th Percentile\u0027 display on the dashboard \u0027API Performance\u0027\n2086483 - baremetal-runtimecfg k8s dependencies should be on a par with 1.24 rebase\n2086505 - Update oauth-server images to be consistent with ART\n2086519 - workloads must comply to restricted security policy\n2086521 - Icons of Knative actions are not clearly visible on the context menu in the dark mode\n2086542 - Cannot create service binding through drag and drop\n2086544 - ovn-k master daemonset on hypershift shouldn\u0027t log token\n2086546 - Service binding connector is not visible in the dark mode\n2086718 - PowerVS destroy code does not work\n2086728 - [hypershift] Move drain to controller\n2086731 - Vertical pod autoscaler operator needs a 4.11 bump\n2086734 - Update csi driver images to be consistent with ART\n2086737 - cloud-provider-openstack rebase to kubernetes v1.24\n2086754 - Cluster resource override operator needs a 4.11 bump\n2086759 - [IPI] OCP-4.11 baremetal - boot partition is not mounted on temporary directory\n2086791 - Azure: Validate UltraSSD instances in multi-zone regions\n2086851 - pods with multiple external gateways may only be have ECMP routes for one gateway\n2086936 - vsphere ipi should use cores by default instead of sockets\n2086958 - flaky e2e in kube-controller-manager-operator TestPodDisruptionBudgetAtLimitAlert\n2086959 - flaky e2e in kube-controller-manager-operator TestLogLevel\n2086962 - oc-mirror publishes metadata with --dry-run when 
publishing to mirror\n2086964 - oc-mirror fails on differential run when mirroring a package with multiple channels specified\n2086972 - oc-mirror does not error invalid metadata is passed to the describe command\n2086974 - oc-mirror does not work with headsonly for operator 4.8\n2087024 - The oc-mirror result mapping.txt is not correct , can?t be used by `oc image mirror` command\n2087026 - DTK\u0027s imagestream is missing from OCP 4.11 payload\n2087037 - Cluster Autoscaler should use K8s 1.24 dependencies\n2087039 - Machine API components should use K8s 1.24 dependencies\n2087042 - Cloud providers components should use K8s 1.24 dependencies\n2087084 - remove unintentional nic support\n2087103 - \"Updating to release image\" from \u0027oc\u0027 should point out that the cluster-version operator hasn\u0027t accepted the update\n2087114 - Add simple-procfs-kmod in modprobe example in README.md\n2087213 - Spoke BMH stuck \"inspecting\" when deployed via ZTP in 4.11 OCP hub\n2087271 - oc-mirror does not check for existing workspace when performing mirror2mirror synchronization\n2087556 - Failed to render DPU ovnk manifests\n2087579 - ` --keep-manifest-list=true` does not work for `oc adm release new` , only pick up the linux/amd64 manifest from the manifest list\n2087680 - [Descheduler] Sync with sigs.k8s.io/descheduler\n2087684 - KCMO should not be able to apply LowUpdateSlowReaction from Default WorkerLatencyProfile\n2087685 - KASO should not be able to apply LowUpdateSlowReaction from Default WorkerLatencyProfile\n2087687 - MCO does not generate event when user applies Default -\u003e LowUpdateSlowReaction WorkerLatencyProfile\n2087764 - Rewrite the registry backend will hit error\n2087771 - [tracker] NetworkManager 1.36.0 loses DHCP lease and doesn\u0027t try again\n2087772 - Bindable badge causes some layout issues with the side panel of bindable operator backed services\n2087942 - CNO references images that are divergent from ART\n2087944 - KafkaSink Node 
visualized incorrectly\n2087983 - remove etcd_perf before restore\n2087993 - PreflightValidation many \"msg\":\"TODO: preflight checks\" in the operator log\n2088130 - oc-mirror init does not allow for automated testing\n2088161 - Match dockerfile image name with the name used in the release repo\n2088248 - Create HANA VM does not use values from customized HANA templates\n2088304 - ose-console: enable source containers for open source requirements\n2088428 - clusteroperator/baremetal stays in progressing: Applying metal3 resources state on a fresh install\n2088431 - AvoidBuggyIPs field of addresspool should be removed\n2088483 - oc adm catalog mirror returns 0 even if there are errors\n2088489 - Topology list does not allow selecting an application group anymore (again)\n2088533 - CRDs for openshift.io should have subresource.status failes on sharedconfigmaps.sharedresource and sharedsecrets.sharedresource\n2088535 - MetalLB: Enable debug log level for downstream CI\n2088541 - Default CatalogSources in openshift-marketplace namespace keeps throwing pod security admission warnings `would violate PodSecurity \"restricted:v1.24\"`\n2088561 - BMH unable to start inspection: File name too long\n2088634 - oc-mirror does not fail when catalog is invalid\n2088660 - Nutanix IPI installation inside container failed\n2088663 - Better to change the default value of --max-per-registry to 6\n2089163 - NMState CRD out of sync with code\n2089191 - should remove grafana from cluster-monitoring-config configmap in hypershift cluster\n2089224 - openshift-monitoring/cluster-monitoring-config configmap always revert to default setting\n2089254 - CAPI operator: Rotate token secret if its older than 30 minutes\n2089276 - origin tests for egressIP and azure fail\n2089295 - [Nutanix]machine stuck in Deleting phase when delete a machineset whose replicas\u003e=2 and machine is Provisioning phase on Nutanix\n2089309 - [OCP 4.11] Ironic inspector image fails to clean disks that are part of a 
multipath setup if they are passive paths\n2089334 - All cloud providers should use service account credentials\n2089344 - Failed to deploy simple-kmod\n2089350 - Rebase sdn to 1.24\n2089387 - LSO not taking mpath. ignoring device\n2089392 - 120 node baremetal upgrade from 4.9.29 --\u003e 4.10.13 crashloops on machine-approver\n2089396 - oc-mirror does not show pruned image plan\n2089405 - New topology package shows gray build icons instead of green/red icons for builds and pipelines\n2089419 - do not block 4.10 to 4.11 upgrades if an existing CSI driver is found. Instead, warn about presence of third party CSI driver\n2089488 - Special resources are missing the managementState field\n2089563 - Update Power VS MAPI to use api\u0027s from openshift/api repo\n2089574 - UWM prometheus-operator pod can\u0027t start up due to no master node in hypershift cluster\n2089675 - Could not move Serverless Service without Revision (or while starting?)\n2089681 - [Hypershift] EgressIP doesn\u0027t work in hypershift guest cluster\n2089682 - Installer expects all nutanix subnets to have a cluster reference which is not the case for e.g. 
overlay networks\n2089687 - alert message of MCDDrainError needs to be updated for new drain controller\n2089696 - CR reconciliation is stuck in daemonset lifecycle\n2089716 - [4.11][reliability]one worker node became NotReady on which ovnkube-node pod\u0027s memory increased sharply\n2089719 - acm-simple-kmod fails to build\n2089720 - [Hypershift] ICSP doesn\u0027t work for the guest cluster\n2089743 - acm-ice fails to deploy: helm chart does not appear to be a gzipped archive\n2089773 - Pipeline status filter and status colors doesn\u0027t work correctly with non-english languages\n2089775 - keepalived can keep ingress VIP on wrong node under certain circumstances\n2089805 - Config duration metrics aren\u0027t exposed\n2089827 - MetalLB CI - backward compatible tests are failing due to the order of delete\n2089909 - PTP e2e testing not working on SNO cluster\n2089918 - oc-mirror skip-missing still returns 404 errors when images do not exist\n2089930 - Bump OVN to 22.06\n2089933 - Pods do not post readiness status on termination\n2089968 - Multus CNI daemonset should use hostPath mounts with type: directory\n2089973 - bump libs to k8s 1.24 for OCP 4.11\n2089996 - Unnecessary yarn install runs in e2e tests\n2090017 - Enable source containers to meet open source requirements\n2090049 - destroying GCP cluster which has a compute node without infra id in name would fail to delete 2 k8s firewall-rules and VPC network\n2090092 - Will hit error if specify the channel not the latest\n2090151 - [RHEL scale up] increase the wait time so that the node has enough time to get ready\n2090178 - VM SSH command generated by UI points at api VIP\n2090182 - [Nutanix]Create a machineset with invalid image, machine stuck in \"Provisioning\" phase\n2090236 - Only reconcile annotations and status for clusters\n2090266 - oc adm release extract is failing on mutli arch image\n2090268 - [AWS EFS] Operator not getting installed successfully on Hypershift Guest cluster\n2090336 - Multus 
logging should be disabled prior to release\n2090343 - Multus debug logging should be enabled temporarily for debugging podsandbox creation failures. \n2090358 - Initiating drain log message is displayed before the drain actually starts\n2090359 - Nutanix mapi-controller: misleading error message when the failure is caused by wrong credentials\n2090405 - [tracker] weird port mapping with asymmetric traffic [rhel-8.6.0.z]\n2090430 - gofmt code\n2090436 - It takes 30min-60min to update the machine count in custom MachineConfigPools (MCPs) when a node is removed from the pool\n2090437 - Bump CNO to k8s 1.24\n2090465 - golang version mismatch\n2090487 - Change default SNO Networking Type and disallow OpenShiftSDN a supported networking Type\n2090537 - failure in ovndb migration when db is not ready in HA mode\n2090549 - dpu-network-operator shall be able to run on amd64 arch platform\n2090621 - Metal3 plugin does not work properly with updated NodeMaintenance CRD\n2090627 - Git commit and branch are empty in MetalLB log\n2090692 - Bump to latest 1.24 k8s release\n2090730 - must-gather should include multus logs. 
\n2090731 - nmstate deploys two instances of webhook on a single-node cluster\n2090751 - oc image mirror skip-missing flag does not skip images\n2090755 - MetalLB: BGPAdvertisement validation allows duplicate entries for ip pool selector, ip address pools, node selector and bgp peers\n2090774 - Add Readme to plugin directory\n2090794 - MachineConfigPool cannot apply a configuration after fixing the pods that caused a drain alert\n2090809 - gm.ClockClass invalid syntax parse error in linux ptp daemon logs\n2090816 - OCP 4.8 Baremetal IPI installation failure: \"Bootstrap failed to complete: timed out waiting for the condition\"\n2090819 - oc-mirror does not catch invalid registry input when a namespace is specified\n2090827 - Rebase CoreDNS to 1.9.2 and k8s 1.24\n2090829 - Bump OpenShift router to k8s 1.24\n2090838 - Flaky test: ignore flapping host interface \u0027tunbr\u0027\n2090843 - addLogicalPort() performance/scale optimizations\n2090895 - Dynamic plugin nav extension \"startsWith\" property does not work\n2090929 - [etcd] cluster-backup.sh script has a conflict to use the \u0027/etc/kubernetes/static-pod-certs\u0027 folder if a custom API certificate is defined\n2090993 - [AI Day2] Worker node overview page crashes in Openshift console with TypeError\n2091029 - Cancel rollout action only appears when rollout is completed\n2091030 - Some BM may fail booting with default bootMode strategy\n2091033 - [Descheduler]: provide ability to override included/excluded namespaces\n2091087 - ODC Helm backend Owners file needs updates\n2091106 - Dependabot alert: Unhandled exception in gopkg.in/yaml.v3\n2091142 - Dependabot alert: Unhandled exception in gopkg.in/yaml.v3\n2091167 - IPsec runtime enabling not work in hypershift\n2091218 - Update Dev Console Helm backend to use helm 3.9.0\n2091433 - Update AWS instance types\n2091542 - Error Loading/404 not found page shown after clicking \"Current namespace only\"\n2091547 - Internet connection test with proxy permanently 
fails\n2091567 - oVirt CSI driver should use latest go-ovirt-client\n2091595 - Alertmanager configuration can\u0027t use OpsGenie\u0027s entity field when AlertmanagerConfig is enabled\n2091599 - PTP Dual Nic | Extend Events 4.11 - Up/Down master interface affects all the other interface in the same NIC accoording the events and metric\n2091603 - WebSocket connection restarts when switching tabs in WebTerminal\n2091613 - simple-kmod fails to build due to missing KVC\n2091634 - OVS 2.15 stops handling traffic once ovs-dpctl(2.17.2) is used against it\n2091730 - MCO e2e tests are failing with \"No token found in openshift-monitoring secrets\"\n2091746 - \"Oh no! Something went wrong\" shown after user creates MCP without \u0027spec\u0027\n2091770 - CVO gets stuck downloading an upgrade, with the version pod complaining about invalid options\n2091854 - clusteroperator status filter doesn\u0027t match all values in Status column\n2091901 - Log stream paused right after updating log lines in Web Console in OCP4.10\n2091902 - unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server has received too many requests and has asked us to try again later\n2091990 - wrong external-ids for ovn-controller lflow-cache-limit-kb\n2092003 - PR 3162 | BZ 2084450 - invalid URL schema for AWS causes tests to perma fail and break the cloud-network-config-controller\n2092041 - Bump cluster-dns-operator to k8s 1.24\n2092042 - Bump cluster-ingress-operator to k8s 1.24\n2092047 - Kube 1.24 rebase for cloud-network-config-controller\n2092137 - Search doesn\u0027t show all entries when name filter is cleared\n2092296 - Change Default MachineCIDR of Power VS Platform from 10.x to 192.168.0.0/16\n2092390 - [RDR] [UI] Multiple instances of Object Bucket, Object Bucket Claims and \u0027Overview\u0027 tab is present under Storage section on the Hub cluster when navigated back from the Managed cluster using the Hybrid console dropdown\n2092395 - 
etcdHighNumberOfFailedGRPCRequests alerts with wrong results\n2092408 - Wrong icon is used in the virtualization overview permissions card\n2092414 - In virtualization overview \"running vm per templates\" template list can be improved\n2092442 - Minimum time between drain retries is not the expected one\n2092464 - marketplace catalog defaults to v4.10\n2092473 - libovsdb performance backports\n2092495 - ovn: use up to 4 northd threads in non-SNO clusters\n2092502 - [azure-file-csi-driver] Stop shipping a NFS StorageClass\n2092509 - Invalid memory address error if non existing caBundle is configured in DNS-over-TLS using ForwardPlugins\n2092572 - acm-simple-kmod chart should create the namespace on the spoke cluster\n2092579 - Don\u0027t retry pod deletion if objects are not existing\n2092650 - [BM IPI with Provisioning Network] Worker nodes are not provisioned: ironic-agent is stuck before writing into disks\n2092703 - Incorrect mount propagation information in container status\n2092815 - can\u0027t delete the unwanted image from registry by oc-mirror\n2092851 - [Descheduler]: allow to customize the LowNodeUtilization strategy thresholds\n2092867 - make repository name unique in acm-ice/acm-simple-kmod examples\n2092880 - etcdHighNumberOfLeaderChanges returns incorrect number of leadership changes\n2092887 - oc-mirror list releases command uses filter-options flag instead of filter-by-os\n2092889 - Incorrect updating of EgressACLs using direction \"from-lport\"\n2092918 - CVE-2022-30321 go-getter: unsafe download (issue 1 of 3)\n2092923 - CVE-2022-30322 go-getter: unsafe download (issue 2 of 3)\n2092925 - CVE-2022-30323 go-getter: unsafe download (issue 3 of 3)\n2092928 - CVE-2022-26945 go-getter: command injection vulnerability\n2092937 - WebScale: OVN-k8s forwarding to external-gw over the secondary interfaces failing\n2092966 - [OCP 4.11] [azure] /etc/udev/rules.d/66-azure-storage.rules missing from initramfs\n2093044 - Azure machine-api-provider-azure 
Availability Set Name Length Limit\n2093047 - Dynamic Plugins: Generated API markdown duplicates `checkAccess` and `useAccessReview` doc\n2093126 - [4.11] Bootimage bump tracker\n2093236 - DNS operator stopped reconciling after 4.10 to 4.11 upgrade | 4.11 nightly to 4.11 nightly upgrade\n2093288 - Default catalogs fails liveness/readiness probes\n2093357 - Upgrading sno spoke with acm-ice, causes the sno to get unreachable\n2093368 - Installer orphans FIPs created for LoadBalancer Services on `cluster destroy`\n2093396 - Remove node-tainting for too-small MTU\n2093445 - ManagementState reconciliation breaks SR\n2093454 - Router proxy protocol doesn\u0027t work with dual-stack (IPv4 and IPv6) clusters\n2093462 - Ingress Operator isn\u0027t reconciling the ingress cluster operator object\n2093586 - Topology: Ctrl+space opens the quick search modal, but doesn\u0027t close it again\n2093593 - Import from Devfile shows configuration options that shoudn\u0027t be there\n2093597 - Import: Advanced option sentence is splited into two parts and headlines has no padding\n2093600 - Project access tab should apply new permissions before it delete old ones\n2093601 - Project access page doesn\u0027t allow the user to update the settings twice (without manually reload the content)\n2093783 - Should bump cluster-kube-descheduler-operator to kubernetes version V1.24\n2093797 - \u0027oc registry login\u0027 with serviceaccount function need update\n2093819 - An etcd member for a new machine was never added to the cluster\n2093930 - Gather console helm install totals metric\n2093957 - Oc-mirror write dup metadata to registry backend\n2093986 - Podsecurity violation error getting logged for pod-identity-webhook\n2093992 - Cluster version operator acknowledges upgrade failing on periodic-ci-openshift-release-master-nightly-4.11-e2e-metal-ipi-upgrade-ovn-ipv6\n2094023 - Add Git Flow - Template Labels for Deployment show as DeploymentConfig\n2094024 - bump oauth-apiserver deps to 
include 1.23.1 k8s that fixes etcd blips\n2094039 - egressIP panics with nil pointer dereference\n2094055 - Bump coreos-installer for s390x Secure Execution\n2094071 - No runbook created for SouthboundStale alert\n2094088 - Columns in NBDB may never be updated by OVNK\n2094104 - Demo dynamic plugin image tests should be skipped when testing console-operator\n2094152 - Alerts in the virtualization overview status card aren\u0027t filtered\n2094196 - Add default and validating webhooks for Power VS MAPI\n2094227 - Topology: Create Service Binding should not be the last option (even under delete)\n2094239 - custom pool Nodes with 0 nodes are always populated in progress bar\n2094303 - If og is configured with sa, operator installation will be failed. \n2094335 - [Nutanix] - debug logs are enabled by default in machine-controller\n2094342 - apirequests limits of Cluster CAPI Operator are too low for Azure platform\n2094438 - Make AWS URL parsing more lenient for GetNodeEgressIPConfiguration\n2094525 - Allow automatic upgrades for efs operator\n2094532 - ovn-windows CI jobs are broken\n2094675 - PTP Dual Nic | Extend Events 4.11 - when kill the phc2sys We have notification for the ptp4l physical master moved to free run\n2094694 - [Nutanix] No cluster name sanity validation - cluster name with a dot (\".\") character\n2094704 - Verbose log activated on kube-rbac-proxy in deployment prometheus-k8s\n2094801 - Kuryr controller keep restarting when handling IPs with leading zeros\n2094806 - Machine API oVrit component should use K8s 1.24 dependencies\n2094816 - Kuryr controller restarts when over quota\n2094833 - Repository overview page does not show default PipelineRun template for developer user\n2094857 - CloudShellTerminal loops indefinitely if DevWorkspace CR goes into failed state\n2094864 - Rebase CAPG to latest changes\n2094866 - oc-mirror does not always delete all manifests associated with an image during pruning\n2094896 - Run \u0027openshift-install agent 
create image\u0027 has segfault exception if cluster-manifests directory missing\n2094902 - Fix installer cross-compiling\n2094932 - MGMT-10403 Ingress should enable single-node cluster expansion on upgraded clusters\n2095049 - managed-csi StorageClass does not create PVs\n2095071 - Backend tests fails after devfile registry update\n2095083 - Observe \u003e Dashboards: Graphs may change a lot on automatic refresh\n2095110 - [ovn] northd container termination script must use bash\n2095113 - [ovnkube] bump to openvswitch2.17-2.17.0-22.el8fdp\n2095226 - Added changes to verify cloud connection and dhcpservices quota of a powervs instance\n2095229 - ingress-operator pod in CrashLoopBackOff in 4.11 after upgrade starting in 4.6 due to go panic\n2095231 - Kafka Sink sidebar in topology is empty\n2095247 - Event sink form doesn\u0027t show channel as sink until app is refreshed\n2095248 - [vSphere-CSI-Driver] does not report volume count limits correctly caused pod with multi volumes maybe schedule to not satisfied volume count node\n2095256 - Samples Owner needs to be Updated\n2095264 - ovs-configuration.service fails with Error: Failed to modify connection \u0027ovs-if-br-ex\u0027: failed to update connection: error writing to file \u0027/etc/NetworkManager/systemConnectionsMerged/ovs-if-br-ex.nmconnection\u0027\n2095362 - oVirt CSI driver operator should use latest go-ovirt-client\n2095574 - e2e-agnostic CI job fails\n2095687 - Debug Container shown for build logs and on click ui breaks\n2095703 - machinedeletionhooks doesn\u0027t work in vsphere cluster and BM cluster\n2095716 - New PSA component for Pod Security Standards enforcement is refusing openshift-operators ns\n2095756 - CNO panics with concurrent map read/write\n2095772 - Memory requests for ovnkube-master containers are over-sized\n2095917 - Nutanix set osDisk with diskSizeGB rather than diskSizeMiB\n2095941 - DNS Traffic not kept local to zone or node when Calico SDN utilized\n2096053 - Builder Image icons 
in Git Import flow are hard to see in Dark mode\n2096226 - crio fails to bind to tentative IP, causing service failure since RHOCS was rebased on RHEL 8.6\n2096315 - NodeClockNotSynchronising alert\u0027s severity should be critical\n2096350 - Web console doesn\u0027t display webhook errors for upgrades\n2096352 - Collect whole journal in gather\n2096380 - acm-simple-kmod references deprecated KVC example\n2096392 - Topology node icons are not properly visible in Dark mode\n2096394 - Add page Card items background color does not match with column background color in Dark mode\n2096413 - br-ex not created due to default bond interface having a different mac address than expected\n2096496 - FIPS issue on OCP SNO with RT Kernel via performance profile\n2096605 - [vsphere] no validation checking for diskType\n2096691 - [Alibaba 4.11] Specifying ResourceGroup id in install-config.yaml, New pv are still getting created to default ResourceGroups\n2096855 - `oc adm release new` failed with error when use an existing multi-arch release image as input\n2096905 - Openshift installer should not use the prism client embedded in nutanix terraform provider\n2096908 - Dark theme issue in pipeline builder, Helm rollback form, and Git import\n2097000 - KafkaConnections disappear from Topology after creating KafkaSink in Topology\n2097043 - No clean way to specify operand issues to KEDA OLM operator\n2097047 - MetalLB: matchExpressions used in CR like L2Advertisement, BGPAdvertisement, BGPPeers allow duplicate entries\n2097067 - ClusterVersion history pruner does not always retain initial completed update entry\n2097153 - poor performance on API call to vCenter ListTags with thousands of tags\n2097186 - PSa autolabeling in 4.11 env upgraded from 4.10 does not work due to missing RBAC objects\n2097239 - Change Lower CPU limits for Power VS cloud\n2097246 - Kuryr: verify and unit jobs failing due to upstream OpenStack dropping py36 support\n2097260 - openshift-install create manifests 
failed for Power VS platform\n2097276 - MetalLB CI deploys the operator via manifests and not using the csv\n2097282 - chore: update external-provisioner to the latest upstream release\n2097283 - chore: update external-snapshotter to the latest upstream release\n2097284 - chore: update external-attacher to the latest upstream release\n2097286 - chore: update node-driver-registrar to the latest upstream release\n2097334 - oc plugin help shows \u0027kubectl\u0027\n2097346 - Monitoring must-gather doesn\u0027t seem to be working anymore in 4.11\n2097400 - Shared Resource CSI Driver needs additional permissions for validation webhook\n2097454 - Placeholder bug for OCP 4.11.0 metadata release\n2097503 - chore: rebase against latest external-resizer\n2097555 - IngressControllersNotUpgradeable: load balancer service has been modified; changes must be reverted before upgrading\n2097607 - Add Power VS support to Webhooks tests in actuator e2e test\n2097685 - Ironic-agent can\u0027t restart because of existing container\n2097716 - settings under httpConfig is dropped with AlertmanagerConfig v1beta1\n2097810 - Required Network tools missing for Testing e2e PTP\n2097832 - clean up unused IPv6DualStackNoUpgrade feature gate\n2097940 - openshift-install destroy cluster traps if vpcRegion not specified\n2097954 - 4.11 installation failed at monitoring and network clusteroperators with error \"conmon: option parsing failed: Unknown option --log-global-size-max\" making all jobs failing\n2098172 - oc-mirror does not validatethe registry in the storage config\n2098175 - invalid license in python-dataclasses-0.8-2.el8 spec\n2098177 - python-pint-0.10.1-2.el8 has unused Patch0 in spec file\n2098242 - typo in SRO specialresourcemodule\n2098243 - Add error check to Platform create for Power VS\n2098392 - [OCP 4.11] Ironic cannot match \"wwn\" rootDeviceHint for a multipath device\n2098508 - Control-plane-machine-set-operator report panic\n2098610 - No need to check the push permission 
with ?manifests-only option\n2099293 - oVirt cluster API provider should use latest go-ovirt-client\n2099330 - Edit application grouping is shown to user with view only access in a cluster\n2099340 - CAPI e2e tests for AWS are missing\n2099357 - ovn-kubernetes needs explicit RBAC coordination leases for 1.24 bump\n2099358 - Dark mode+Topology update: Unexpected selected+hover border and background colors for app groups\n2099528 - Layout issue: No spacing in delete modals\n2099561 - Prometheus returns HTTP 500 error on /favicon.ico\n2099582 - Format and update Repository overview content\n2099611 - Failures on etcd-operator watch channels\n2099637 - Should print error when use --keep-manifest-list\\xfalse for manifestlist image\n2099654 - Topology performance: Endless rerender loop when showing a Http EventSink (KameletBinding)\n2099668 - KubeControllerManager should degrade when GC stops working\n2099695 - Update CAPG after rebase\n2099751 - specialresourcemodule stacktrace while looping over build status\n2099755 - EgressIP node\u0027s mgmtIP reachability configuration option\n2099763 - Update icons for event sources and sinks in topology, Add page, and context menu\n2099811 - UDP Packet loss in OpenShift using IPv6 [upcall]\n2099821 - exporting a pointer for the loop variable\n2099875 - The speaker won\u0027t start if there\u0027s another component on the host listening on 8080\n2099899 - oc-mirror looks for layers in the wrong repository when searching for release images during publishing\n2099928 - [FJ OCP4.11 Bug]: Add unit tests to image_customization_test file\n2099968 - [Azure-File-CSI] failed to provisioning volume in ARO cluster\n2100001 - Sync upstream v1.22.0 downstream\n2100007 - Run bundle-upgrade failed from the traditional File-Based Catalog installed operator\n2100033 - OCP 4.11 IPI - Some csr remain \"Pending\" post deployment\n2100038 - failure to update special-resource-lifecycle table during update Event\n2100079 - SDN needs explicit RBAC 
coordination leases for 1.24 bump\n2100138 - release info --bugs has no differentiator between Jira and Bugzilla\n2100155 - kube-apiserver-operator should raise an alert when there is a Pod Security admission violation\n2100159 - Dark theme: Build icon for pending status is not inverted in topology sidebar\n2100323 - Sqlit-based catsrc cannot be ready due to \"Error: open ./db-xxxx: permission denied\"\n2100347 - KASO retains old config values when switching from Medium/Default to empty worker latency profile\n2100356 - Remove Condition tab and create option from console as it is deprecated in OSP-1.8\n2100439 - [gce-pd] GCE PD in-tree storage plugin tests not running\n2100496 - [OCPonRHV]-oVirt API returns affinity groups without a description field\n2100507 - Remove redundant log lines from obj_retry.go\n2100536 - Update API to allow EgressIP node reachability check\n2100601 - Update CNO to allow EgressIP node reachability check\n2100643 - [Migration] [GCP]OVN can not rollback to SDN\n2100644 - openshift-ansible FTBFS on RHEL8\n2100669 - Telemetry should not log the full path if it contains a username\n2100749 - [OCP 4.11] multipath support needs multipath modules\n2100825 - Update machine-api-powervs go modules to latest version\n2100841 - tiny openshift-install usability fix for setting KUBECONFIG\n2101460 - An etcd member for a new machine was never added to the cluster\n2101498 - Revert Bug 2082599: add upper bound to number of failed attempts\n2102086 - The base image is still 4.10 for operator-sdk 1.22\n2102302 - Dummy bug for 4.10 backports\n2102362 - Valid regions should be allowed in GCP install config\n2102500 - Kubernetes NMState pods can not evict due to PDB on an SNO cluster\n2102639 - Drain happens before other image-registry pod is ready to service requests, causing disruption\n2102782 - topolvm-controller get into CrashLoopBackOff few minutes after install\n2102834 - [cloud-credential-operator]container has runAsNonRoot and image will run as 
root\n2102947 - [VPA] recommender is logging errors for pods with init containers\n2103053 - [4.11] Backport Prow CI improvements from master\n2103075 - Listing secrets in all namespaces with a specific labelSelector does not work properly\n2103080 - br-ex not created due to default bond interface having a different mac address than expected\n2103177 - disabling ipv6 router advertisements using \"all\" does not disable it on secondary interfaces\n2103728 - Carry HAProxy patch \u0027BUG/MEDIUM: h2: match absolute-path not path-absolute for :path\u0027\n2103749 - MachineConfigPool is not getting updated\n2104282 - heterogeneous arch: oc adm extract encodes arch specific release payload pullspec rather than the manifestlisted pullspec\n2104432 - [dpu-network-operator] Updating images to be consistent with ART\n2104552 - kube-controller-manager operator 4.11.0-rc.0 degraded on disabled monitoring stack\n2104561 - 4.10 to 4.11 update: Degraded node: unexpected on-disk state: mode mismatch for file: \"/etc/crio/crio.conf.d/01-ctrcfg-pidsLimit\"; expected: -rw-r--r--/420/0644; received: ----------/0/0\n2104589 - must-gather namespace should have ?privileged? 
warn and audit pod security labels besides enforce\n2104701 - In CI 4.10 HAProxy must-gather takes longer than 10 minutes\n2104717 - NetworkPolicies: ovnkube-master pods crashing due to panic: \"invalid memory address or nil pointer dereference\"\n2104727 - Bootstrap node should honor http proxy\n2104906 - Uninstall fails with Observed a panic: runtime.boundsError\n2104951 - Web console doesn\u0027t display webhook errors for upgrades\n2104991 - Completed pods may not be correctly cleaned up\n2105101 - NodeIP is used instead of EgressIP if egressPod is recreated within 60 seconds\n2105106 - co/node-tuning: Waiting for 15/72 Profiles to be applied\n2105146 - Degraded=True noise with: UpgradeBackupControllerDegraded: unable to retrieve cluster version, no completed update was found in cluster version status history\n2105167 - BuildConfig throws error when using a label with a / in it\n2105334 - vmware-vsphere-csi-driver-controller can\u0027t use host port error on e2e-vsphere-serial\n2105382 - Add a validation webhook for Nutanix machine provider spec in Machine API Operator\n2105468 - The ccoctl does not seem to know how to leverage the VMs service account to talk to GCP APIs. \n2105937 - telemeter golangci-lint outdated blocking ART PRs that update to Go1.18\n2106051 - Unable to deploy acm-ice using latest SRO 4.11 build\n2106058 - vSphere defaults to SecureBoot on; breaks installation of out-of-tree drivers [4.11.0]\n2106062 - [4.11] Bootimage bump tracker\n2106116 - IngressController spec.tuningOptions.healthCheckInterval validation allows invalid values such as \"0abc\"\n2106163 - Samples ImageStreams vs. registry.redhat.io: unsupported: V2 schema 1 manifest digests are no longer supported for image pulls\n2106313 - bond-cni: backport bond-cni GA items to 4.11\n2106543 - Typo in must-gather release-4.10\n2106594 - crud/other-routes.spec.ts Cypress test failing at a high rate in CI\n2106723 - [4.11] Upgrade from 4.11.0-rc0 -\u003e 4.11.0-rc.1 failed. 
rpm-ostree status shows No space left on device\n2106855 - [4.11.z] externalTrafficPolicy=Local is not working in local gateway mode if ovnkube-node is restarted\n2107493 - ReplicaSet prometheus-operator-admission-webhook has timed out progressing\n2107501 - metallb greenwave tests failure\n2107690 - Driver Container builds fail with \"error determining starting point for build: no FROM statement found\"\n2108175 - etcd backup seems to not be triggered in 4.10.18--\u003e4.10.20 upgrade\n2108617 - [oc adm release] extraction of the installer against a manifestlisted payload referenced by tag leads to a bad release image reference\n2108686 - rpm-ostreed: start limit hit easily\n2110505 - [Upgrade]deployment openshift-machine-api/machine-api-operator has a replica failure FailedCreate\n2110715 - openshift-controller-manager(-operator) namespace should clear run-level annotations\n2111055 - dummy bug for 4.10.z bz2110938\n\n5. References:\n\nhttps://access.redhat.com/security/cve/CVE-2018-25009\nhttps://access.redhat.com/security/cve/CVE-2018-25010\nhttps://access.redhat.com/security/cve/CVE-2018-25012\nhttps://access.redhat.com/security/cve/CVE-2018-25013\nhttps://access.redhat.com/security/cve/CVE-2018-25014\nhttps://access.redhat.com/security/cve/CVE-2018-25032\nhttps://access.redhat.com/security/cve/CVE-2019-5827\nhttps://access.redhat.com/security/cve/CVE-2019-13750\nhttps://access.redhat.com/security/cve/CVE-2019-13751\nhttps://access.redhat.com/security/cve/CVE-2019-17594\nhttps://access.redhat.com/security/cve/CVE-2019-17595\nhttps://access.redhat.com/security/cve/CVE-2019-18218\nhttps://access.redhat.com/security/cve/CVE-2019-19603\nhttps://access.redhat.com/security/cve/CVE-2019-20838\nhttps://access.redhat.com/security/cve/CVE-2020-13435\nhttps://access.redhat.com/security/cve/CVE-2020-14155\nhttps://access.redhat.com/security/cve/CVE-2020-17541\nhttps://access.redhat.com/security/cve/CVE-2020-19131\nhttps://access.redhat.com/security/cve/CVE-2020-24370\nhttp
s://access.redhat.com/security/cve/CVE-2020-28493\nhttps://access.redhat.com/security/cve/CVE-2020-35492\nhttps://access.redhat.com/security/cve/CVE-2020-36330\nhttps://access.redhat.com/security/cve/CVE-2020-36331\nhttps://access.redhat.com/security/cve/CVE-2020-36332\nhttps://access.redhat.com/security/cve/CVE-2021-3481\nhttps://access.redhat.com/security/cve/CVE-2021-3580\nhttps://access.redhat.com/security/cve/CVE-2021-3634\nhttps://access.redhat.com/security/cve/CVE-2021-3672\nhttps://access.redhat.com/security/cve/CVE-2021-3695\nhttps://access.redhat.com/security/cve/CVE-2021-3696\nhttps://access.redhat.com/security/cve/CVE-2021-3697\nhttps://access.redhat.com/security/cve/CVE-2021-3737\nhttps://access.redhat.com/security/cve/CVE-2021-4115\nhttps://access.redhat.com/security/cve/CVE-2021-4156\nhttps://access.redhat.com/security/cve/CVE-2021-4189\nhttps://access.redhat.com/security/cve/CVE-2021-20095\nhttps://access.redhat.com/security/cve/CVE-2021-20231\nhttps://access.redhat.com/security/cve/CVE-2021-20232\nhttps://access.redhat.com/security/cve/CVE-2021-23177\nhttps://access.redhat.com/security/cve/CVE-2021-23566\nhttps://access.redhat.com/security/cve/CVE-2021-23648\nhttps://access.redhat.com/security/cve/CVE-2021-25219\nhttps://access.redhat.com/security/cve/CVE-2021-31535\nhttps://access.redhat.com/security/cve/CVE-2021-31566\nhttps://access.redhat.com/security/cve/CVE-2021-36084\nhttps://access.redhat.com/security/cve/CVE-2021-36085\nhttps://access.redhat.com/security/cve/CVE-2021-36086\nhttps://access.redhat.com/security/cve/CVE-2021-36087\nhttps://access.redhat.com/security/cve/CVE-2021-38185\nhttps://access.redhat.com/security/cve/CVE-2021-38593\nhttps://access.redhat.com/security/cve/CVE-2021-40528\nhttps://access.redhat.com/security/cve/CVE-2021-41190\nhttps://access.redhat.com/security/cve/CVE-2021-41617\nhttps://access.redhat.com/security/cve/CVE-2021-42771\nhttps://access.redhat.com/security/cve/CVE-2021-43527\nhttps://access.redhat.com/security/
cve/CVE-2021-43818\nhttps://access.redhat.com/security/cve/CVE-2021-44225\nhttps://access.redhat.com/security/cve/CVE-2021-44906\nhttps://access.redhat.com/security/cve/CVE-2022-0235\nhttps://access.redhat.com/security/cve/CVE-2022-0778\nhttps://access.redhat.com/security/cve/CVE-2022-1012\nhttps://access.redhat.com/security/cve/CVE-2022-1215\nhttps://access.redhat.com/security/cve/CVE-2022-1271\nhttps://access.redhat.com/security/cve/CVE-2022-1292\nhttps://access.redhat.com/security/cve/CVE-2022-1586\nhttps://access.redhat.com/security/cve/CVE-2022-1621\nhttps://access.redhat.com/security/cve/CVE-2022-1629\nhttps://access.redhat.com/security/cve/CVE-2022-1706\nhttps://access.redhat.com/security/cve/CVE-2022-1729\nhttps://access.redhat.com/security/cve/CVE-2022-2068\nhttps://access.redhat.com/security/cve/CVE-2022-2097\nhttps://access.redhat.com/security/cve/CVE-2022-21698\nhttps://access.redhat.com/security/cve/CVE-2022-22576\nhttps://access.redhat.com/security/cve/CVE-2022-23772\nhttps://access.redhat.com/security/cve/CVE-2022-23773\nhttps://access.redhat.com/security/cve/CVE-2022-23806\nhttps://access.redhat.com/security/cve/CVE-2022-24407\nhttps://access.redhat.com/security/cve/CVE-2022-24675\nhttps://access.redhat.com/security/cve/CVE-2022-24903\nhttps://access.redhat.com/security/cve/CVE-2022-24921\nhttps://access.redhat.com/security/cve/CVE-2022-25313\nhttps://access.redhat.com/security/cve/CVE-2022-25314\nhttps://access.redhat.com/security/cve/CVE-2022-26691\nhttps://access.redhat.com/security/cve/CVE-2022-26945\nhttps://access.redhat.com/security/cve/CVE-2022-27191\nhttps://access.redhat.com/security/cve/CVE-2022-27774\nhttps://access.redhat.com/security/cve/CVE-2022-27776\nhttps://access.redhat.com/security/cve/CVE-2022-27782\nhttps://access.redhat.com/security/cve/CVE-2022-28327\nhttps://access.redhat.com/security/cve/CVE-2022-28733\nhttps://access.redhat.com/security/cve/CVE-2022-28734\nhttps://access.redhat.com/security/cve/CVE-2022-28735\nhttps://acces
s.redhat.com/security/cve/CVE-2022-28736\nhttps://access.redhat.com/security/cve/CVE-2022-28737\nhttps://access.redhat.com/security/cve/CVE-2022-29162\nhttps://access.redhat.com/security/cve/CVE-2022-29810\nhttps://access.redhat.com/security/cve/CVE-2022-29824\nhttps://access.redhat.com/security/cve/CVE-2022-30321\nhttps://access.redhat.com/security/cve/CVE-2022-30322\nhttps://access.redhat.com/security/cve/CVE-2022-30323\nhttps://access.redhat.com/security/cve/CVE-2022-32250\nhttps://access.redhat.com/security/updates/classification/#important\n\n6. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYvOfk9zjgjWX9erEAQhJ/w//UlbBGKBBFBAyfEmQf9Zu0yyv6MfZW0Zl\niO1qXVIl9UQUFjTY5ejerx7cP8EBWLhKaiiqRRjbjtj+w+ENGB4LLj6TEUrSM5oA\nYEmhnX3M+GUKF7Px61J7rZfltIOGhYBvJ+qNZL2jvqz1NciVgI4/71cZWnvDbGpa\n02w3Dn0JzhTSR9znNs9LKcV/anttJ3NtOYhqMXnN8EpKdtzQkKRazc7xkOTxfxyl\njRiER2Z0TzKDE6dMoVijS2Sv5j/JF0LRwetkZl6+oh8ehKh5GRV3lPg3eVkhzDEo\n/gp0P9GdLMHi6cS6uqcREbod//waSAa7cssgULoycFwjzbDK3L2c+wMuWQIgXJca\nRYuP6wvrdGwiI1mgUi/226EzcZYeTeoKxnHkp7AsN9l96pJYafj0fnK1p9NM/8g3\njBE/W4K8jdDNVd5l1Z5O0Nyxk6g4P8MKMe10/w/HDXFPSgufiCYIGX4TKqb+ESIR\nSuYlSMjoGsB4mv1KMDEUJX6d8T05lpEwJT0RYNdZOouuObYMtcHLpRQHH9mkj86W\npHdma5aGG/mTMvSMW6l6L05uT41Azm6fVimTv+E5WvViBni2480CVH+9RexKKSyL\nXcJX1gaLdo+72I/gZrtT+XE5tcJ3Sf5fmfsenQeY4KFum/cwzbM6y7RGn47xlEWB\nxBWKPzRxz0Q=9r0B\n-----END PGP SIGNATURE-----\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n", "sources": [ { "db": "NVD", "id": "CVE-2018-25014" }, { "db": "JVNDB", "id": "JVNDB-2018-016583" }, { "db": "VULHUB", "id": "VHN-391906" }, { "db": "VULMON", "id": "CVE-2018-25014" }, { "db": "PACKETSTORM", "id": "163645" }, { "db": "PACKETSTORM", "id": "163028" }, { "db": "PACKETSTORM", "id": "165287" }, { 
"db": "PACKETSTORM", "id": "164842" }, { "db": "PACKETSTORM", "id": "164967" }, { "db": "PACKETSTORM", "id": "168042" } ], "trust": 2.34 }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "NVD", "id": "CVE-2018-25014", "trust": 4.0 }, { "db": "PACKETSTORM", "id": "164842", "trust": 0.8 }, { "db": "PACKETSTORM", "id": "164967", "trust": 0.8 }, { "db": "PACKETSTORM", "id": "163028", "trust": 0.8 }, { "db": "PACKETSTORM", "id": "168042", "trust": 0.8 }, { "db": "JVNDB", "id": "JVNDB-2018-016583", "trust": 0.8 }, { "db": "PACKETSTORM", "id": "165631", "trust": 0.7 }, { "db": "PACKETSTORM", "id": "165286", "trust": 0.7 }, { "db": "CNNVD", "id": "CNNVD-202105-1379", "trust": 0.7 }, { "db": "PACKETSTORM", "id": "163645", "trust": 0.7 }, { "db": "AUSCERT", "id": "ESB-2021.2036", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.0245", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.2485.2", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.1880", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.2102", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.1965", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.3905", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.3977", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.4254", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.3789", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.4229", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2021072216", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2021060725", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2021061420", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2021060939", "trust": 0.6 }, { "db": "PACKETSTORM", "id": "165287", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "165288", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "165296", "trust": 0.1 }, { "db": "VULHUB", "id": 
"VHN-391906", "trust": 0.1 }, { "db": "VULMON", "id": "CVE-2018-25014", "trust": 0.1 } ], "sources": [ { "db": "VULHUB", "id": "VHN-391906" }, { "db": "VULMON", "id": "CVE-2018-25014" }, { "db": "JVNDB", "id": "JVNDB-2018-016583" }, { "db": "PACKETSTORM", "id": "163645" }, { "db": "PACKETSTORM", "id": "163028" }, { "db": "PACKETSTORM", "id": "165287" }, { "db": "PACKETSTORM", "id": "164842" }, { "db": "PACKETSTORM", "id": "164967" }, { "db": "PACKETSTORM", "id": "168042" }, { "db": "CNNVD", "id": "CNNVD-202105-1379" }, { "db": "NVD", "id": "CVE-2018-25014" } ] }, "id": "VAR-202105-1469", "iot": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": true, "sources": [ { "db": "VULHUB", "id": "VHN-391906" } ], "trust": 0.01 }, "last_update_date": "2024-07-23T20:21:37.217000Z", "patch": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/patch#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "title": "HT212601 Red hat Red\u00a0Hat\u00a0Bugzilla", "trust": 0.8, "url": "https://lists.debian.org/debian-lts-announce/2021/06/msg00005.html" }, { "title": "libwebp Security vulnerabilities", "trust": 0.6, "url": "http://www.cnnvd.org.cn/web/xxk/bdxqbyid.tag?id=151878" }, { "title": "Amazon Linux 2: ALAS2-2021-1679", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=alas2-2021-1679" }, { "title": "Debian Security Advisories: DSA-4930-1 libwebp -- security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=debian_security_advisories\u0026qid=6dad0021173658916444dfc89f8d2495" }, { "title": "Red Hat: Important: OpenShift Container Platform 4.11.0 bug fix and security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20225069 - 
security advisory" }, { "title": "", "trust": 0.1, "url": "https://github.com/live-hack-cve/cve-2018-25014 " } ], "sources": [ { "db": "VULMON", "id": "CVE-2018-25014" }, { "db": "JVNDB", "id": "JVNDB-2018-016583" }, { "db": "CNNVD", "id": "CNNVD-202105-1379" } ] }, "problemtype_data": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "problemtype": "CWE-908", "trust": 1.1 }, { "problemtype": "Use of uninitialized resources (CWE-908) [NVD Evaluation ]", "trust": 0.8 } ], "sources": [ { "db": "VULHUB", "id": "VHN-391906" }, { "db": "JVNDB", "id": "JVNDB-2018-016583" }, { "db": "NVD", "id": "CVE-2018-25014" } ] }, "references": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/references#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 1.8, "url": "https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=9496" }, { "trust": 1.8, "url": "https://bugzilla.redhat.com/show_bug.cgi?id=1956927" }, { "trust": 1.8, "url": "https://chromium.googlesource.com/webm/libwebp/+log/78ad57a36ad69a9c22874b182d49d64125c380f2..907208f97ead639bd52" }, { "trust": 1.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25014" }, { "trust": 0.6, "url": "http://seclists.org/fulldisclosure/2021/jul/54" }, { "trust": 0.6, "url": "https://support.apple.com/kb/ht212601" }, { "trust": 0.6, "url": "https://lists.debian.org/debian-lts-announce/2021/06/msg00005.html" }, { "trust": 0.6, "url": "https://lists.debian.org/debian-lts-announce/2021/06/msg00006.html" }, { "trust": 0.6, "url": "https://www.debian.org/security/2021/dsa-4930" }, { "trust": 0.6, "url": "https://security.netapp.com/advisory/ntap-20211104-0004/" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.0245" }, { "trust": 0.6, "url": 
"https://www.auscert.org.au/bulletins/esb-2022.3977" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/168042/red-hat-security-advisory-2022-5069-01.html" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/163028/red-hat-security-advisory-2021-2328-01.html" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2021060725" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.2485.2" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.1965" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/165286/red-hat-security-advisory-2021-5128-06.html" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2021072216" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.3789" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.3905" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.4229" }, { "trust": 0.6, "url": "https://support.apple.com/en-us/ht212601" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2021060939" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.1880" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2021061420" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/165631/red-hat-security-advisory-2022-0202-04.html" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/163645/apple-security-advisory-2021-07-21-1.html" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.4254" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.2036" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.2102" }, { "trust": 0.6, "url": "https://vigilance.fr/vulnerability/libwebp-six-vulnerabilities-35579" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/164842/red-hat-security-advisory-2021-4231-04.html" }, { "trust": 0.6, "url": 
"https://packetstormsecurity.com/files/164967/red-hat-security-advisory-2021-4627-01.html" }, { "trust": 0.5, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25010" }, { "trust": 0.5, "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2018-25014" }, { "trust": 0.5, "url": "https://bugzilla.redhat.com/):" }, { "trust": 0.5, "url": "https://access.redhat.com/security/team/contact/" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2018-25013" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25012" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25013" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25009" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2018-25012" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2020-36331" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2020-36330" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2020-36332" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2018-25009" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2018-25010" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36331" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36330" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-5827" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2020-13435" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2019-5827" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2020-24370" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2019-13751" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2019-19603" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2019-17594" }, { 
"trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-36086" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13750" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13751" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-36084" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17594" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2020-17541" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-36087" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-31535" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19603" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-18218" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-20232" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2019-20838" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-20231" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2020-14155" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20838" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-36085" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2019-17595" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3481" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2019-13750" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2019-18218" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3580" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17595" }, { "trust": 0.3, "url": "https://access.redhat.com/security/updates/classification/#moderate" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36329" }, { "trust": 0.2, "url": 
"https://nvd.nist.gov/vuln/detail/cve-2020-36328" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25011" }, { "trust": 0.2, "url": "https://access.redhat.com/security/updates/classification/#important" }, { "trust": 0.2, "url": "https://access.redhat.com/articles/11258" }, { "trust": 0.2, "url": "https://access.redhat.com/security/team/key/" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-16135" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3200" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-35522" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-35524" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-27645" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-33574" }, { "trust": 0.2, "url": "https://docs.openshift.com/container-platform/4.7/logging/cluster-logging-upgrading.html" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-43527" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-14145" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14145" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-35521" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-35942" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-24370" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3572" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-12762" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-22898" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-12762" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-16135" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3800" }, { "trust": 0.2, "url": 
"https://access.redhat.com/security/cve/cve-2021-3445" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13435" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-22925" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-20266" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-22876" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-17541" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-33560" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-42574" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14155" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-35523" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-28153" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3426" }, { "trust": 0.2, "url": "https://issues.jboss.org/):" }, { "trust": 0.1, "url": "https://cwe.mitre.org/data/definitions/908.html" }, { "trust": 0.1, "url": "https://github.com/live-hack-cve/cve-2018-25014" }, { "trust": 0.1, "url": "https://nvd.nist.gov" }, { "trust": 0.1, "url": "https://alas.aws.amazon.com/al2/alas-2021-1679.html" }, { "trust": 0.1, "url": "https://support.apple.com/ht212601." 
}, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30768" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30781" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30788" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30773" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30776" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30780" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30759" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30789" }, { "trust": 0.1, "url": "https://www.apple.com/support/security/pgp/" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30786" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30775" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30748" }, { "trust": 0.1, "url": "https://www.apple.com/itunes/" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30779" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30758" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30774" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30763" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30760" }, { "trust": 0.1, "url": "https://support.apple.com/kb/ht201222" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30770" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30769" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30785" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:2328" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-36329" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-36328" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-25011" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/vulnerabilities/rhsb-2021-009" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35524" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35522" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-37136" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-44228" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3712" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35523" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-37137" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-20317" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21409" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-43267" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.8/release_notes/ocp-4-8-release-notes.html" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:5127" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35521" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:4231" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.5_release_notes/" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36332" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23133" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3573" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-26141" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-27777" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-26147" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-14615" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23841" }, { 
"trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-36386" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-29650" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-24587" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-26144" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-29155" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33033" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-20197" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3487" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-0427" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-36312" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-31829" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23840" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-10001" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-31440" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-26145" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3564" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-10001" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-35448" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-20673" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3489" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-20673" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-24503" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-28971" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-26146" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2020-26139" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.9/release_notes/ocp-4-9-release-notes.html" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3679" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-24588" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-36158" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-24504" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33194" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3348" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3778" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-24503" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-20284" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-29646" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-0427" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-14615" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-24502" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-0129" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3635" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-26143" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-29368" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-20194" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3659" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33200" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-29660" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-26140" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2021-3796" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3600" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-24586" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-20239" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-24502" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3732" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-28950" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:4627" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-31916" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-28327" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-44225" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0235" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-32250" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-27776" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1586" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-41617" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-43818" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:5068" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-27774" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-26945" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-4189" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-38593" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-20095" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-24407" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1271" 
}, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1629" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-2097" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3634" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.11/release_notes/ocp-4-11-release-notes.html" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-19131" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3696" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-24921" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-38185" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23648" }, { "trust": 0.1, "url": "https://github.com/util-linux/util-linux/commit/eab90ef8d4f66394285e0cff1dfc0a27242c05aa" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-2068" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-4156" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:5069" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-25313" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-28733" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-27191" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-29162" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25032" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-35492" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3672" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-29824" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-23772" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23177" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2022-1621" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-27782" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3737" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-28736" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-30321" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-42771" }, { "trust": 0.1, "url": "https://10.0.0.7:2379" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-21698" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1292" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-22576" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3697" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1706" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-28734" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.11/updating/updating-cluster-cli.html" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-40528" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-28737" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-30322" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-25219" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-44906" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-31566" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3695" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-25314" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-28735" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1215" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2022-23806" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1729" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-41190" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-29810" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-26691" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-24903" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-4115" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1012" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23566" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-28493" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-25032" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-23773" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-24675" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-30323" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0778" } ], "sources": [ { "db": "VULHUB", "id": "VHN-391906" }, { "db": "VULMON", "id": "CVE-2018-25014" }, { "db": "JVNDB", "id": "JVNDB-2018-016583" }, { "db": "PACKETSTORM", "id": "163645" }, { "db": "PACKETSTORM", "id": "163028" }, { "db": "PACKETSTORM", "id": "165287" }, { "db": "PACKETSTORM", "id": "164842" }, { "db": "PACKETSTORM", "id": "164967" }, { "db": "PACKETSTORM", "id": "168042" }, { "db": "CNNVD", "id": "CNNVD-202105-1379" }, { "db": "NVD", "id": "CVE-2018-25014" } ] }, "sources": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "VULHUB", "id": "VHN-391906" }, { "db": "VULMON", "id": "CVE-2018-25014" }, { "db": "JVNDB", "id": "JVNDB-2018-016583" }, { "db": "PACKETSTORM", "id": "163645" }, { "db": 
"PACKETSTORM", "id": "163028" }, { "db": "PACKETSTORM", "id": "165287" }, { "db": "PACKETSTORM", "id": "164842" }, { "db": "PACKETSTORM", "id": "164967" }, { "db": "PACKETSTORM", "id": "168042" }, { "db": "CNNVD", "id": "CNNVD-202105-1379" }, { "db": "NVD", "id": "CVE-2018-25014" } ] }, "sources_release_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2021-05-21T00:00:00", "db": "VULHUB", "id": "VHN-391906" }, { "date": "2021-05-21T00:00:00", "db": "VULMON", "id": "CVE-2018-25014" }, { "date": "2022-02-02T00:00:00", "db": "JVNDB", "id": "JVNDB-2018-016583" }, { "date": "2021-07-23T15:29:39", "db": "PACKETSTORM", "id": "163645" }, { "date": "2021-06-09T13:21:49", "db": "PACKETSTORM", "id": "163028" }, { "date": "2021-12-15T15:20:43", "db": "PACKETSTORM", "id": "165287" }, { "date": "2021-11-10T17:05:32", "db": "PACKETSTORM", "id": "164842" }, { "date": "2021-11-15T17:25:56", "db": "PACKETSTORM", "id": "164967" }, { "date": "2022-08-10T15:56:22", "db": "PACKETSTORM", "id": "168042" }, { "date": "2021-05-21T00:00:00", "db": "CNNVD", "id": "CNNVD-202105-1379" }, { "date": "2021-05-21T17:15:08.203000", "db": "NVD", "id": "CVE-2018-25014" } ] }, "sources_update_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2023-02-09T00:00:00", "db": "VULHUB", "id": "VHN-391906" }, { "date": "2023-02-09T00:00:00", "db": "VULMON", "id": "CVE-2018-25014" }, { "date": "2022-02-02T01:15:00", "db": "JVNDB", "id": "JVNDB-2018-016583" }, { "date": "2022-08-12T00:00:00", "db": "CNNVD", "id": "CNNVD-202105-1379" }, { "date": "2023-02-09T02:24:26.620000", "db": "NVD", "id": "CVE-2018-25014" } ] }, "threat_type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/threat_type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "remote", 
"sources": [ { "db": "CNNVD", "id": "CNNVD-202105-1379" } ], "trust": 0.6 }, "title": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/title#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "libwebp\u00a0 Vulnerability in using uninitialized resources in", "sources": [ { "db": "JVNDB", "id": "JVNDB-2018-016583" } ], "trust": 0.8 }, "type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "other", "sources": [ { "db": "CNNVD", "id": "CNNVD-202105-1379" } ], "trust": 0.6 } }
var-202103-1463
Vulnerability from variot
The X509_V_FLAG_X509_STRICT flag enables additional security checks of the certificates present in a certificate chain. It is not set by default. Starting from OpenSSL version 1.1.1h a check to disallow certificates in the chain that have explicitly encoded elliptic curve parameters was added as an additional strict check. An error in the implementation of this check meant that the result of a previous check to confirm that certificates in the chain are valid CA certificates was overwritten. This effectively bypasses the check that non-CA certificates must not be able to issue other certificates. If a "purpose" has been configured then there is a subsequent opportunity for checks that the certificate is a valid CA. All of the named "purpose" values implemented in libcrypto perform this check. Therefore, where a purpose is set the certificate chain will still be rejected even when the strict flag has been used. A purpose is set by default in libssl client and server certificate verification routines, but it can be overridden or removed by an application. In order to be affected, an application must explicitly set the X509_V_FLAG_X509_STRICT verification flag and either not set a purpose for the certificate verification or, in the case of TLS client or server applications, override the default purpose. OpenSSL versions 1.1.1h and newer are affected by this issue. Users of these versions should upgrade to OpenSSL 1.1.1k. OpenSSL 1.0.2 is not impacted by this issue. Fixed in OpenSSL 1.1.1k (Affected 1.1.1h-1.1.1j). OpenSSL is an open source general encryption library of the Openssl team that can implement the Secure Sockets Layer (SSLv2/v3) and Transport Layer Security (TLSv1) protocols. The product supports a variety of encryption algorithms, including symmetric ciphers, hash algorithms, secure hash algorithms, etc. On March 25, 2021, the OpenSSL Project released a security advisory, OpenSSL Security Advisory [25 March 2021], that disclosed two vulnerabilities. 
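The opt-in nature of this flag can be illustrated with Python's standard-library ssl module, which wraps OpenSSL: its VERIFY_X509_STRICT constant corresponds to OpenSSL's X509_V_FLAG_X509_STRICT. This is a minimal sketch of the configuration an application would need before the issue could apply, not a reproduction of the bug itself:

```python
import ssl

# create_default_context() sets Purpose.SERVER_AUTH, i.e. the libssl
# "purpose" check described above remains in force by default.
ctx = ssl.create_default_context()

# Explicitly opt in to the additional strict X.509 checks; this maps
# to OpenSSL's X509_V_FLAG_X509_STRICT, the flag at issue here.
ctx.verify_flags |= ssl.VERIFY_X509_STRICT

# An affected application must both set the strict flag AND override
# or remove the purpose; with the default purpose kept, the chain is
# still rejected as the advisory text notes.
print(bool(ctx.verify_flags & ssl.VERIFY_X509_STRICT))
```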
Exploitation of these vulnerabilities could allow a malicious user to use a valid non-certificate authority (CA) certificate to act as a CA and sign a certificate for an arbitrary organization, user or device, or to cause a denial of service (DoS) condition. See the following Release Notes documentation, which will be updated shortly for this release, for additional details about this release:
https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/
Security:
- fastify-reply-from: crafted URL allows prefix scape of the proxied backend service (CVE-2021-21321)
- fastify-http-proxy: crafted URL allows prefix scape of the proxied backend service (CVE-2021-21322)
- nodejs-netmask: improper input validation of octal input data (CVE-2021-28918)
- redis: Integer overflow via STRALGO LCS command (CVE-2021-29477)
- redis: Integer overflow via COPY command for large intsets (CVE-2021-29478)
- nodejs-glob-parent: Regular expression denial of service (CVE-2020-28469)
- nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions (CVE-2020-28500)
- golang.org/x/text: Panic in language.ParseAcceptLanguage while parsing -u- extension (CVE-2020-28851)
- golang.org/x/text: Panic in language.ParseAcceptLanguage while processing bcp47 tag (CVE-2020-28852)
- nodejs-ansi_up: XSS due to insufficient URL sanitization (CVE-2021-3377)
- oras: zip-slip vulnerability via oras-pull (CVE-2021-21272)
- redis: integer overflow when configurable limit for maximum supported bulk input size is too big on 32-bit platforms (CVE-2021-21309)
- nodejs-lodash: command injection via template (CVE-2021-23337)
- nodejs-hosted-git-info: Regular Expression denial of service via shortcutMatch in fromUrl() (CVE-2021-23362)
- browserslist: parsing of invalid queries could result in Regular Expression Denial of Service (ReDoS) (CVE-2021-23364)
- nodejs-postcss: Regular expression denial of service during source map parsing (CVE-2021-23368)
- nodejs-handlebars: Remote code execution when compiling untrusted compile templates with strict:true option (CVE-2021-23369)
- nodejs-postcss: ReDoS via getAnnotationURL() and loadAnnotation() in lib/previous-map.js (CVE-2021-23382)
- nodejs-handlebars: Remote code execution when compiling untrusted compile templates with compat:true option (CVE-2021-23383)
- openssl: integer overflow in CipherUpdate (CVE-2021-23840)
- openssl: NULL pointer dereference in X509_issuer_and_serial_hash() (CVE-2021-23841)
- nodejs-ua-parser-js: ReDoS via malicious User-Agent header (CVE-2021-27292)
- grafana: snapshot feature allow an unauthenticated remote attacker to trigger a DoS via a remote API call (CVE-2021-27358)
- nodejs-is-svg: ReDoS via malicious string (CVE-2021-28092)
- nodejs-netmask: incorrectly parses an IP address that has octal integer with invalid character (CVE-2021-29418)
- ulikunitz/xz: Infinite loop in readUvarint allows for denial of service (CVE-2021-29482)
- normalize-url: ReDoS for data URLs (CVE-2021-33502)
- nodejs-trim-newlines: ReDoS in .end() method (CVE-2021-33623)
- nodejs-path-parse: ReDoS via splitDeviceRe, splitTailRe and splitPathRe (CVE-2021-23343)
- html-parse-stringify: Regular Expression DoS (CVE-2021-23346)
- openssl: incorrect SSLv2 rollback protection (CVE-2021-23839)
For more details about the security issues, including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE pages listed in the References section.
Bugs:
- RFE Make the source code for the endpoint-metrics-operator public (BZ# 1913444)
- cluster became offline after apiserver health check (BZ# 1942589)
Bugs fixed (https://bugzilla.redhat.com/):
1913333 - CVE-2020-28851 golang.org/x/text: Panic in language.ParseAcceptLanguage while parsing -u- extension
1913338 - CVE-2020-28852 golang.org/x/text: Panic in language.ParseAcceptLanguage while processing bcp47 tag
1913444 - RFE Make the source code for the endpoint-metrics-operator public
1921286 - CVE-2021-21272 oras: zip-slip vulnerability via oras-pull
1927520 - RHACM 2.3.0 images
1928937 - CVE-2021-23337 nodejs-lodash: command injection via template
1928954 - CVE-2020-28500 nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions
1930294 - CVE-2021-23839 openssl: incorrect SSLv2 rollback protection
1930310 - CVE-2021-23841 openssl: NULL pointer dereference in X509_issuer_and_serial_hash()
1930324 - CVE-2021-23840 openssl: integer overflow in CipherUpdate
1932634 - CVE-2021-21309 redis: integer overflow when configurable limit for maximum supported bulk input size is too big on 32-bit platforms
1936427 - CVE-2021-3377 nodejs-ansi_up: XSS due to insufficient URL sanitization
1939103 - CVE-2021-28092 nodejs-is-svg: ReDoS via malicious string
1940196 - View Resource YAML option shows 404 error when reviewing a Subscription for an application
1940613 - CVE-2021-27292 nodejs-ua-parser-js: ReDoS via malicious User-Agent header
1941024 - CVE-2021-27358 grafana: snapshot feature allow an unauthenticated remote attacker to trigger a DoS via a remote API call
1941675 - CVE-2021-23346 html-parse-stringify: Regular Expression DoS
1942178 - CVE-2021-21321 fastify-reply-from: crafted URL allows prefix scape of the proxied backend service
1942182 - CVE-2021-21322 fastify-http-proxy: crafted URL allows prefix scape of the proxied backend service
1942589 - cluster became offline after apiserver health check
1943208 - CVE-2021-23362 nodejs-hosted-git-info: Regular Expression denial of service via shortcutMatch in fromUrl()
1944822 - CVE-2021-29418 nodejs-netmask: incorrectly parses an IP address that has octal integer with invalid character
1944827 - CVE-2021-28918 nodejs-netmask: improper input validation of octal input data
1945459 - CVE-2020-28469 nodejs-glob-parent: Regular expression denial of service
1948761 - CVE-2021-23369 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with strict:true option
1948763 - CVE-2021-23368 nodejs-postcss: Regular expression denial of service during source map parsing
1954150 - CVE-2021-23382 nodejs-postcss: ReDoS via getAnnotationURL() and loadAnnotation() in lib/previous-map.js
1954368 - CVE-2021-29482 ulikunitz/xz: Infinite loop in readUvarint allows for denial of service
1955619 - CVE-2021-23364 browserslist: parsing of invalid queries could result in Regular Expression Denial of Service (ReDoS)
1956688 - CVE-2021-23383 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with compat:true option
1956818 - CVE-2021-23343 nodejs-path-parse: ReDoS via splitDeviceRe, splitTailRe and splitPathRe
1957410 - CVE-2021-29477 redis: Integer overflow via STRALGO LCS command
1957414 - CVE-2021-29478 redis: Integer overflow via COPY command for large intsets
1964461 - CVE-2021-33502 normalize-url: ReDoS for data URLs
1966615 - CVE-2021-33623 nodejs-trim-newlines: ReDoS in .end() method
1968122 - clusterdeployment fails because hiveadmission sc does not have correct permissions
1972703 - Subctl fails to join cluster, since it cannot auto-generate a valid cluster id
1983131 - Defragmenting an etcd member doesn't reduce the DB size (7.5GB) on a setup with ~1000 spoke clusters
Bug fix:
- RHACM 2.0.10 images (BZ #1940452)
Bugs fixed (https://bugzilla.redhat.com/):
1940452 - RHACM 2.0.10 images
1944286 - CVE-2021-23358 nodejs-underscore: Arbitrary code execution via the template function
- -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
===================================================================== Red Hat Security Advisory
Synopsis: Moderate: OpenShift Container Platform 4.10.3 security update Advisory ID: RHSA-2022:0056-01 Product: Red Hat OpenShift Enterprise Advisory URL: https://access.redhat.com/errata/RHSA-2022:0056 Issue date: 2022-03-10 CVE Names: CVE-2014-3577 CVE-2016-10228 CVE-2017-14502 CVE-2018-20843 CVE-2018-1000858 CVE-2019-8625 CVE-2019-8710 CVE-2019-8720 CVE-2019-8743 CVE-2019-8764 CVE-2019-8766 CVE-2019-8769 CVE-2019-8771 CVE-2019-8782 CVE-2019-8783 CVE-2019-8808 CVE-2019-8811 CVE-2019-8812 CVE-2019-8813 CVE-2019-8814 CVE-2019-8815 CVE-2019-8816 CVE-2019-8819 CVE-2019-8820 CVE-2019-8823 CVE-2019-8835 CVE-2019-8844 CVE-2019-8846 CVE-2019-9169 CVE-2019-13050 CVE-2019-13627 CVE-2019-14889 CVE-2019-15903 CVE-2019-19906 CVE-2019-20454 CVE-2019-20807 CVE-2019-25013 CVE-2020-1730 CVE-2020-3862 CVE-2020-3864 CVE-2020-3865 CVE-2020-3867 CVE-2020-3868 CVE-2020-3885 CVE-2020-3894 CVE-2020-3895 CVE-2020-3897 CVE-2020-3899 CVE-2020-3900 CVE-2020-3901 CVE-2020-3902 CVE-2020-8927 CVE-2020-9802 CVE-2020-9803 CVE-2020-9805 CVE-2020-9806 CVE-2020-9807 CVE-2020-9843 CVE-2020-9850 CVE-2020-9862 CVE-2020-9893 CVE-2020-9894 CVE-2020-9895 CVE-2020-9915 CVE-2020-9925 CVE-2020-9952 CVE-2020-10018 CVE-2020-11793 CVE-2020-13434 CVE-2020-14391 CVE-2020-15358 CVE-2020-15503 CVE-2020-25660 CVE-2020-25677 CVE-2020-27618 CVE-2020-27781 CVE-2020-29361 CVE-2020-29362 CVE-2020-29363 CVE-2021-3121 CVE-2021-3326 CVE-2021-3449 CVE-2021-3450 CVE-2021-3516 CVE-2021-3517 CVE-2021-3518 CVE-2021-3520 CVE-2021-3521 CVE-2021-3537 CVE-2021-3541 CVE-2021-3733 CVE-2021-3749 CVE-2021-20305 CVE-2021-21684 CVE-2021-22946 CVE-2021-22947 CVE-2021-25215 CVE-2021-27218 CVE-2021-30666 CVE-2021-30761 CVE-2021-30762 CVE-2021-33928 CVE-2021-33929 CVE-2021-33930 CVE-2021-33938 CVE-2021-36222 CVE-2021-37750 CVE-2021-39226 CVE-2021-41190 CVE-2021-43813 CVE-2021-44716 CVE-2021-44717 CVE-2022-0532 CVE-2022-21673 CVE-2022-24407 =====================================================================
- Summary:
Red Hat OpenShift Container Platform release 4.10.3 is now available with updates to packages and images that fix several bugs and add enhancements.
Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Description:
Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.
This advisory contains the container images for Red Hat OpenShift Container Platform 4.10.3. See the following advisory for the RPM packages for this release:
https://access.redhat.com/errata/RHSA-2022:0055
Space precludes documenting all of the container images in this advisory. See the following Release Notes documentation, which will be updated shortly for this release, for details about these changes:
https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html
Security Fix(es):
- gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation (CVE-2021-3121)
- grafana: Snapshot authentication bypass (CVE-2021-39226)
- golang: net/http: limit growth of header canonicalization cache (CVE-2021-44716)
- nodejs-axios: Regular expression denial of service in trim function (CVE-2021-3749)
- golang: syscall: don't close fd 0 on ForkExec error (CVE-2021-44717)
- grafana: Forward OAuth Identity Token can allow users to access some data sources (CVE-2022-21673)
- grafana: directory traversal vulnerability (CVE-2021-43813)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
You may download the oc tool and use it to inspect release image metadata as follows:
(For x86_64 architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.3-x86_64
The image digest is sha256:7ffe4cd612be27e355a640e5eec5cd8f923c1400d969fd590f806cffdaabcc56
(For s390x architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.3-s390x
The image digest is sha256:4cf21a9399da1ce8427246f251ae5dedacfc8c746d2345f9cfe039ed9eda3e69
(For ppc64le architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.3-ppc64le
The image digest is sha256:4ee571da1edf59dfee4473aa4604aba63c224bf8e6bcf57d048305babbbde93c
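Digest-based pull specs like the ones above are content-addressed: the reference is the SHA-256 of the raw image manifest bytes, so pulling by digest pins exact content in a way a mutable tag cannot. A minimal sketch of that property using only the standard library (the manifest bytes here are a stand-in, not a real OpenShift manifest):

```python
import hashlib

# Stand-in for an image manifest; a registry hashes the raw manifest
# bytes the same way to form the "sha256:..." reference.
manifest = b'{"schemaVersion": 2, "mediaType": "application/vnd.oci.image.manifest.v1+json"}'

digest = "sha256:" + hashlib.sha256(manifest).hexdigest()
print(digest)

# Any change to the manifest changes the digest, so a digest reference
# can never silently point at different content, unlike a tag.
tampered = manifest + b" "
assert digest != "sha256:" + hashlib.sha256(tampered).hexdigest()
```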
All OpenShift Container Platform 4.10 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift Console or the CLI oc command. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html
- Solution:
For OpenShift Container Platform 4.10 see the following documentation, which will be updated shortly for this release, for detailed instructions on how to upgrade your cluster and fully apply this asynchronous errata update:
https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html
Details on how to access this content are available at https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html
- Bugs fixed (https://bugzilla.redhat.com/):
1808240 - Always return metrics value for pods under the user's namespace
1815189 - feature flagged UI does not always become available after operator installation
1825034 - e2e: Mock CSI tests fail on IBM ROKS clusters
1826225 - edge terminated h2 (gRPC) connections need a haproxy template change to work correctly
1860774 - csr for vSphere egress nodes were not approved automatically during cert renewal
1878106 - token inactivity timeout is not shortened after oauthclient/oauth config values are lowered
1878925 - 'oc adm upgrade --to ...' rejects versions which occur only in history, while the cluster-version operator supports history fallback
1880738 - origin e2e test deletes original worker
1882983 - oVirt csi driver should refuse to provision RWX and ROX PV
1886450 - Keepalived router id check not documented for RHV/VMware IPI
1889488 - The metrics endpoint for the Scheduler is not protected by RBAC
1894431 - Router pods fail to boot if the SSL certificate applied is missing an empty line at the bottom
1896474 - Path based routing is broken for some combinations
1897431 - CIDR support for additional network attachment with the bridge CNI plug-in
1903408 - NodePort externalTrafficPolicy does not work for ovn-kubernetes
1907433 - Excessive logging in image operator
1909906 - The router fails with PANIC error when stats port already in use
1911173 - [MSTR-998] Many charts' legend names show {{}} instead of words
1914053 - pods assigned with Multus whereabouts IP get stuck in ContainerCreating state after node rebooting.
1916169 - a reboot while MCO is applying changes leaves the node in undesirable state and MCP looks fine (UPDATED=true)
1917893 - [ovirt] install fails: due to terraform error "Cannot attach Virtual Disk: Disk is locked" on vm resource
1921627 - GCP UPI installation failed due to exceeding gcp limitation of instance group name
1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation
1926522 - oc adm catalog does not clean temporary files
1927478 - Default CatalogSources deployed by marketplace do not have toleration for tainted nodes.
1928141 - kube-storage-version-migrator constantly reporting type "Upgradeable" status Unknown
1928285 - [LSO][OCS][arbiter] OCP Console shows no results while in fact underlying setup of LSO localvolumeset and it's storageclass is not yet finished, confusing users
1931594 - [sig-cli] oc --request-timeout works as expected fails frequently on s390x
1933847 - Prometheus goes unavailable (both instances down) during 4.8 upgrade
1937085 - RHV UPI inventory playbook missing guarantee_memory
1937196 - [aws ebs csi driver] events for block volume expansion may cause confusion
1938236 - vsphere-problem-detector does not support overriding log levels via storage CR
1939401 - missed labels for CMO/openshift-state-metric/telemeter-client/thanos-querier pods
1939435 - Setting an IPv6 address in noProxy field causes error in openshift installer
1939552 - [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
1942913 - ThanosSidecarUnhealthy isn't resilient to WAL replays.
1943363 - [ovn] CNO should gracefully terminate ovn-northd
1945274 - ostree-finalize-staged.service failed while upgrading a rhcos node to 4.6.17
1948080 - authentication should not set Available=False APIServices_Error with 503s
1949262 - Prometheus Statefulsets should have 2 replicas and hard affinity set
1949672 - [GCP] Update 4.8 UPI template to match ignition version: 3.2.0
1950827 - [LSO] localvolumediscoveryresult name is not friendly to customer
1952576 - csv_succeeded metric not present in olm-operator for all successful CSVs
1953264 - "remote error: tls: bad certificate" logs in prometheus-operator container
1955300 - Machine config operator reports unavailable for 23m during upgrade
1955489 - Alertmanager Statefulsets should have 2 replicas and hard affinity set
1955490 - Thanos ruler Statefulsets should have 2 replicas and hard affinity set
1955544 - [IPI][OSP] densed master-only installation with 0 workers fails due to missing worker security group on masters
1956496 - Needs SR-IOV Docs Upstream
1956739 - Permission for authorized_keys for core user changes from core user to root when changed the pull secret
1956776 - [vSphere] Installer should do pre-check to ensure user-provided network name is valid
1956964 - upload a boot-source to OpenShift virtualization using the console
1957547 - [RFE]VM name is not auto filled in dev console
1958349 - ovn-controller doesn't release the memory after cluster-density run
1959352 - [scale] failed to get pod annotation: timed out waiting for annotations
1960378 - icsp allows mirroring of registry root - install-config imageContentSources does not
1960674 - Broken test: [sig-imageregistry][Serial][Suite:openshift/registry/serial] Image signature workflow can push a signed image to openshift registry and verify it [Suite:openshift/conformance/serial]
1961317 - storage ClusterOperator does not declare ClusterRoleBindings in relatedObjects
1961391 - String updates
1961509 - DHCP daemon pod should have CPU and memory requests set but not limits
1962066 - Edit machine/machineset specs not working
1962206 - openshift-multus/dhcp-daemon set should meet platform requirements for update strategy that have maxUnavailable update of 10 or 33 percent
1963053 - oc whoami --show-console should show the web console URL, not the server api URL
1964112 - route SimpleAllocationPlugin: host name validation errors: spec.host: Invalid value: ... must be no more than 63 characters
1964327 - Support containers with name:tag@digest
1964789 - Send keys and disconnect does not work for VNC console
1965368 - ClusterQuotaAdmission received non-meta object - message constantly reported in OpenShift Container Platform 4.7
1966445 - Unmasking a service doesn't work if it masked using MCO
1966477 - Use GA version in KAS/OAS/OauthAS to avoid: "audit.k8s.io/v1beta1" is deprecated and will be removed in a future release, use "audit.k8s.io/v1" instead
1966521 - kube-proxy's userspace implementation consumes excessive CPU
1968364 - [Azure] when using ssh type ed25519 bootstrap fails to come up
1970021 - nmstate does not persist its configuration due to overlay systemd-connections-merged mount
1970218 - MCO writes incorrect file contents if compression field is specified
1970331 - [sig-auth][Feature:SCC][Early] should not have pod creation failures during install [Suite:openshift/conformance/parallel]
1970805 - Cannot create build when docker image url contains dir structure
1972033 - [azure] PV region node affinity is failure-domain.beta.kubernetes.io instead of topology.kubernetes.io
1972827 - image registry does not remain available during upgrade
1972962 - Should set the minimum value for the --max-icsp-size flag of oc adm catalog mirror
1973447 - ovn-dbchecker peak memory spikes to ~500MiB during cluster-density run
1975826 - ovn-kubernetes host directed traffic cannot be offloaded as CT zone 64000 is not established
1976301 - [ci] e2e-azure-upi is permafailing
1976399 - During the upgrade from OpenShift 4.5 to OpenShift 4.6 the election timers for the OVN north and south databases did not change.
1976674 - CCO didn't set Upgradeable to False when cco mode is configured to Manual on azure platform
1976894 - Unidling a StatefulSet does not work as expected
1977319 - [Hive] Remove stale cruft installed by CVO in earlier releases
1977414 - Build Config timed out waiting for condition 400: Bad Request
1977929 - [RFE] Display Network Attachment Definitions from openshift-multus namespace during OCS deployment via UI using Multus
1978528 - systemd-coredump started and failed intermittently for unknown reasons
1978581 - machine-config-operator: remove runlevel from mco namespace
1979562 - Cluster operators: don't show messages when neither progressing, degraded or unavailable
1979962 - AWS SDN Network Stress tests have not passed in 4.9 release-openshift-origin-installer-e2e-aws-sdn-network-stress-4.9
1979966 - OCP builds always fail when run on RHEL7 nodes
1981396 - Deleting pool inside pool page the pool stays in Ready phase in the heading
1981549 - Machine-config daemon does not recover from broken Proxy configuration
1981867 - [sig-cli] oc explain should contain proper fields description for special types [Suite:openshift/conformance/parallel]
1981941 - Terraform upgrade required in openshift-installer to resolve multiple issues
1982063 - 'Control Plane' is not translated in Simplified Chinese language in Home->Overview page
1982498 - Default registry credential path should be adjusted to use containers/auth.json for oc commands
1982662 - Workloads - DaemonSets - Add storage: i18n misses
1982726 - kube-apiserver audit logs show a lot of 404 errors for DELETE "/secrets/encryption-config" on single node clusters
1983758 - upgrades are failing on disruptive tests
1983964 - Need Device plugin configuration for the NIC "needVhostNet" & "isRdma"
1984592 - global pull secret not working in OCP4.7.4+ for additional private registries
1985073 - new-in-4.8 ExtremelyHighIndividualControlPlaneCPU fires on some GCP update jobs
1985486 - Cluster Proxy not used during installation on OSP with Kuryr
1985724 - VM Details Page missing translations
1985838 - [OVN] CNO exportNetworkFlows does not clear collectors when deleted
1985933 - Downstream image registry recommendation
1985965 - oVirt CSI driver does not report volume stats
1986216 - [scale] SNO: Slow Pod recovery due to "timed out waiting for OVS port binding"
1986237 - "MachineNotYetDeleted" in Pending state , alert not fired
1986239 - crictl create fails with "PID namespace requested, but sandbox infra container invalid"
1986302 - console continues to fetch prometheus alert and silences for normal user
1986314 - Current MTV installation for KubeVirt import flow creates unusable Forklift UI
1986338 - error creating list of resources in Import YAML
1986502 - yaml multi file dnd duplicates previous dragged files
1986819 - fix string typos for hot-plug disks
1987044 - [OCPV48] Shutoff VM is being shown as "Starting" in WebUI when using spec.runStrategy Manual/RerunOnFailure
1987136 - Declare operatorframework.io/arch. labels for all operators
1987257 - Go-http-client user-agent being used for oc adm mirror requests
1987263 - fsSpaceFillingUpWarningThreshold not aligned to Kubernetes Garbage Collection Threshold
1987445 - MetalLB integration: All gateway routers in the cluster answer ARP requests for LoadBalancer services IP
1988406 - SSH key dropped when selecting "Customize virtual machine" in UI
1988440 - Network operator changes ovnkube-config too early causing ovnkube-master pods to crashloop during cluster upgrade
1988483 - Azure drop ICMP need to frag FRAG when using OVN: openshift-apiserver becomes False after env runs some time due to communication between one master to pods on another master fails with "Unable to connect to the server"
1988879 - Virtual media based deployment fails on Dell servers due to pending Lifecycle Controller jobs
1989438 - expected replicas is wrong
1989502 - Developer Catalog is disappearing after short time
1989843 - 'More' and 'Show Less' functions are not translated on several page
1990014 - oc debug does not work for Windows pods
1990190 - e2e testing failed with basic manifest: reason/ExternalProvisioning waiting for a volume to be created
1990193 - 'more' and 'Show Less' is not being translated on Home -> Search page
1990255 - Partial or all of the Nodes/StorageClasses don't appear back on UI after text is removed from search bar
1990489 - etcdHighNumberOfFailedGRPCRequests fires only on metal env in CI
1990506 - Missing udev rules in initramfs for /dev/disk/by-id/scsi- symlinks
1990556 - get-resources.sh doesn't honor the no_proxy settings even with no_proxy var
1990625 - Ironic agent registers with SLAAC address with privacy-stable
1990635 - CVO does not recognize the channel change if desired version and channel changed at the same time
1991067 - github.com can not be resolved inside pods where cluster is running on openstack.
1991573 - Enable typescript strictNullCheck on network-policies files
1991641 - Baremetal Cluster Operator still Available After Delete Provisioning
1991770 - The logLevel and operatorLogLevel values do not work with Cloud Credential Operator
1991819 - Misspelled word "ocurred" in oc inspect cmd
1991942 - Alignment and spacing fixes
1992414 - Two rootdisks show on storage step if 'This is a CD-ROM boot source' is checked
1992453 - The configMap failed to save on VM environment tab
1992466 - The button 'Save' and 'Reload' are not translated on vm environment tab
1992475 - The button 'Open console in New Window' and 'Disconnect' are not translated on vm console tab
1992509 - Could not customize boot source due to source PVC not found
1992541 - all the alert rules' annotations "summary" and "description" should comply with the OpenShift alerting guidelines
1992580 - storageProfile should stay with the same value by check/uncheck the apply button
1992592 - list-type missing in oauth.config.openshift.io for identityProviders breaking Server Side Apply
1992777 - [IBMCLOUD] Default "ibm_iam_authorization_policy" is not working as expected in all scenarios
1993364 - cluster destruction fails to remove router in BYON with Kuryr as primary network (even after BZ 1940159 got fixed)
1993376 - periodic-ci-openshift-release-master-ci-4.6-upgrade-from-stable-4.5-e2e-azure-upgrade is permfailing
1994094 - Some hardcodes are detected at the code level in OpenShift console components
1994142 - Missing required cloud config fields for IBM Cloud
1994733 - MetalLB: IP address is not assigned to service if there is duplicate IP address in two address pools
1995021 - resolv.conf and corefile sync slows down/stops after keepalived container restart
1995335 - [SCALE] ovnkube CNI: remove ovs flows check
1995493 - Add Secret to workload button and Actions button are not aligned on secret details page
1995531 - Create RDO-based Ironic image to be promoted to OKD
1995545 - Project drop-down amalgamates inside main screen while creating storage system for odf-operator
1995887 - [OVN]After reboot egress node, lr-policy-list was not correct, some duplicate records or missed internal IPs
1995924 - CMO should report Upgradeable: false when HA workload is incorrectly spread
1996023 - kubernetes.io/hostname values are larger than filter when create localvolumeset from webconsole
1996108 - Allow backwards compatibility of shared gateway mode to inject host-based routes into OVN
1996624 - 100% of the cco-metrics/cco-metrics targets in openshift-cloud-credential-operator namespace are down
1996630 - Fail to delete the first Authorized SSH Key input box on Advanced page
1996647 - Provide more useful degraded message in auth operator on DNS errors
1996736 - Large number of 501 lr-policies in INCI2 env
1996886 - timedout waiting for flows during pod creation and ovn-controller pegged on worker nodes
1996916 - Special Resource Operator(SRO) - Fail to deploy simple-kmod on GCP
1996928 - Enable default operator indexes on ARM
1997028 - prometheus-operator update removes env var support for thanos-sidecar
1997059 - Failed to create cluster in AWS us-east-1 region due to a local zone is used
1997226 - Ingresscontroller reconciliations failing but not shown in operator logs or status of ingresscontroller.
1997245 - "Subscription already exists in openshift-storage namespace" error message is seen while installing odf-operator via UI
1997269 - Have to refresh console to install kube-descheduler
1997478 - Storage operator is not available after reboot cluster instances
1997509 - flake: [sig-cli] oc builds new-build [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
1997967 - storageClass is not reserved from default wizard to customize wizard
1998035 - openstack IPI CI: custom var-lib-etcd.mount (ramdisk) unit is racing due to incomplete After/Before order
1998038 - [e2e][automation] add tests for UI for VM disk hot-plug
1998087 - Fix CephHealthCheck wrapping contents and add data-tests for HealthItem and SecondaryStatus
1998174 - Create storageclass gp3-csi after install ocp cluster on aws
1998183 - "r: Bad Gateway" info is improper
1998235 - Firefox warning: Cookie “csrf-token” will be soon rejected
1998377 - Filesystem table head is not full displayed in disk tab
1998378 - Virtual Machine is 'Not available' in Home -> Overview -> Cluster inventory
1998519 - Add fstype when create localvolumeset instance on web console
1998951 - Keepalived conf ingress peer on in Dual stack cluster contains both IPv6 and IPv4 addresses
1999076 - [UI] Page Not Found error when clicking on Storage link provided in Overview page
1999079 - creating pods before sriovnetworknodepolicy sync up succeed will cause node unschedulable
1999091 - Console update toast notification can appear multiple times
1999133 - removing and recreating static pod manifest leaves pod in error state
1999246 - .indexignore is not ignored when oc command loads dc configuration
1999250 - ArgoCD in GitOps operator can't manage namespaces
1999255 - ovnkube-node always crashes out the first time it starts
1999261 - ovnkube-node log spam (and security token leak?)
1999309 - While installing odf-operator via UI, web console update pop-up navigates to OperatorHub -> Operator Installation page
1999314 - console-operator is slow to mark Degraded as False once console starts working
1999425 - kube-apiserver with "[SHOULD NOT HAPPEN] failed to update managedFields" err="failed to convert new object (machine.openshift.io/v1beta1, Kind=MachineHealthCheck)
1999556 - "master" pool should be updated before the CVO reports available at the new version occurred
1999578 - AWS EFS CSI tests are constantly failing
1999603 - Memory Manager allows Guaranteed QoS Pod with hugepages requested is exactly equal to the left over Hugepages
1999619 - cloudinit is malformatted if a user sets a password during VM creation flow
1999621 - Empty ssh_authorized_keys entry is added to VM's cloudinit if created from a customize flow
1999649 - MetalLB: Only one type of IP address can be assigned to service on dual stack cluster from a address pool that have both IPv4 and IPv6 addresses defined
1999668 - openshift-install destroy cluster panic's when given invalid credentials to cloud provider (Azure Stack Hub)
1999734 - IBM Cloud CIS Instance CRN missing in infrastructure manifest/resource
1999771 - revert "force cert rotation every couple days for development" in 4.10
1999784 - CVE-2021-3749 nodejs-axios: Regular expression denial of service in trim function
1999796 - Openshift Console Helm tab is not showing helm releases in a namespace when there is high number of deployments in the same namespace.
1999836 - Admin web-console inconsistent status summary of sparse ClusterOperator conditions
1999903 - Click "This is a CD-ROM boot source" ticking "Use template size PVC" on pvc upload form
1999983 - No way to clear upload error from template boot source
2000081 - [IPI baremetal] The metal3 pod failed to restart when switching from Disabled to Managed provisioning without specifying provisioningInterface parameter
2000096 - Git URL is not re-validated on edit build-config form reload
2000216 - Successfully imported ImageStreams are not resolved in DeploymentConfig
2000236 - Confusing usage message from dynkeepalived CLI
2000268 - Mark cluster unupgradable if vcenter, esxi versions or HW versions are unsupported
2000430 - bump cluster-api-provider-ovirt version in installer
2000450 - 4.10: Enable static PV multi-az test
2000490 - All critical alerts shipped by CMO should have links to a runbook
2000521 - Kube-apiserver CO degraded due to failed conditional check (ConfigObservationDegraded)
2000573 - Incorrect StorageCluster CR created and ODF cluster getting installed with 2 Zone OCP cluster
2000628 - ibm-flashsystem-storage-storagesystem got created without any warning even when the attempt was cancelled
2000651 - ImageStreamTag alias results in wrong tag and invalid link in Web Console
2000754 - IPerf2 tests should be lower
2000846 - Structure logs in the entire codebase of Local Storage Operator
2000872 - [tracker] container is not able to list on some directories within the nfs after upgrade to 4.7.24
2000877 - OCP ignores STOPSIGNAL in Dockerfile and sends SIGTERM
2000938 - CVO does not respect changes to a Deployment strategy
2000963 - 'Inline-volume (default fs)] volumes should store data' tests are failing on OKD with updated selinux-policy
2001008 - [MachineSets] CloneMode defaults to linkedClone, but I don't have snapshot and should be fullClone
2001240 - Remove response headers for downloads of binaries from OpenShift WebConsole
2001295 - Remove openshift:kubevirt-machine-controllers declaration from machine-api
2001317 - OCP Platform Quota Check - Inaccurate MissingQuota error
2001337 - Details Card in ODF Dashboard mentions OCS
2001339 - fix text content hotplug
2001413 - [e2e][automation] add/delete nic and disk to template
2001441 - Test: oc adm must-gather runs successfully for audit logs - fail due to startup log
2001442 - Empty termination.log file for the kube-apiserver has too permissive mode
2001479 - IBM Cloud DNS unable to create/update records
2001566 - Enable alerts for prometheus operator in UWM
2001575 - Clicking on the perspective switcher shows a white page with loader
2001577 - Quick search placeholder is not displayed properly when the search string is removed
2001578 - [e2e][automation] add tests for vm dashboard tab
2001605 - PVs remain in Released state for a long time after the claim is deleted
2001617 - BucketClass Creation is restricted on 1st page but enabled using side navigation options
2001620 - Cluster becomes degraded if it can't talk to Manila
2001760 - While creating 'Backing Store', 'Bucket Class', 'Namespace Store' user is navigated to 'Installed Operators' page after clicking on ODF
2001761 - Unable to apply cluster operator storage for SNO on GCP platform.
2001765 - Some error message in the log of diskmaker-manager caused confusion
2001784 - show loading page before final results instead of showing a transient message No log files exist
2001804 - Reload feature on Environment section in Build Config form does not work properly
2001810 - cluster admin unable to view BuildConfigs in all namespaces
2001817 - Failed to load RoleBindings list that will lead to ‘Role name’ is not able to be selected on Create RoleBinding page as well
2001823 - OCM controller must update operator status
2001825 - [SNO]ingress/authentication clusteroperator degraded when enable ccm from start
2001835 - Could not select image tag version when create app from dev console
2001855 - Add capacity is disabled for ocs-storagecluster
2001856 - Repeating event: MissingVersion no image found for operand pod
2001959 - Side nav list borders don't extend to edges of container
2002007 - Layout issue on "Something went wrong" page
2002010 - ovn-kube may never attempt to retry a pod creation
2002012 - Cannot change volume mode when cloning a VM from a template
2002027 - Two instances of Dotnet helm chart show as one in topology
2002075 - opm render does not automatically pull in the image(s) used in the deployments
2002121 - [OVN] upgrades failed for IPI OSP16 OVN IPSec cluster
2002125 - Network policy details page heading should be updated to Network Policy details
2002133 - [e2e][automation] add support/virtualization and improve deleteResource
2002134 - [e2e][automation] add test to verify vm details tab
2002215 - Multipath day1 not working on s390x
2002238 - Image stream tag is not persisted when switching from yaml to form editor
2002262 - [vSphere] Incorrect user agent in vCenter sessions list
2002266 - SinkBinding create form doesn't allow to use subject name, instead of label selector
2002276 - OLM fails to upgrade operators immediately
2002300 - Altering the Schedule Profile configurations doesn't affect the placement of the pods
2002354 - Missing DU configuration "Done" status reporting during ZTP flow
2002362 - Dynamic Plugin - ConsoleRemotePlugin for webpack doesn't use commonjs
2002368 - samples should not go degraded when image allowedRegistries blocks imagestream creation
2002372 - Pod creation failed due to mismatched pod IP address in CNI and OVN
2002397 - Resources search is inconsistent
2002434 - CRI-O leaks some children PIDs
2002443 - Getting undefined error on create local volume set page
2002461 - DNS operator performs spurious updates in response to API's defaulting of service's internalTrafficPolicy
2002504 - When the openshift-cluster-storage-operator is degraded because of "VSphereProblemDetectorController_SyncError", the insights operator is not sending the logs from all pods.
2002559 - User preference for topology list view does not follow when a new namespace is created
2002567 - Upstream SR-IOV worker doc has broken links
2002588 - Change text to be sentence case to align with PF
2002657 - ovn-kube egress IP monitoring is using a random port over the node network
2002713 - CNO: OVN logs should have millisecond resolution
2002748 - [ICNI2] 'ErrorAddingLogicalPort' failed to handle external GW check: timeout waiting for namespace event
2002759 - Custom profile should not allow not including at least one required HTTP2 ciphersuite
2002763 - Two storage systems getting created with external mode RHCS
2002808 - KCM does not use web identity credentials
2002834 - Cluster-version operator does not remove unrecognized volume mounts
2002896 - Incorrect result return when user filter data by name on search page
2002950 - Why spec.containers.command is not created with "oc create deploymentconfig --image= -- "
2003096 - [e2e][automation] check bootsource URL is displaying on review step
2003113 - OpenShift Baremetal IPI installer uses first three defined nodes under hosts in install-config for master nodes instead of filtering the hosts with the master role
2003120 - CI: Uncaught error with ResizeObserver on operand details page
2003145 - Duplicate operand tab titles causes "two children with the same key" warning
2003164 - OLM, fatal error: concurrent map writes
2003178 - [FLAKE][knative] The UI doesn't show updated traffic distribution after accepting the form
2003193 - Kubelet/crio leaks netns and veth ports in the host
2003195 - OVN CNI should ensure host veths are removed
2003204 - Jenkins all new container images (openshift4/ose-jenkins) not supporting '-e JENKINS_PASSWORD=password' ENV which was working for old container images
2003206 - Namespace stuck terminating: Failed to delete all resource types, 1 remaining: unexpected items still remain in namespace
2003239 - "[sig-builds][Feature:Builds][Slow] can use private repositories as build input" tests fail outside of CI
2003244 - Revert libovsdb client code
2003251 - Patternfly components with list element have list item bullets when they should not.
2003252 - "[sig-builds][Feature:Builds][Slow] starting a build using CLI start-build test context override environment BUILD_LOGLEVEL in buildconfig" tests do not work as expected outside of CI
2003269 - Rejected pods should be filtered from admission regression
2003357 - QE- Removing the epic tags for gherkin tags related to 4.9 Release
2003426 - [e2e][automation] add test for vm details bootorder
2003496 - [e2e][automation] add test for vm resources requirment settings
2003641 - All metal ipi jobs are failing in 4.10
2003651 - ODF4.9+LSO4.8 installation via UI, StorageCluster move to error state
2003655 - [IPI ON-PREM] Keepalived chk_default_ingress track script failed even though default router pod runs on node
2003683 - Samples operator is panicking in CI
2003711 - [UI] Empty file ceph-external-cluster-details-exporter.py downloaded from external cluster "Connection Details" page
2003715 - Error on creating local volume set after selection of the volume mode
2003743 - Remove workaround keeping /boot RW for kdump support
2003775 - etcd pod on CrashLoopBackOff after master replacement procedure
2003788 - CSR reconciler report error constantly when BYOH CSR approved by other Approver
2003792 - Monitoring metrics query graph flyover panel is useless
2003808 - Add Sprint 207 translations
2003845 - Project admin cannot access image vulnerabilities view
2003859 - sdn emits events with garbage messages
2003896 - (release-4.10) ApiRequestCounts conditional gatherer
2004009 - 4.10: Fix multi-az zone scheduling e2e for 5 control plane replicas
2004051 - CMO can report as being Degraded while node-exporter is deployed on all nodes
2004059 - [e2e][automation] fix current tests for downstream
2004060 - Trying to use basic spring boot sample causes crash on Firefox
2004101 - [UI] When creating storageSystem deployment type dropdown under advanced setting doesn't close after selection
2004127 - [flake] openshift-controller-manager event reason/SuccessfulDelete occurs too frequently
2004203 - build config's created prior to 4.8 with image change triggers can result in trigger storm in OCM/openshift-apiserver
2004313 - [RHOCP 4.9.0-rc.0] Failing to deploy Azure cluster from the macOS installer - ignition_bootstrap.ign: no such file or directory
2004449 - Boot option recovery menu prevents image boot
2004451 - The backup filename displayed in the RecentBackup message is incorrect
2004459 - QE - Modified the AddFlow gherkin scripts and automation scripts
2004508 - TuneD issues with the recent ConfigParser changes.
2004510 - openshift-gitops operator hooks gets unauthorized (401) errors during jobs executions
2004542 - [osp][octavia lb] cannot create LoadBalancer type svcs
2004578 - Monitoring and node labels missing for an external storage platform
2004585 - prometheus-k8s-0 cpu usage keeps increasing for the first 3 days
2004596 - [4.10] Bootimage bump tracker
2004597 - Duplicate ramdisk log containers running
2004600 - Duplicate ramdisk log containers running
2004609 - output of "crictl inspectp" is not complete
2004625 - BMC credentials could be logged if they change
2004632 - When LE takes a large amount of time, multiple whereabouts are seen
2004721 - ptp/worker custom threshold doesn't change ptp events threshold
2004736 - [knative] Create button on new Broker form is inactive despite form being filled
2004796 - [e2e][automation] add test for vm scheduling policy
2004814 - (release-4.10) OCM controller - change type of the etc-pki-entitlement secret to opaque
2004870 - [External Mode] Insufficient spacing along y-axis in RGW Latency Performance Card
2004901 - [e2e][automation] improve kubevirt devconsole tests
2004962 - Console frontend job consuming too much CPU in CI
2005014 - state of ODF StorageSystem is misreported during installation or uninstallation
2005052 - Adding a MachineSet selector matchLabel causes orphaned Machines
2005179 - pods status filter is not taking effect
2005182 - sync list of deprecated apis about to be removed
2005282 - Storage cluster name is given as title in StorageSystem details page
2005355 - setuptools 58 makes Kuryr CI fail
2005407 - ClusterNotUpgradeable Alert should be set to Severity Info
2005415 - PTP operator with sidecar api configured throws bind: address already in use
2005507 - SNO spoke cluster failing to reach coreos.live.rootfs_url is missing url in console
2005554 - The switch status of the button "Show default project" is not revealed correctly in code
2005581 - 4.8.12 to 4.9 upgrade hung due to cluster-version-operator pod CrashLoopBackOff: error creating clients: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
2005761 - QE - Implementing crw-basic feature file
2005783 - Fix accessibility issues in the "Internal" and "Internal - Attached Mode" Installation Flow
2005811 - vSphere Problem Detector operator - ServerFaultCode: InvalidProperty
2005854 - SSH NodePort service is created for each VM
2005901 - KS, KCM and KA going Degraded during master nodes upgrade
2005902 - Current UI flow for MCG only deployment is confusing and doesn't reciprocate any message to the end-user
2005926 - PTP operator NodeOutOfPTPSync rule is using max offset from the master instead of openshift_ptp_clock_state metrics
2005971 - Change telemeter to report the Application Services product usage metrics
2005997 - SELinux domain container_logreader_t does not have a policy to follow sym links for log files
2006025 - Description to use an existing StorageClass while creating StorageSystem needs to be re-phrased
2006060 - ocs-storagecluster-storagesystem details are missing on UI for MCG Only and MCG only in LSO mode deployment types
2006101 - Power off fails for drivers that don't support Soft power off
2006243 - Metal IPI upgrade jobs are running out of disk space
2006291 - bootstrapProvisioningIP set incorrectly when provisioningNetworkCIDR doesn't use the 0th address
2006308 - Backing Store YAML tab on click displays a blank screen on UI
2006325 - Multicast is broken across nodes
2006329 - Console only allows Web Terminal Operator to be installed in OpenShift Operators
2006364 - IBM Cloud: Set resourceGroupId for resourceGroups, not simply resource
2006561 - [sig-instrumentation] Prometheus when installed on the cluster shouldn't have failing rules evaluation [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2006690 - OS boot failure "x64 Exception Type 06 - Invalid Opcode Exception"
2006714 - add retry for etcd errors in kube-apiserver
2006767 - KubePodCrashLooping may not fire
2006803 - Set CoreDNS cache entries for forwarded zones
2006861 - Add Sprint 207 part 2 translations
2006945 - race condition can cause crashlooping bootstrap kube-apiserver in cluster-bootstrap
2006947 - e2e-aws-proxy for 4.10 is permafailing with samples operator errors
2006975 - clusteroperator/etcd status condition should not change reasons frequently due to EtcdEndpointsDegraded
2007085 - Intermittent failure mounting /run/media/iso when booting live ISO from USB stick
2007136 - Creation of BackingStore, BucketClass, NamespaceStore fails
2007271 - CI Integration for Knative test cases
2007289 - kubevirt tests are failing in CI
2007322 - Devfile/Dockerfile import does not work for unsupported git host
2007328 - Updated patternfly to v4.125.3 and pf.quickstarts to v1.2.3.
2007379 - Events are not generated for master offset for ordinary clock
2007443 - [ICNI 2.0] Loadbalancer pods do not establish BFD sessions with all workers that host pods for the routed namespace
2007455 - cluster-etcd-operator: render command should fail if machineCidr contains reserved address
2007495 - Large label value for the metric kubelet_started_pods_errors_total with label message when there is a error
2007522 - No new local-storage-operator-metadata-container is build for 4.10
2007551 - No new ose-aws-efs-csi-driver-operator-bundle-container is build for 4.10
2007580 - Azure cilium installs are failing e2e tests
2007581 - Too many haproxy processes in default-router pod causing high load average after upgrade from v4.8.3 to v4.8.10
2007677 - Regression: core container io performance metrics are missing for pod, qos, and system slices on nodes
2007692 - 4.9 "old-rhcos" jobs are permafailing with storage test failures
2007710 - ci/prow/e2e-agnostic-cmd job is failing on prow
2007757 - must-gather extracts imagestreams in the "openshift" namespace, but not Templates
2007802 - AWS machine actuator get stuck if machine is completely missing
2008096 - TestAWSFinalizerDeleteS3Bucket sometimes fails to teardown operator
2008119 - The serviceAccountIssuer field on Authentication CR is reset to “” during the installation process
2008151 - Topology breaks on clicking in empty state
2008185 - Console operator go.mod should use go 1.16.version
2008201 - openstack-az job is failing on haproxy idle test
2008207 - vsphere CSI driver doesn't set resource limits
2008223 - gather_audit_logs: fix oc command line to get the current audit profile
2008235 - The Save button in the Edit DC form remains disabled
2008256 - Update Internationalization README with scope info
2008321 - Add correct documentation link for MON_DISK_LOW
2008462 - Disable PodSecurity feature gate for 4.10
2008490 - Backing store details page does not contain all the kebab actions.
2008521 - gcp-hostname service should correct invalid search entries in resolv.conf
2008532 - CreateContainerConfigError:: failed to prepare subPath for volumeMount
2008539 - Registry doesn't fall back to secondary ImageContentSourcePolicy Mirror
2008540 - HighlyAvailableWorkloadIncorrectlySpread always fires on upgrade on cluster with two workers
2008599 - Azure Stack UPI does not have Internal Load Balancer
2008612 - Plugin asset proxy does not pass through browser cache headers
2008712 - VPA webhook timeout prevents all pods from starting
2008733 - kube-scheduler: exposed /debug/pprof port
2008911 - Prometheus repeatedly scaling prometheus-operator replica set
2008926 - [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources [Serial] [Suite:openshift/conformance/serial]
2008987 - OpenShift SDN Hosted Egress IP's are not being scheduled to nodes after upgrade to 4.8.12
2009055 - Instances of OCS to be replaced with ODF on UI
2009078 - NetworkPodsCrashLooping alerts in upgrade CI jobs
2009083 - opm blocks pruning of existing bundles during add
2009111 - [IPI-on-GCP] 'Install a cluster with nested virtualization enabled' failed due to unable to launch compute instances
2009131 - [e2e][automation] add more test about vmi
2009148 - [e2e][automation] test vm nic presets and options
2009233 - ACM policy object generated by PolicyGen conflicting with OLM Operator
2009253 - [BM] [IPI] [DualStack] apiVIP and ingressVIP should be of the same primary IP family
2009298 - Service created for VM SSH access is not owned by the VM and thus is not deleted if the VM is deleted
2009384 - UI changes to support BindableKinds CRD changes
2009404 - ovnkube-node pod enters CrashLoopBackOff after OVN_IMAGE is swapped
2009424 - Deployment upgrade is failing availability check
2009454 - Change web terminal subscription permissions from get to list
2009465 - container-selinux should come from rhel8-appstream
2009514 - Bump OVS to 2.16-15
2009555 - Supermicro X11 system not booting from vMedia with AI
2009623 - Console: Observe > Metrics page: Table pagination menu shows bullet points
2009664 - Git Import: Edit of knative service doesn't work as expected for git import flow
2009699 - Failure to validate flavor RAM
2009754 - Footer is not sticky anymore in import forms
2009785 - CRI-O's version file should be pinned by MCO
2009791 - Installer: ibmcloud ignores install-config values
2009823 - [sig-arch] events should not repeat pathologically - reason/VSphereOlderVersionDetected Marking cluster un-upgradeable because one or more VMs are on hardware version vmx-13
2009840 - cannot build extensions on aarch64 because of unavailability of rhel-8-advanced-virt repo
2009859 - Large number of sessions created by vmware-vsphere-csi-driver-operator during e2e tests
2009873 - Stale Logical Router Policies and Annotations for a given node
2009879 - There should be test-suite coverage to ensure admin-acks work as expected
2009888 - SRO package name collision between official and community version
2010073 - uninstalling and then reinstalling sriov-network-operator is not working
2010174 - 2 PVs get created unexpectedly with different paths that actually refer to the same device on the node.
2010181 - Environment variables not getting reset on reload on deployment edit form
2010310 - [sig-instrumentation][Late] OpenShift alerting rules should have description and summary annotations [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2010341 - OpenShift Alerting Rules Style-Guide Compliance
2010342 - Local console builds can have out of memory errors
2010345 - OpenShift Alerting Rules Style-Guide Compliance
2010348 - Reverts PIE build mode for K8S components
2010352 - OpenShift Alerting Rules Style-Guide Compliance
2010354 - OpenShift Alerting Rules Style-Guide Compliance
2010359 - OpenShift Alerting Rules Style-Guide Compliance
2010368 - OpenShift Alerting Rules Style-Guide Compliance
2010376 - OpenShift Alerting Rules Style-Guide Compliance
2010662 - Cluster is unhealthy after image-registry-operator tests
2010663 - OpenShift Alerting Rules Style-Guide Compliance (ovn-kubernetes subcomponent)
2010665 - Bootkube tries to use oc after cluster bootstrap is done and there is no API
2010698 - [BM] [IPI] [Dual Stack] Installer must ensure ipv6 short forms too if clusterprovisioning IP is specified as ipv6 address
2010719 - etcdHighNumberOfFailedGRPCRequests runbook is missing
2010864 - Failure building EFS operator
2010910 - ptp worker events unable to identify interface for multiple interfaces
2010911 - RenderOperatingSystem() returns wrong OS version on OCP 4.7.24
2010921 - Azure Stack Hub does not handle additionalTrustBundle
2010931 - SRO CSV uses non default category "Drivers and plugins"
2010946 - concurrent CRD from ovirt-csi-driver-operator gets reconciled by CVO after deployment, changing CR as well.
2011038 - optional operator conditions are confusing
2011063 - CVE-2021-39226 grafana: Snapshot authentication bypass
2011171 - diskmaker-manager constantly redeployed by LSO when creating LV's
2011293 - Build pod are not pulling images if we are not explicitly giving the registry name with the image
2011368 - Tooltip in pipeline visualization shows misleading data
2011386 - [sig-arch] Check if alerts are firing during or after upgrade success --- alert KubePodNotReady fired for 60 seconds with labels
2011411 - Managed Service's Cluster overview page contains link to missing Storage dashboards
2011443 - Cypress tests assuming Admin Perspective could fail on shared/reference cluster
2011513 - Kubelet rejects pods that use resources that should be freed by completed pods
2011668 - Machine stuck in deleting phase in VMware "reconciler failed to Delete machine"
2011693 - (release-4.10) "insightsclient_request_recvreport_total" metric is always incremented
2011698 - After upgrading cluster to 4.8 the kube-state-metrics service doesn't export namespace labels anymore
2011733 - Repository README points to broken documentation link
2011753 - Ironic resumes clean before raid configuration job is actually completed
2011809 - The nodes page in the openshift console doesn't work. You just get a blank page
2011822 - Obfuscation doesn't work at clusters with OVN
2011882 - SRO helm charts not synced with templates
2011893 - Validation: BMC driver ipmi is not supported for secure UEFI boot
2011896 - [4.10] ClusterVersion Upgradeable=False MultipleReasons should include all messages
2011903 - vsphere-problem-detector: session leak
2011927 - OLM should allow users to specify a proxy for GRPC connections
2011956 - [tracker] Kubelet rejects pods that use resources that should be freed by completed pods
2011960 - [tracker] Storage operator is not available after reboot cluster instances
2011971 - ICNI2 pods are stuck in ContainerCreating state
2011972 - Ingress operator not creating wildcard route for hypershift clusters
2011977 - SRO bundle references non-existent image
2012069 - Refactoring Status controller
2012177 - [OCP 4.9 + OCS 4.8.3] Overview tab is missing under Storage after successful deployment on UI
2012228 - ibmcloud: credentialsrequests invalid for machine-api-operator: resource-group
2012233 - [IBMCLOUD] IPI: "Exceeded limit of remote rules per security group (the limit is 5 remote rules per security group)"
2012235 - [IBMCLOUD] IPI: IBM cloud provider requires ResourceGroupName in cloudproviderconfig
2012317 - Dynamic Plugins: ListPageCreateDropdown items cut off
2012407 - [e2e][automation] improve vm tab console tests
2012426 - ThanosSidecarBucketOperationsFailed/ThanosSidecarUnhealthy alerts don't have namespace label
2012562 - migration condition is not detected in list view
2012770 - when using expression metric openshift_apps_deploymentconfigs_last_failed_rollout_time namespace label is re-written
2012780 - The port 50936 used by haproxy is occupied by kube-apiserver
2012838 - Setting the default maximum container root partition size for Overlay with CRI-O stop working
2012902 - Neutron Ports assigned to Completed Pods are not reused
2012915 - kube_persistentvolumeclaim_labels and kube_persistentvolume_labels are missing in OCP 4.8 monitoring stack
2012971 - Disable operands deletes
2013034 - Cannot install to openshift-nmstate namespace
2013127 - OperatorHub links could not be opened in a new tabs (sharing and open a deep link works fine)
2013199 - post reboot of node SRIOV policy taking huge time
2013203 - UI breaks when trying to create block pool before storage cluster/system creation
2013222 - Full breakage for nightly payload promotion
2013273 - Nil pointer exception when phc2sys options are missing
2013321 - TuneD: high CPU utilization of the TuneD daemon.
2013416 - Multiple assets emit different content to the same filename
2013431 - Application selector dropdown has incorrect font-size and positioning
2013528 - mapi_current_pending_csr is always set to 1 on OpenShift Container Platform 4.8
2013545 - Service binding created outside topology is not visible
2013599 - Scorecard support storage is not included in ocp4.9
2013632 - Correction/Changes in Quick Start Guides for ODF 4.9 (Install ODF guide)
2013646 - fsync controller will show false positive if gaps in metrics are observed.
2013710 - ZTP Operator subscriptions for 4.9 release branch should point to 4.9 by default
2013751 - Service details page is showing wrong in-cluster hostname
2013787 - There are two titles 'Network Attachment Definition Details' on NAD details page
2013871 - Resource table headings are not aligned with their column data
2013895 - Cannot enable accelerated network via MachineSets on Azure
2013920 - "--collector.filesystem.ignored-mount-points is DEPRECATED and will be removed in 2.0.0, use --collector.filesystem.mount-points-exclude"
2013930 - Create Buttons enabled for Bucket Class, Backingstore and Namespace Store in the absence of Storagesystem(or MCG)
2013969 - oVirt CSI driver fails on creating PVCs on hosted engine storage domain
2013990 - Observe dashboard crashes on reload when perspective has changed (in another tab)
2013996 - Project detail page: Action "Delete Project" does nothing for the default project
2014071 - Payload imagestream new tags not properly updated during cluster upgrade
2014153 - SRIOV exclusive pooling
2014202 - [OCP-4.8.10] OVN-Kubernetes: service IP is not responding when egressIP set to the namespace
2014238 - AWS console test is failing on importing duplicate YAML definitions
2014245 - Several aria-labels, external links, and labels aren't internationalized
2014248 - Several files aren't internationalized
2014352 - Could not filter out machine by using node name on machines page
2014464 - Unexpected spacing/padding below navigation groups in developer perspective
2014471 - Helm Release notes tab is not automatically open after installing a chart for other languages
2014486 - Integration Tests: OLM single namespace operator tests failing
2014488 - Custom operator cannot change orders of condition tables
2014497 - Regex slows down different forms and creates too much recursion errors in the log
2014538 - Kuryr controller crash looping on self._get_vip_port(loadbalancer).id 'NoneType' object has no attribute 'id'
2014614 - Metrics scraping requests should be assigned to exempt priority level
2014710 - TestIngressStatus test is broken on Azure
2014954 - The prometheus-k8s-{0,1} pods are CrashLoopBackoff repeatedly
2014995 - oc adm must-gather cannot gather audit logs with 'None' audit profile
2015115 - [RFE] PCI passthrough
2015133 - [IBMCLOUD] ServiceID API key credentials seems to be insufficient for ccoctl '--resource-group-name' parameter
2015154 - Support ports defined networks and primarySubnet
2015274 - Yarn dev fails after updates to dynamic plugin JSON schema logic
2015337 - 4.9.0 GA MetalLB operator image references need to be adjusted to match production
2015386 - Possibility to add labels to the built-in OCP alerts
2015395 - Table head on Affinity Rules modal is not fully expanded
2015416 - CI implementation for Topology plugin
2015418 - Project Filesystem query returns No datapoints found
2015420 - No vm resource in project view's inventory
2015422 - No conflict checking on snapshot name
2015472 - Form and YAML view switch button should have distinguishable status
2015481 - [4.10] sriov-network-operator daemon pods are failing to start
2015493 - Cloud Controller Manager Operator does not respect 'additionalTrustBundle' setting
2015496 - Storage - PersistentVolumes : Claim colum value 'No Claim' in English
2015498 - [UI] Add capacity when not applicable (for MCG only deployment and External mode cluster) fails to pass any info. to user and tries to just load a blank screen on 'Add Capacity' button click
2015506 - Home - Search - Resources - APIRequestCount : hard to select an item from ellipsis menu
2015515 - Kubelet checks all providers even if one is configured: NoCredentialProviders: no valid providers in chain.
2015535 - Administration - ResourceQuotas - ResourceQuota details: Inside Pie chart 'x% used' is in English
2015549 - Observe - Metrics: Column heading and pagination text is in English
2015557 - Workloads - DeploymentConfigs : Error message is in English
2015568 - Compute - Nodes : CPU column's values are in English
2015635 - Storage operator fails causing installation to fail on ASH
2015660 - "Finishing boot source customization" screen should not use term "patched"
2015793 - [hypershift] The collect-profiles job's pods should run on the control-plane node
2015806 - Metrics view in Deployment reports "Forbidden" when not cluster-admin
2015819 - Conmon sandbox processes run on non-reserved CPUs with workload partitioning
2015837 - OS_CLOUD overwrites install-config's platform.openstack.cloud
2015950 - update from 4.7.22 to 4.8.11 is failing due to large amount of secrets to watch
2015952 - RH CodeReady Workspaces Operator in e2e testing will soon fail
2016004 - [RFE] RHCOS: help determining whether a user-provided image was already booted (Ignition provisioning already performed)
2016008 - [4.10] Bootimage bump tracker
2016052 - No e2e CI presubmit configured for release component azure-file-csi-driver
2016053 - No e2e CI presubmit configured for release component azure-file-csi-driver-operator
2016054 - No e2e CI presubmit configured for release component cluster-autoscaler
2016055 - No e2e CI presubmit configured for release component console
2016058 - openshift-sync does not synchronise in "ose-jenkins:v4.8"
2016064 - No e2e CI presubmit configured for release component ibm-cloud-controller-manager
2016065 - No e2e CI presubmit configured for release component ibmcloud-machine-controllers
2016175 - Pods get stuck in ContainerCreating state when attaching volumes fails on SNO clusters.
2016179 - Add Sprint 208 translations
2016228 - Collect Profiles pprof secret is hardcoded to openshift-operator-lifecycle-manager
2016235 - should update to 7.5.11 for grafana resources version label
2016296 - Openshift virtualization : Create Windows Server 2019 VM using template : Fails
2016334 - shiftstack: SRIOV nic reported as not supported
2016352 - Some pods start before CA resources are present
2016367 - Empty task box is getting created for a pipeline without finally task
2016435 - Duplicate AlertmanagerClusterFailedToSendAlerts alerts
2016438 - Feature flag gating is missing in few extensions contributed via knative plugin
2016442 - OCPonRHV: pvc should be in Bound state and without error when choosing default sc
2016446 - [OVN-Kubernetes] Egress Networkpolicy is failing Intermittently for statefulsets
2016453 - Complete i18n for GaugeChart defaults
2016479 - iface-id-ver is not getting updated for existing lsp
2016925 - Dashboards with All filter, change to a specific value and change back to All, data will disappear
2016951 - dynamic actions list is not disabling "open console" for stopped vms
2016955 - m5.large instance type for bootstrap node is hardcoded causing deployments to fail if instance type is not available
2016988 - NTO does not set io_timeout and max_retries for AWS Nitro instances
2017016 - [REF] Virtualization menu
2017036 - [sig-network-edge][Feature:Idling] Unidling should handle many TCP connections fails in periodic-ci-openshift-release-master-ci-4.9-e2e-openstack-ovn
2017050 - Dynamic Plugins: Shared modules loaded multiple times, breaking use of PatternFly
2017130 - t is not a function error navigating to details page
2017141 - Project dropdown has a dynamic inline width added which can cause min-width issue
2017244 - ovirt csi operator static files creation is in the wrong order
2017276 - [4.10] Volume mounts not created with the correct security context
2017327 - Running opm index prune fails with error: removing operator package cic-operator: FOREIGN KEY constraint failed
2017427 - NTO does not restart TuneD daemon when profile application is taking too long
2017535 - Broken Argo CD link image on GitOps Details Page
2017547 - Siteconfig application sync fails with The AgentClusterInstall is invalid: spec.provisionRequirements.controlPlaneAgents: Required value when updating images references
2017564 - On-prem prepender dispatcher script overwrites DNS search settings
2017565 - CCMO does not handle additionalTrustBundle on Azure Stack
2017566 - MetalLB: Web Console -Create Address pool form shows address pool name twice
2017606 - [e2e][automation] add test to verify send key for VNC console
2017650 - [OVN]EgressFirewall cannot be applied correctly if cluster has windows nodes
2017656 - VM IP address is "undefined" under VM details -> ssh field
2017663 - SSH password authentication is disabled when public key is not supplied
2017680 - [gcp] Couldn’t enable support for instances with GPUs on GCP
2017732 - [KMS] Prevent creation of encryption enabled storageclass without KMS connection set
2017752 - (release-4.10) obfuscate identity provider attributes in collected authentication.operator.openshift.io resource
2017756 - overlaySize setting on containerruntimeconfig is ignored due to cri-o defaults
2017761 - [e2e][automation] dummy bug for 4.9 test dependency
2017872 - Add Sprint 209 translations
2017874 - The installer is incorrectly checking the quota for X instances instead of G and VT instances
2017879 - Add Chinese translation for "alternate"
2017882 - multus: add handling of pod UIDs passed from runtime
2017909 - [ICNI 2.0] ovnkube-masters stop processing add/del events for pods
2018042 - HorizontalPodAutoscaler CPU averageValue did not show up in HPA metrics GUI
2018093 - Managed cluster should ensure control plane pods do not run in best-effort QoS
2018094 - the tooltip length is limited
2018152 - CNI pod is not restarted when It cannot start servers due to ports being used
2018208 - e2e-metal-ipi-ovn-ipv6 are failing 75% of the time
2018234 - user settings are saved in local storage instead of on cluster
2018264 - Delete Export button doesn't work in topology sidebar (general issue with unknown CSV?)
2018272 - Deployment managed by link and topology sidebar links to invalid resource page (at least for Exports)
2018275 - Topology graph doesn't show context menu for Export CSV
2018279 - Edit and Delete confirmation modals for managed resource should close when the managed resource is clicked
2018380 - Migrate docs links to access.redhat.com
2018413 - Error: context deadline exceeded, OCP 4.8.9
2018428 - PVC is deleted along with VM even with "Delete Disks" unchecked
2018445 - [e2e][automation] enhance tests for downstream
2018446 - [e2e][automation] move tests to different level
2018449 - [e2e][automation] add test about create/delete network attachment definition
2018490 - [4.10] Image provisioning fails with file name too long
2018495 - Fix typo in internationalization README
2018542 - Kernel upgrade does not reconcile DaemonSet
2018880 - Get 'No datapoints found.' when query metrics about alert rule KubeCPUQuotaOvercommit and KubeMemoryQuotaOvercommit
2018884 - QE - Adapt crw-basic feature file to OCP 4.9/4.10 changes
2018935 - go.sum not updated, that ART extracts version string from, WAS: Missing backport from 4.9 for Kube bump PR#950
2018965 - e2e-metal-ipi-upgrade is permafailing in 4.10
2018985 - The rootdisk size is 15Gi of windows VM in customize wizard
2019001 - AWS: Operator degraded (CredentialsFailing): 1 of 6 credentials requests are failing to sync.
2019096 - Update SRO leader election timeout to support SNO
2019129 - SRO in operator hub points to wrong repo for README
2019181 - Performance profile does not apply
2019198 - ptp offset metrics are not named according to the log output
2019219 - [IBMCLOUD]: cloud-provider-ibm missing IAM permissions in CCCMO CredentialRequest
2019284 - Stop action should not in the action list while VMI is not running
2019346 - zombie processes accumulation and Argument list too long
2019360 - [RFE] Virtualization Overview page
2019452 - Logger object in LSO appends to existing logger recursively
2019591 - Operator install modal body that scrolls has incorrect padding causing shadow position to be incorrect
2019634 - Pause and migration is enabled in action list for a user who has view only permission
2019636 - Actions in VM tabs should be disabled when user has view only permission
2019639 - "Take snapshot" should be disabled while VM image is still being imported
2019645 - Create button is not removed on "Virtual Machines" page for view only user
2019646 - Permission error should pop-up immediately while clicking "Create VM" button on template page for view only user
2019647 - "Remove favorite" and "Create new Template" should be disabled in template action list for view only user
2019717 - can't delete VM with un-owned PVC attached
2019722 - The shared-resource-csi-driver-node pod runs as “BestEffort” qosClass
2019739 - The shared-resource-csi-driver-node uses imagePullPolicy as "Always"
2019744 - [RFE] Suggest users to download newest RHEL 8 version
2019809 - [OVN][Upgrade] After upgrade to 4.7.34 ovnkube-master pods are in CrashLoopBackOff/ContainerCreating and other multiple issues at OVS/OVN level
2019827 - Display issue with top-level menu items running demo plugin
2019832 - 4.10 Nightlies blocked: Failed to upgrade authentication, operator was degraded
2019886 - Kuryr unable to finish ports recovery upon controller restart
2019948 - [RFE] Restructring Virtualization links
2019972 - The Nodes section doesn't display the csr of the nodes that are trying to join the cluster
2019977 - Installer doesn't validate region causing binary to hang with a 60 minute timeout
2019986 - Dynamic demo plugin fails to build
2019992 - instance:node_memory_utilisation:ratio metric is incorrect
2020001 - Update dockerfile for demo dynamic plugin to reflect dir change
2020003 - MCD does not regard "dangling" symlinks as files, attempts to write through them on next backup, resulting in "not writing through dangling symlink" error and degradation.
2020107 - cluster-version-operator: remove runlevel from CVO namespace
2020153 - Creation of Windows high performance VM fails
2020216 - installer: Azure storage container blob where is stored bootstrap.ign file shouldn't be public
2020250 - Replacing deprecated ioutil
2020257 - Dynamic plugin with multiple webpack compilation passes may fail to build
2020275 - ClusterOperators link in console returns blank page during upgrades
2020377 - permissions error while using tcpdump option with must-gather
2020489 - coredns_dns metrics don't include the custom zone metrics data because the CoreDNS prometheus plugin is not defined
2020498 - "Show PromQL" button is disabled
2020625 - [AUTH-52] User fails to login from web console with keycloak OpenID IDP after enable group membership sync feature
2020638 - [4.7] CI conformance test failures related to CustomResourcePublishOpenAPI
2020664 - DOWN subports are not cleaned up
2020904 - When trying to create a connection from the Developer view between VMs, it fails
2021016 - 'Prometheus Stats' of dashboard 'Prometheus Overview' miss data on console compared with Grafana
2021017 - 404 page not found error on knative eventing page
2021031 - QE - Fix the topology CI scripts
2021048 - [RFE] Added MAC Spoof check
2021053 - Metallb operator presented as community operator
2021067 - Extensive number of requests from storage version operator in cluster
2021081 - Missing PolicyGenTemplate for configuring Local Storage Operator LocalVolumes
2021135 - [azure-file-csi-driver] "make unit-test" returns non-zero code, but tests pass
2021141 - Cluster should allow a fast rollout of kube-apiserver is failing on single node
2021151 - Sometimes the DU node does not get the performance profile configuration applied and MachineConfigPool stays stuck in Updating
2021152 - imagePullPolicy is "Always" for ptp operator images
2021191 - Project admins should be able to list available network attachment definitions
2021205 - Invalid URL in git import form causes validation to not happen on URL change
2021322 - cluster-api-provider-azure should populate purchase plan information
2021337 - Dynamic Plugins: ResourceLink doesn't render when passed a groupVersionKind
2021364 - Installer requires invalid AWS permission s3:GetBucketReplication
2021400 - Bump documentationBaseURL to 4.10
2021405 - [e2e][automation] VM creation wizard Cloud Init editor
2021433 - "[sig-builds][Feature:Builds][pullsearch] docker build where the registry is not specified" test fail permanently on disconnected
2021466 - [e2e][automation] Windows guest tool mount
2021544 - OCP 4.6.44 - Ingress VIP assigned as secondary IP in ovs-if-br-ex and added to resolv.conf as nameserver
2021551 - Build is not recognizing the USER group from an s2i image
2021607 - Unable to run openshift-install with a vcenter hostname that begins with a numeric character
2021629 - api request counts for current hour are incorrect
2021632 - [UI] Clicking on odf-operator breadcrumb from StorageCluster details page displays empty page
2021693 - Modals assigned modal-lg class are no longer the correct width
2021724 - Observe > Dashboards: Graph lines are not visible when obscured by other lines
2021731 - CCO occasionally down, reporting networksecurity.googleapis.com API as disabled
2021936 - Kubelet version in RPMs should be using Dockerfile label instead of git tags
2022050 - [BM][IPI] Failed during bootstrap - unable to read client-key /var/lib/kubelet/pki/kubelet-client-current.pem
2022053 - dpdk application with vhost-net is not able to start
2022114 - Console logging every proxy request
2022144 - 1 of 3 ovnkube-master pods stuck in clbo after ipi bm deployment - dualstack (Intermittent)
2022251 - wait interval in case of a failed upload due to 403 is unnecessarily long
2022399 - MON_DISK_LOW troubleshooting guide link when clicked, gives 404 error.
2022447 - ServiceAccount in manifests conflicts with OLM
2022502 - Patternfly tables with a checkbox column are not displaying correctly because of conflicting css rules.
2022509 - getOverrideForManifest does not check manifest.GVK.Group
2022536 - WebScale: duplicate ecmp next hop error caused by multiple of the same gateway IPs in ovnkube cache
2022612 - no namespace field for "Kubernetes / Compute Resources / Namespace (Pods)" admin console dashboard
2022627 - Machine object not picking up external FIP added to an openstack vm
2022646 - configure-ovs.sh failure - Error: unknown connection 'WARN:'
2022707 - Observe / monitoring dashboard shows forbidden errors on Dev Sandbox
2022801 - Add Sprint 210 translations
2022811 - Fix kubelet log rotation file handle leak
2022812 - [SCALE] ovn-kube service controller executes unnecessary load balancer operations
2022824 - Large number of sessions created by vmware-vsphere-csi-driver-operator during e2e tests
2022880 - Pipeline renders with minor visual artifact with certain task dependencies
2022886 - Incorrect URL in operator description
2023042 - CRI-O filters custom runtime allowed annotation when both custom workload and custom runtime sections specified under the config
2023060 - [e2e][automation] Windows VM with CDROM migration
2023077 - [e2e][automation] Home Overview Virtualization status
2023090 - [e2e][automation] Examples of Import URL for VM templates
2023102 - [e2e][automation] Cloudinit disk of VM from custom template
2023216 - ACL for a deleted egressfirewall still present on node join switch
2023228 - Remove Tech preview badge on Trigger components 1.6 OSP on OCP 4.9
2023238 - [sig-devex][Feature:ImageEcosystem][python][Slow] hot deploy for openshift python image Django example should work with hot deploy
2023342 - SCC admission should take ephemeralContainers into account
2023356 - Devfiles can't be loaded in Safari on macOS (403 - Forbidden)
2023434 - Update Azure Machine Spec API to accept Marketplace Images
2023500 - Latency experienced while waiting for volumes to attach to node
2023522 - can't remove package from index: database is locked
2023560 - "Network Attachment Definitions" has no project field on the top in the list view
2023592 - [e2e][automation] add mac spoof check for nad
2023604 - ACL violation when deleting a provisioning-configuration resource
2023607 - console returns blank page when normal user without any projects visit Installed Operators page
2023638 - Downgrade support level for extended control plane integration to Dev Preview
2023657 - inconsistent behaviours of adding ssh key on rhel node between 4.9 and 4.10
2023675 - Changing CNV Namespace
2023779 - Fix Patch 104847 in 4.9
2023781 - initial hardware devices is not loading in wizard
2023832 - CCO updates lastTransitionTime for non-Status changes
2023839 - Bump recommended FCOS to 34.20211031.3.0
2023865 - Console css overrides prevent dynamic plug-in PatternFly tables from displaying correctly
2023950 - make test-e2e-operator on kubernetes-nmstate results in failure to pull image from "registry:5000" repository
2023985 - [4.10] OVN idle service cannot be accessed after upgrade from 4.8
2024055 - External DNS added extra prefix for the TXT record
2024108 - Occasionally node remains in SchedulingDisabled state even after update has been completed successfully
2024190 - e2e-metal UPI is permafailing with inability to find rhcos.json
2024199 - 400 Bad Request error for some queries for the non admin user
2024220 - Cluster monitoring checkbox flickers when installing Operator in all-namespace mode
2024262 - Sample catalog is not displayed when one API call to the backend fails
2024309 - cluster-etcd-operator: defrag controller needs to provide proper observability
2024316 - modal about support displays wrong annotation
2024328 - [oVirt / RHV] PV disks are lost when machine deleted while node is disconnected
2024399 - Extra space is in the translated text of "Add/Remove alternate service" on Create Route page
2024448 - When ssh_authorized_keys is empty in form view it should not appear in yaml view
2024493 - Observe > Alerting > Alerting rules page throws error trying to destructure undefined
2024515 - test-blocker: Ceph-storage-plugin tests failing
2024535 - hotplug disk missing OwnerReference
2024537 - WINDOWS_IMAGE_LINK does not refer to windows cloud image
2024547 - Detail page is breaking for namespace store, backing store, and bucket class.
2024551 - KMS resources not getting created for IBM FlashSystem storage
2024586 - Special Resource Operator(SRO) - Empty image in BuildConfig when using RT kernel
2024613 - pod-identity-webhook starts without tls
2024617 - vSphere CSI tests constantly failing with Rollout of the monitoring stack failed and is degraded
2024665 - Bindable services are not shown on topology
2024731 - linuxptp container: unnecessary checking of interfaces
2024750 - i18n some remaining OLM items
2024804 - gcp-pd-csi-driver does not use trusted-ca-bundle when cluster proxy configured
2024826 - [RHOS/IPI] Masters are not joining a clusters when installing on OpenStack
2024841 - test Keycloak with latest tag
2024859 - Not able to deploy an existing image from private image registry using developer console
2024880 - Egress IP breaks when network policies are applied
2024900 - Operator upgrade kube-apiserver
2024932 - console throws "Unauthorized" error after logging out
2024933 - openshift-sync plugin does not sync existing secrets/configMaps on start up
2025093 - Installer does not honour diskformat specified in storage policy and defaults to zeroedthick
2025230 - ClusterAutoscalerUnschedulablePods should not be a warning
2025266 - CreateResource route has exact prop which need to be removed
2025301 - [e2e][automation] VM actions availability in different VM states
2025304 - overwrite storage section of the DV spec instead of the pvc section
2025431 - [RFE]Provide specific windows source link
2025458 - [IPI-AWS] cluster-baremetal-operator pod in a crashloop state after patching from 4.7.21 to 4.7.36
2025464 - [aws] openshift-install gather bootstrap collects logs for bootstrap and only one master node
2025467 - [OVN-K][ETP=local] Host to service backed by ovn pods doesn't work for ExternalTrafficPolicy=local
2025481 - Update VM Snapshots UI
2025488 - [DOCS] Update the doc for nmstate operator installation
2025592 - ODC 4.9 supports invalid devfiles only
2025765 - It should not try to load from storageProfile after unchecking "Apply optimized StorageProfile settings"
2025767 - VMs orphaned during machineset scaleup
2025770 - [e2e] non-priv seems to be looking for v2v-vmware configMap in ns "kubevirt-hyperconverged" while using customize wizard
2025788 - [IPI on azure]Pre-check on IPI Azure, should check VM Size’s vCPUsAvailable instead of vCPUs for the sku.
2025821 - Make "Network Attachment Definitions" available to regular user
2025823 - The console nav bar ignores plugin separator in existing sections
2025830 - CentOS capitalization is wrong
2025837 - Warn users that the RHEL URL expires
2025884 - External CCM deploys openstack-cloud-controller-manager from quay.io/openshift/origin-
2025903 - [UI] RoleBindings tab doesn't show correct rolebindings
2026104 - [sig-imageregistry][Feature:ImageAppend] Image append should create images by appending them [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2026178 - OpenShift Alerting Rules Style-Guide Compliance
2026209 - Updating a task fails (tekton hub integration)
2026223 - Internal error occurred: failed calling webhook "ptpconfigvalidationwebhook.openshift.io"
2026321 - [UPI on Azure] Shall we remove allowedValue about VMSize in ARM templates
2026343 - [upgrade from 4.5 to 4.6] .status.connectionState.address of catsrc community-operators is not correct
2026352 - Kube-Scheduler revision-pruner fails during install of new cluster
2026374 - aws-pod-identity-webhook go.mod version out of sync with build environment
2026383 - Error when rendering custom Grafana dashboard through ConfigMap
2026387 - node tuning operator metrics endpoint serving old certificates after certificate rotation
2026396 - Cachito Issues: sriov-network-operator Image build failure
2026488 - openshift-controller-manager - delete event is repeating pathologically
2026489 - ThanosRuleRuleEvaluationLatencyHigh alerts when a large number of alerts are defined.
2026560 - Cluster-version operator does not remove unrecognized volume mounts
2026699 - fixed a bug with missing metadata
2026813 - add Mellanox CX-6 Lx DeviceID 101f NIC support in SR-IOV Operator
2026898 - Description/details are missing for Local Storage Operator
2027132 - Use the specific icon for Fedora and CentOS template
2027238 - "Node Exporter / USE Method / Cluster" CPU utilization graph shows incorrect legend
2027272 - KubeMemoryOvercommit alert should be human readable
2027281 - [Azure] External-DNS cannot find the private DNS zone in the resource group
2027288 - Devfile samples can't be loaded after fixing it on Safari (redirect caching issue)
2027299 - The status of checkbox component is not revealed correctly in code
2027311 - K8s watch hooks do not work when fetching core resources
2027342 - Alert ClusterVersionOperatorDown is firing on OpenShift Container Platform after ca certificate rotation
2027363 - The azure-file-csi-driver and azure-file-csi-driver-operator don't use the downstream images
2027387 - [IBMCLOUD] Terraform ibmcloud-provider buffers entirely the qcow2 image causing spikes of 5GB of RAM during installation
2027498 - [IBMCloud] SG Name character length limitation
2027501 - [4.10] Bootimage bump tracker
2027524 - Delete Application doesn't delete Channels or Brokers
2027563 - e2e/add-flow-ci.feature fix accessibility violations
2027585 - CVO crashes when changing spec.upstream to a cincinnati graph which includes invalid conditional edges
2027629 - Gather ValidatingWebhookConfiguration and MutatingWebhookConfiguration resource definitions
2027685 - openshift-cluster-csi-drivers pods crashing on PSI
2027745 - default samplesRegistry prevents the creation of imagestreams when registrySources.allowedRegistries is enforced
2027824 - ovnkube-master CrashLoopBackoff: panic: Expected slice or struct but got string
2027917 - No settings in hostfirmwaresettings and schema objects for masters
2027927 - sandbox creation fails due to obsolete option in /etc/containers/storage.conf
2027982 - nncp stucked at ConfigurationProgressing
2028019 - Max pending serving CSRs allowed in cluster machine approver is not right for UPI clusters
2028024 - After deleting a SpecialResource, the node is still tagged although the driver is removed
2028030 - Panic detected in cluster-image-registry-operator pod
2028042 - Desktop viewer for Windows VM shows "no Service for the RDP (Remote Desktop Protocol) can be found"
2028054 - Cloud controller manager operator can't get leader lease when upgrading from 4.8 up to 4.9
2028106 - [RFE] Use dynamic plugin actions for kubevirt plugin
2028141 - Console tests doesn't pass on Node.js 15 and 16
2028160 - Remove i18nKey in network-policy-peer-selectors.tsx
2028162 - Add Sprint 210 translations
2028170 - Remove leading and trailing whitespace
2028174 - Add Sprint 210 part 2 translations
2028187 - Console build doesn't pass on Node.js 16 because node-sass doesn't support it
2028217 - Cluster-version operator does not default Deployment replicas to one
2028240 - Multiple CatalogSources causing higher CPU use than necessary
2028268 - Password parameters are listed in FirmwareSchema even though they cannot and shouldn't be set in HostFirmwareSettings
2028325 - disableDrain should be set automatically on SNO
2028484 - AWS EBS CSI driver's livenessprobe does not respect operator's loglevel
2028531 - Missing netFilter to the list of parameters when platform is OpenStack
2028610 - Installer doesn't retry on GCP rate limiting
2028685 - LSO repeatedly reports errors while diskmaker-discovery pod is starting
2028695 - destroy cluster does not prune bootstrap instance profile
2028731 - The containerruntimeconfig controller makes a wrong assumption regarding the number of containerruntimeconfigs
2028802 - CRI-O panic due to invalid memory address or nil pointer dereference
2028816 - VLAN IDs not released on failures
2028881 - Override not working for the PerformanceProfile template
2028885 - Console should show an error context if it logs an error object
2028949 - Masthead dropdown item hover text color is incorrect
2028963 - Whereabouts should reconcile stranded IP addresses
2029034 - enabling ExternalCloudProvider leads to inoperative cluster
2029178 - Create VM with wizard - page is not displayed
2029181 - Missing CR from PGT
2029273 - wizard cannot be used if project field is "All Projects"
2029369 - Cypress tests github rate limit errors
2029371 - patch pipeline--worker nodes unexpectedly reboot during scale out
2029394 - missing empty text for hardware devices at wizard review
2029414 - Alibaba Disk snapshots with XFS filesystem cannot be used
2029416 - Alibaba Disk CSI driver does not use credentials provided by CCO / ccoctl
2029521 - EFS CSI driver cannot delete volumes under load
2029570 - Azure Stack Hub: CSI Driver does not use user-ca-bundle
2029579 - Clicking on an Application which has a Helm Release in it causes an error
2029644 - New resource FirmwareSchema - reset_required exists for Dell machines and doesn't for HPE
2029645 - Sync upstream 1.15.0 downstream
2029671 - VM actions "pause" and "clone" should be disabled while VM disk is still being imported
2029742 - [ovn] Stale lr-policy-list and snat rules left for egressip
2029750 - CVO keeps restarting because it fails to get the feature gate value during the initial start stage
2029785 - CVO panic when an edge is included in both edges and conditionaledges
2029843 - Downstream ztp-site-generate-rhel8 4.10 container image missing content(/home/ztp)
2030003 - HFS CRD: Attempt to set Integer parameter to not-numeric string value - no error
2030029 - [4.10][goroutine]Namespace stuck terminating: Failed to delete all resource types, 1 remaining: unexpected items still remain in namespace
2030228 - Fix StorageSpec resources field to use correct API
2030229 - Mirroring status card reflect wrong data
2030240 - Hide overview page for non-privileged user
2030305 - Export App job does not complete
2030347 - kube-state-metrics exposes metrics about resource annotations
2030364 - Shared resource CSI driver monitoring is not setup correctly
2030488 - Numerous Azure CI jobs are Failing with Partially Rendered machinesets
2030534 - Node selector/tolerations rules are evaluated too early
2030539 - Prometheus is not highly available
2030556 - Don't display Description or Message fields for alerting rules if those annotations are missing
2030568 - Operator installation fails to parse operatorframework.io/initialization-resource annotation
2030574 - console service uses older "service.alpha.openshift.io" for the service serving certificates.
2030677 - BOND CNI: There is no option to configure MTU on a Bond interface
2030692 - NPE in PipelineJobListener.upsertWorkflowJob
2030801 - CVE-2021-44716 golang: net/http: limit growth of header canonicalization cache
2030806 - CVE-2021-44717 golang: syscall: don't close fd 0 on ForkExec error
2030847 - PerformanceProfile API version should be v2
2030961 - Customizing the OAuth server URL does not apply to upgraded cluster
2031006 - Application name input field is not autofocused when user selects "Create application"
2031012 - Services of type loadbalancer do not work if the traffic reaches the node from an interface different from br-ex
2031040 - Error screen when open topology sidebar for a Serverless / knative service which couldn't be started
2031049 - [vsphere upi] pod machine-config-operator cannot be started due to panic issue
2031057 - Topology sidebar for Knative services shows a small pod ring with "0 undefined" as tooltip
2031060 - Failing CSR Unit test due to expired test certificate
2031085 - ovs-vswitchd running more threads than expected
2031141 - Some pods not able to reach k8s api svc IP 198.223.0.1
2031228 - CVE-2021-43813 grafana: directory traversal vulnerability
2031502 - [RFE] New common templates crash the ui
2031685 - Duplicated forward upstreams should be removed from the dns operator
2031699 - The displayed ipv6 address of a dns upstream should be case sensitive
2031797 - [RFE] Order and text of Boot source type input are wrong
2031826 - CI tests needed to confirm driver-toolkit image contents
2031831 - OCP Console - Global CSS overrides affecting dynamic plugins
2031839 - Starting from Go 1.17 invalid certificates will render a cluster dysfunctional
2031858 - GCP beta-level Role (was: CCO occasionally down, reporting networksecurity.googleapis.com API as disabled)
2031875 - [RFE]: Provide online documentation for the SRO CRD (via oc explain)
2031926 - [ipv6dualstack] After SVC conversion from single stack only to RequireDualStack, cannot curl NodePort from the node itself
2032006 - openshift-gitops-application-controller-0 failed to schedule with sufficient node allocatable resource
2032111 - arm64 cluster, create project and deploy the example deployment, pod is CrashLoopBackOff due to the image is built on linux+amd64
2032141 - open the alertrule link in new tab, got empty page
2032179 - [PROXY] external dns pod cannot reach to cloud API in the cluster behind a proxy
2032296 - Cannot create machine with ephemeral disk on Azure
2032407 - UI will show the default openshift template wizard for HANA template
2032415 - Templates page - remove "support level" badge and add "support level" column which should not be hard coded
2032421 - [RFE] UI integration with automatic updated images
2032516 - Not able to import git repo with .devfile.yaml
2032521 - openshift-installer intermittent failure on AWS with "Error: Provider produced inconsistent result after apply" when creating the aws_vpc_dhcp_options_association resource
2032547 - hardware devices table have filter when table is empty
2032565 - Deploying compressed files with a MachineConfig resource degrades the MachineConfigPool
2032566 - Cluster-ingress-router does not support Azure Stack
2032573 - Adopting enforces deploy_kernel/ramdisk which does not work with deploy_iso
2032589 - DeploymentConfigs ignore resolve-names annotation
2032732 - Fix styling conflicts due to recent console-wide CSS changes
2032831 - Knative Services and Revisions are not shown when Service has no ownerReference
2032851 - Networking is "not available" in Virtualization Overview
2032926 - Machine API components should use K8s 1.23 dependencies
2032994 - AddressPool IP is not allocated to service external IP wtih aggregationLength 24
2032998 - Can not achieve 250 pods/node with OVNKubernetes in a multiple worker node cluster
2033013 - Project dropdown in user preferences page is broken
2033044 - Unable to change import strategy if devfile is invalid
2033098 - Conjunction in ProgressiveListFooter.tsx is not translatable
2033111 - IBM VPC operator library bump removed global CLI args
2033138 - "No model registered for Templates" shows on customize wizard
2033215 - Flaky CI: crud/other-routes.spec.ts fails sometimes with an cypress ace/a11y AssertionError: 1 accessibility violation was detected
2033239 - [IPI on Alibabacloud] 'openshift-install' gets the wrong region (‘cn-hangzhou’) selected
2033257 - unable to use configmap for helm charts
2033271 - [IPI on Alibabacloud] destroying cluster succeeded, but the resource group deletion wasn’t triggered
2033290 - Product builds for console are failing
2033382 - MAPO is missing machine annotations
2033391 - csi-driver-shared-resource-operator sets unused CVO-manifest annotations
2033403 - Devfile catalog does not show provider information
2033404 - Cloud event schema is missing source type and resource field is using wrong value
2033407 - Secure route data is not pre-filled in edit flow form
2033422 - CNO not allowing LGW conversion from SGW in runtime
2033434 - Offer darwin/arm64 oc in clidownloads
2033489 - CCM operator failing on baremetal platform
2033518 - [aws-efs-csi-driver]Should not accept invalid FSType in sc for AWS EFS driver
2033524 - [IPI on Alibabacloud] interactive installer cannot list existing base domains
2033536 - [IPI on Alibabacloud] bootstrap complains invalid value for alibabaCloud.resourceGroupID when updating "cluster-infrastructure-02-config.yml" status, which leads to bootstrap failed and all master nodes NotReady
2033538 - Gather Cost Management Metrics Custom Resource
2033579 - SRO cannot update the special-resource-lifecycle ConfigMap if the data field is undefined
2033587 - Flaky CI test project-dashboard.scenario.ts: Resource Quotas Card was not found on project detail page
2033634 - list-style-type: disc is applied to the modal dropdowns
2033720 - Update samples in 4.10
2033728 - Bump OVS to 2.16.0-33
2033729 - remove runtime request timeout restriction for azure
2033745 - Cluster-version operator makes upstream update service / Cincinnati requests more frequently than intended
2033749 - Azure Stack Terraform fails without Local Provider
2033750 - Local volume should pull multi-arch image for kube-rbac-proxy
2033751 - Bump kubernetes to 1.23
2033752 - make verify fails due to missing yaml-patch
2033784 - set kube-apiserver degraded=true if webhook matches a virtual resource
2034004 - [e2e][automation] add tests for VM snapshot improvements
2034068 - [e2e][automation] Enhance tests for 4.10 downstream
2034087 - [OVN] EgressIP was assigned to the node which is not egress node anymore
2034097 - [OVN] After edit EgressIP object, the status is not correct
2034102 - [OVN] Recreate the deleted EgressIP object got InvalidEgressIP warning
2034129 - blank page returned when clicking 'Get started' button
2034144 - [OVN AWS] ovn-kube egress IP monitoring cannot detect the failure on ovn-k8s-mp0
2034153 - CNO does not verify MTU migration for OpenShiftSDN
2034155 - [OVN-K] [Multiple External Gateways] Per pod SNAT is disabled
2034170 - Use function.knative.dev for Knative Functions related labels
2034190 - unable to add new VirtIO disks to VMs
2034192 - Prometheus fails to insert reporting metrics when the sample limit is met
2034243 - regular user cant load template list
2034245 - installing a cluster on aws, gcp always fails with "Error: Incompatible provider version"
2034248 - GPU/Host device modal is too small
2034257 - regular user Create VM missing permissions alert
2034285 - [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources [Serial] [Suite:openshift/conformance/serial]
2034287 - do not block upgrades if we can't create storageclass in 4.10 in vsphere
2034300 - Du validator policy is NonCompliant after DU configuration completed
2034319 - Negation constraint is not validating packages
2034322 - CNO doesn't pick up settings required when ExternalControlPlane topology
2034350 - The CNO should implement the Whereabouts IP reconciliation cron job
2034362 - update description of disk interface
2034398 - The Whereabouts IPPools CRD should include the podref field
2034409 - Default CatalogSources should be pointing to 4.10 index images
2034410 - Metallb BGP, BFD: prometheus is not scraping the frr metrics
2034413 - cloud-network-config-controller fails to init with secret "cloud-credentials" not found in manual credential mode
2034460 - Summary: cloud-network-config-controller does not account for different environment
2034474 - Template's boot source is "Unknown source" before and after set enableCommonBootImageImport to true
2034477 - [OVN] Multiple EgressIP objects configured, EgressIPs weren't working properly
2034493 - Change cluster version operator log level
2034513 - [OVN] After update one EgressIP in EgressIP object, one internal IP lost from lr-policy-list
2034527 - IPI deployment fails 'timeout reached while inspecting the node' when provisioning network ipv6
2034528 - [IBM VPC] volumeBindingMode should be WaitForFirstConsumer
2034534 - Update ose-machine-api-provider-openstack images to be consistent with ART
2034537 - Update team
2034559 - KubeAPIErrorBudgetBurn firing outside recommended latency thresholds
2034563 - [Azure] create machine with wrong ephemeralStorageLocation value success
2034577 - Current OVN gateway mode should be reflected on node annotation as well
2034621 - context menu not popping up for application group
2034622 - Allow volume expansion by default in vsphere CSI storageclass 4.10
2034624 - Warn about unsupported CSI driver in vsphere operator
2034647 - missing volumes list in snapshot modal
2034648 - Rebase openshift-controller-manager to 1.23
2034650 - Rebase openshift/builder to 1.23
2034705 - vSphere: storage e2e tests logging configuration data
2034743 - EgressIP: assigning the same egress IP to a second EgressIP object after a ovnkube-master restart does not fail.
2034766 - Special Resource Operator(SRO) - no cert-manager pod created in dual stack environment
2034785 - ptpconfig with summary_interval cannot be applied
2034823 - RHEL9 should be starred in template list
2034838 - An external router can inject routes if no service is added
2034839 - Jenkins sync plugin does not synchronize ConfigMap having label role=jenkins-agent
2034879 - Lifecycle hook's name and owner shouldn't be allowed to be empty
2034881 - Cloud providers components should use K8s 1.23 dependencies
2034884 - ART cannot build the image because it tries to download controller-gen
2034889 - oc adm prune deployments does not work
2034898 - Regression in recently added Events feature
2034957 - update openshift-apiserver to kube 1.23.1
2035015 - ClusterLogForwarding CR remains stuck remediating forever
2035093 - openshift-cloud-network-config-controller never runs on Hypershift cluster
2035141 - [RFE] Show GPU/Host devices in template's details tab
2035146 - "kubevirt-plugin~PVC cannot be empty" shows on add-disk modal while adding existing PVC
2035167 - [cloud-network-config-controller] unable to deleted cloudprivateipconfig when deleting
2035199 - IPv6 support in mtu-migration-dispatcher.yaml
2035239 - e2e-metal-ipi-virtualmedia tests are permanently failing
2035250 - Peering with ebgp peer over multi-hops doesn't work
2035264 - [RFE] Provide a proper message for nonpriv user who not able to add PCI devices
2035315 - invalid test cases for AWS passthrough mode
2035318 - Upgrade management workflow needs to allow custom upgrade graph path for disconnected env
2035321 - Add Sprint 211 translations
2035326 - [ExternalCloudProvider] installation with additional network on workers fails
2035328 - Ccoctl does not ignore credentials request manifest marked for deletion
2035333 - Kuryr orphans ports on 504 errors from Neutron
2035348 - Fix two grammar issues in kubevirt-plugin.json strings
2035393 - oc set data --dry-run=server makes persistent changes to configmaps and secrets
2035409 - OLM E2E test depends on operator package that's no longer published
2035439 - SDN Automatic assignment EgressIP on GCP returned node IP adress not egressIP address
2035453 - [IPI on Alibabacloud] 2 worker machines stuck in Failed phase due to connection to 'ecs-cn-hangzhou.aliyuncs.com' timeout, although the specified region is 'us-east-1'
2035454 - [IPI on Alibabacloud] the OSS bucket created during installation for image registry is not deleted after destroying the cluster
2035467 - UI: Queried metrics can't be ordered on Oberve->Metrics page
2035494 - [SDN Migration]ovnkube-node pods CrashLoopBackOff after sdn migrated to ovn for RHEL workers
2035515 - [IBMCLOUD] allowVolumeExpansion should be true in storage class
2035602 - [e2e][automation] add tests for Virtualization Overview page cards
2035703 - Roles -> RoleBindings tab doesn't show RoleBindings correctly
2035704 - RoleBindings list page filter doesn't apply
2035705 - Azure 'Destroy cluster' get stuck when the cluster resource group is already not existing.
2035757 - [IPI on Alibabacloud] one master node turned NotReady which leads to installation failed
2035772 - AccessMode and VolumeMode is not reserved for customize wizard
2035847 - Two dashes in the Cronjob / Job pod name
2035859 - the output of opm render doesn't contain olm.constraint which is defined in dependencies.yaml
2035882 - [BIOS setting values] Create events for all invalid settings in spec
2035903 - One redundant capi-operator credential requests in “oc adm extract --credentials-requests”
2035910 - [UI] Manual approval options are missing after ODF 4.10 installation starts when Manual Update approval is chosen
2035927 - Cannot enable HighNodeUtilization scheduler profile
2035933 - volume mode and access mode are empty in customize wizard review tab
2035969 - "ip a " shows "Error: Peer netns reference is invalid" after create test pods
2035986 - Some pods under kube-scheduler/kube-controller-manager are using the deprecated annotation
2036006 - [BIOS setting values] Attempt to set Integer parameter results in preparation error
2036029 - New added cloud-network-config operator doesn’t supported aws sts format credential
2036096 - [azure-file-csi-driver] there are no e2e tests for NFS backend
2036113 - cluster scaling new nodes ovs-configuration fails on all new nodes
2036567 - [csi-driver-nfs] Upstream merge: Bump k8s libraries to 1.23
2036569 - [cloud-provider-openstack] Upstream merge: Bump k8s libraries to 1.23
2036577 - OCP 4.10 nightly builds from 4.10.0-0.nightly-s390x-2021-12-18-034912 to 4.10.0-0.nightly-s390x-2022-01-11-233015 fail to upgrade from OCP 4.9.11 and 4.9.12 for network type OVNKubernetes for zVM hypervisor environments
2036622 - sdn-controller crashes when restarted while a previous egress IP assignment exists
2036717 - Valid AlertmanagerConfig custom resource with valid a mute time interval definition is rejected
2036826 - oc adm prune deployments can prune the RC/RS
2036827 - The ccoctl still accepts CredentialsRequests without ServiceAccounts on GCP platform
2036861 - kube-apiserver is degraded while enable multitenant
2036937 - Command line tools page shows wrong download ODO link
2036940 - oc registry login fails if the file is empty or stdout
2036951 - [cluster-csi-snapshot-controller-operator] proxy settings is being injected in container
2036989 - Route URL copy to clipboard button wraps to a separate line by itself
2036990 - ZTP "DU Done inform policy" never becomes compliant on multi-node clusters
2036993 - Machine API components should use Go lang version 1.17
2037036 - The tuned profile goes into degraded status and ksm.service is displayed in the log.
2037061 - aws and gcp CredentialsRequest manifests missing ServiceAccountNames list for cluster-api
2037073 - Alertmanager container fails to start because of startup probe never being successful
2037075 - Builds do not support CSI volumes
2037167 - Some log level in ibm-vpc-block-csi-controller are hard code
2037168 - IBM-specific Deployment manifest for package-server-manager should be excluded on non-IBM cluster-profiles
2037182 - PingSource badge color is not matched with knativeEventing color
2037203 - "Running VMs" card is too small in Virtualization Overview
2037209 - [IPI on Alibabacloud] worker nodes are put in the default resource group unexpectedly
2037237 - Add "This is a CD-ROM boot source" to customize wizard
2037241 - default TTL for noobaa cache buckets should be 0
2037246 - Cannot customize auto-update boot source
2037276 - [IBMCLOUD] vpc-node-label-updater may fail to label nodes appropriately
2037288 - Remove stale image reference
2037331 - Ensure the ccoctl behaviors are similar between aws and gcp on the existing resources
2037483 - Rbacs for Pods within the CBO should be more restrictive
2037484 - Bump dependencies to k8s 1.23
2037554 - Mismatched wave number error message should include the wave numbers that are in conflict
2037622 - [4.10-Alibaba CSI driver][Restore size for volumesnapshot/volumesnapshotcontent is showing as 0 in Snapshot feature for Alibaba platform]
2037635 - impossible to configure custom certs for default console route in ingress config
2037637 - configure custom certificate for default console route doesn't take effect for OCP >= 4.8
2037638 - Builds do not support CSI volumes as volume sources
2037664 - text formatting issue in Installed Operators list table
2037680 - [IPI on Alibabacloud] sometimes operator 'cloud-controller-manager' tells empty VERSION, due to conflicts on listening tcp :8080
2037689 - [IPI on Alibabacloud] sometimes operator 'cloud-controller-manager' tells empty VERSION, due to conflicts on listening tcp :8080
2037801 - Serverless installation is failing on CI jobs for e2e tests
2037813 - Metal Day 1 Networking - networkConfig Field Only Accepts String Format
2037856 - use lease for leader election
2037891 - 403 Forbidden error shows for all the graphs in each grafana dashboard after upgrade from 4.9 to 4.10
2037903 - Alibaba Cloud: delete-ram-user requires the credentials-requests
2037904 - upgrade operator deployment failed due to memory limit too low for manager container
2038021 - [4.10-Alibaba CSI driver][Default volumesnapshot class is not added/present after successful cluster installation]
2038034 - non-privileged user cannot see auto-update boot source
2038053 - Bump dependencies to k8s 1.23
2038088 - Remove ipa-downloader references
2038160 - The default project missed the annotation : openshift.io/node-selector: ""
2038166 - Starting from Go 1.17 invalid certificates will render a cluster non-functional
2038196 - must-gather is missing collecting some metal3 resources
2038240 - Error when configuring a file using permissions bigger than decimal 511 (octal 0777)
2038253 - Validator Policies are long lived
2038272 - Failures to build a PreprovisioningImage are not reported
2038384 - Azure Default Instance Types are Incorrect
2038389 - Failing test: [sig-arch] events should not repeat pathologically
2038412 - Import page calls the git file list unnecessarily twice from GitHub/GitLab/Bitbucket
2038465 - Upgrade chromedriver to 90.x to support Mac M1 chips
2038481 - kube-controller-manager-guard and openshift-kube-scheduler-guard pods being deleted and restarted on a cordoned node when drained
2038596 - Auto egressIP for OVN cluster on GCP: After egressIP object is deleted, egressIP still takes effect
2038663 - update kubevirt-plugin OWNERS
2038691 - [AUTH-8] Panic on user login when the user belongs to a group in the IdP side and the group already exists via "oc adm groups new"
2038705 - Update ptp reviewers
2038761 - Open Observe->Targets page, wait for a while, page become blank
2038768 - All the filters on the Observe->Targets page can't work
2038772 - Some monitors failed to display on Observe->Targets page
2038793 - [SDN EgressIP] After reboot egress node, the egressip was lost from egress node
2038827 - should add user containers in /etc/subuid and /etc/subgid to support run pods in user namespaces
2038832 - New templates for centos stream8 are missing registry suggestions in create vm wizard
2038840 - [SDN EgressIP]cloud-network-config-controller pod was CrashLoopBackOff after some operation
2038864 - E2E tests fail because multi-hop-net was not created
2038879 - All Builds are getting listed in DeploymentConfig under workloads on OpenShift Console
2038934 - CSI driver operators should use the trusted CA bundle when cluster proxy is configured
2038968 - Move feature gates from a carry patch to openshift/api
2039056 - Layout issue with breadcrumbs on API explorer page
2039057 - Kind column is not wide enough in API explorer page
2039064 - Bulk Import e2e test flaking at a high rate
2039065 - Diagnose and fix Bulk Import e2e test that was previously disabled
2039085 - Cloud credential operator configuration failing to apply in hypershift/ROKS clusters
2039099 - [OVN EgressIP GCP] After reboot egress node, egressip that was previously assigned got lost
2039109 - [FJ OCP4.10 Bug]: startironic.sh failed to pull the image of image-customization container when behind a proxy
2039119 - CVO hotloops on Service openshift-monitoring/cluster-monitoring-operator
2039170 - [upgrade]Error shown on registry operator "missing the cloud-provider-config configmap" after upgrade
2039227 - Improve image customization server parameter passing during installation
2039241 - Improve image customization server parameter passing during installation
2039244 - Helm Release revision history page crashes the UI
2039294 - SDN controller metrics cannot be consumed correctly by prometheus
2039311 - oc Does Not Describe Build CSI Volumes
2039315 - Helm release list page should only fetch secrets for deployed charts
2039321 - SDN controller metrics are not being consumed by prometheus
2039330 - Create NMState button doesn't work in OperatorHub web console
2039339 - cluster-ingress-operator should report Unupgradeable if user has modified the aws resources annotations
2039345 - CNO does not verify the minimum MTU value for IPv6/dual-stack clusters.
2039359 - oc adm prune deployments can't prune the RS where the associated Deployment no longer exists
2039382 - gather_metallb_logs does not have execution permission
2039406 - logout from rest session after vsphere operator sync is finished
2039408 - Add GCP region northamerica-northeast2 to allowed regions
2039414 - Cannot see the weights increased for NodeAffinity, InterPodAffinity, TaintandToleration
2039425 - No need to set KlusterletAddonConfig CR applicationManager->enabled: true in RAN ztp deployment
2039491 - oc - git:// protocol used in unit tests
2039516 - Bump OVN to ovn21.12-21.12.0-25
2039529 - Project Dashboard Resource Quotas Card empty state test flaking at a high rate
2039534 - Diagnose and fix Project Dashboard Resource Quotas Card test that was previously disabled
2039541 - Resolv-prepender script duplicating entries
2039586 - [e2e] update centos8 to centos stream8
2039618 - VM created from SAP HANA template leads to 404 page if leave one network parameter empty
2039619 - [AWS] In tree provisioner storageclass aws disk type should contain 'gp3' and csi provisioner storageclass default aws disk type should be 'gp3'
2039670 - Create PDBs for control plane components
2039678 - Page goes blank when create image pull secret
2039689 - [IPI on Alibabacloud] Pay-by-specification NAT is no longer supported
2039743 - React missing key warning when open operator hub detail page (and maybe others as well)
2039756 - React missing key warning when open KnativeServing details
2039770 - Observe dashboard doesn't react on time-range changes after browser reload when perspective is changed in another tab
2039776 - Observe dashboard shows nothing if the URL links to an non existing dashboard
2039781 - [GSS] OBC is not visible by admin of a Project on Console
2039798 - Contextual binding with Operator backed service creates visual connector instead of Service binding connector
2039868 - Insights Advisor widget is not in the disabled state when the Insights Operator is disabled
2039880 - Log level too low for control plane metrics
2039919 - Add E2E test for router compression feature
2039981 - ZTP for standard clusters installs stalld on master nodes
2040132 - Flag --port has been deprecated, This flag has no effect now and will be removed in v1.24. You can use --secure-port instead
2040136 - external-dns-operator pod keeps restarting and reports error: timed out waiting for cache to be synced
2040143 - [IPI on Alibabacloud] suggest to remove region "cn-nanjing" or provide better error message
2040150 - Update ConfigMap keys for IBM HPCS
2040160 - [IPI on Alibabacloud] installation fails when region does not support pay-by-bandwidth
2040285 - Bump build-machinery-go for console-operator to pickup change in yaml-patch repository
2040357 - bump OVN to ovn-2021-21.12.0-11.el8fdp
2040376 - "unknown instance type" error for supported m6i.xlarge instance
2040394 - Controller: enqueue the failed configmap till services update
2040467 - Cannot build ztp-site-generator container image
2040504 - Change AWS EBS GP3 IOPS in MachineSet doesn't take affect in OpenShift 4
2040521 - RouterCertsDegraded certificate could not validate route hostname v4-0-config-system-custom-router-certs.apps
2040535 - Auto-update boot source is not available in customize wizard
2040540 - ovs hardware offload: ovsargs format error when adding vf netdev name
2040603 - rhel worker scaleup playbook failed because missing some dependency of podman
2040616 - rolebindings page doesn't load for normal users
2040620 - [MAPO] Error pulling MAPO image on installation
2040653 - Topology sidebar warns that another component is updated while rendering
2040655 - User settings update fails when selecting application in topology sidebar
2040661 - Different react warnings about updating state on unmounted components when leaving topology
2040670 - Permafailing CI job: periodic-ci-openshift-release-master-nightly-4.10-e2e-gcp-libvirt-cert-rotation
2040671 - [Feature:IPv6DualStack] most tests are failing in dualstack ipi
2040694 - Three upstream HTTPClientConfig struct fields missing in the operator
2040705 - Du policy for standard cluster runs the PTP daemon on masters and workers
2040710 - cluster-baremetal-operator cannot update BMC subscription CR
2040741 - Add CI test(s) to ensure that metal3 components are deployed in vSphere, OpenStack and None platforms
2040782 - Import YAML page blocks input with more then one generateName attribute
2040783 - The Import from YAML summary page doesn't show the resource name if created via generateName attribute
2040791 - Default PGT policies must be 'inform' to integrate with the Lifecycle Operator
2040793 - Fix snapshot e2e failures
2040880 - do not block upgrades if we can't connect to vcenter
2041087 - MetalLB: MetalLB CR is not upgraded automatically from 4.9 to 4.10
2041093 - autounattend.xml missing
2041204 - link to templates in virtualization-cluster-overview inventory card is to all templates
2041319 - [IPI on Alibabacloud] installation in region "cn-shanghai" failed, due to "Resource alicloud_vswitch CreateVSwitch Failed...InvalidCidrBlock.Overlapped"
2041326 - Should bump cluster-kube-descheduler-operator to kubernetes version V1.23
2041329 - aws and gcp CredentialsRequest manifests missing ServiceAccountNames list for cloud-network-config-controller
2041361 - [IPI on Alibabacloud] Disable session persistence and removebBandwidth peak of listener
2041441 - Provision volume with size 3000Gi even if sizeRange: '[10-2000]GiB' in storageclass on IBM cloud
2041466 - Kubedescheduler version is missing from the operator logs
2041475 - React components should have a (mostly) unique name in react dev tools to simplify code analyses
2041483 - MetallB: quay.io/openshift/origin-kube-rbac-proxy:4.10 deploy Metallb CR is missing (controller and speaker pods)
2041492 - Spacing between resources in inventory card is too small
2041509 - GCP Cloud provider components should use K8s 1.23 dependencies
2041510 - cluster-baremetal-operator doesn't run baremetal-operator's subscription webhook
2041541 - audit: ManagedFields are dropped using API not annotation
2041546 - ovnkube: set election timer at RAFT cluster creation time
2041554 - use lease for leader election
2041581 - KubeDescheduler operator log shows "Use of insecure cipher detected"
2041583 - etcd and api server cpu mask interferes with a guaranteed workload
2041598 - Including CA bundle in Azure Stack cloud config causes MCO failure
2041605 - Dynamic Plugins: discrepancy in proxy alias documentation/implementation
2041620 - bundle CSV alm-examples does not parse
2041641 - Fix inotify leak and kubelet retaining memory
2041671 - Delete templates leads to 404 page
2041694 - [IPI on Alibabacloud] installation fails when region does not support the cloud_essd disk category
2041734 - ovs hwol: VFs are unbind when switchdev mode is enabled
2041750 - [IPI on Alibabacloud] trying "create install-config" with region "cn-wulanchabu (China (Ulanqab))" (or "ap-southeast-6 (Philippines (Manila))", "cn-guangzhou (China (Guangzhou))") failed due to invalid endpoint
2041763 - The Observe > Alerting pages no longer have their default sort order applied
2041830 - CI: ovn-kubernetes-master-e2e-aws-ovn-windows is broken
2041854 - Communities / Local prefs are applied to all the services regardless of the pool, and only one community is applied
2041882 - cloud-network-config operator can't work normal on GCP workload identity cluster
2041888 - Intermittent incorrect build to run correlation, leading to run status updates applied to wrong build, builds stuck in non-terminal phases
2041926 - [IPI on Alibabacloud] Installer ignores public zone when it does not exist
2041971 - [vsphere] Reconciliation of mutating webhooks didn't happen
2041989 - CredentialsRequest manifests being installed for ibm-cloud-managed profile
2041999 - [PROXY] external dns pod cannot recognize custom proxy CA
2042001 - unexpectedly found multiple load balancers
2042029 - kubedescheduler fails to install completely
2042036 - [IBMCLOUD] "openshift-install explain installconfig.platform.ibmcloud" contains not yet supported custom vpc parameters
2042049 - Seeing warning related to unrecognized feature gate in kubescheduler & KCM logs
2042059 - update discovery burst to reflect lots of CRDs on openshift clusters
2042069 - Revert toolbox to rhcos-toolbox
2042169 - Can not delete egressnetworkpolicy in Foreground propagation
2042181 - MetalLB: User should not be allowed add same bgp advertisement twice in BGP address pool
2042265 - [IBM]"--scale-down-utilization-threshold" doesn't work on IBMCloud
2042274 - Storage API should be used when creating a PVC
2042315 - Baremetal IPI deployment with IPv6 control plane and disabled provisioning network fails as the nodes do not pass introspection
2042366 - Lifecycle hooks should be independently managed
2042370 - [IPI on Alibabacloud] installer panics when the zone does not have an enhanced NAT gateway
2042382 - [e2e][automation] CI takes more then 2 hours to run
2042395 - Add prerequisites for active health checks test
2042438 - Missing rpms in openstack-installer image
2042466 - Selection does not happen when switching from Topology Graph to List View
2042493 - No way to verify if IPs with leading zeros are still valid in the apiserver
2042567 - insufficient info on CodeReady Containers configuration
2042600 - Alone, the io.kubernetes.cri-o.Devices option poses a security risk
2042619 - Overview page of the console is broken for hypershift clusters
2042655 - [IPI on Alibabacloud] cluster becomes unusable if there is only one kube-apiserver pod running
2042711 - [IBMCloud] Machine Deletion Hook cannot work on IBMCloud
2042715 - [AliCloud] Machine Deletion Hook cannot work on AliCloud
2042770 - [IPI on Alibabacloud] with vpcID & vswitchIDs specified, the installer would still try creating NAT gateway unexpectedly
2042829 - Topology performance: HPA was fetched for each Deployment (Pod Ring)
2042851 - Create template from SAP HANA template flow - VM is created instead of a new template
2042906 - Edit machineset with same machine deletion hook name succeed
2042960 - azure-file CI fails with "gid(0) in storageClass and pod fsgroup(1000) are not equal"
2043003 - [IPI on Alibabacloud] 'destroy cluster' of a failed installation (bug2041694) stuck after 'stage=Nat gateways'
2043042 - [Serial] [sig-auth][Feature:OAuthServer] [RequestHeaders] [IdP] test RequestHeaders IdP [Suite:openshift/conformance/serial]
2043043 - Cluster Autoscaler should use K8s 1.23 dependencies
2043064 - Topology performance: Unnecessary rerenderings in topology nodes (unchanged mobx props)
2043078 - Favorite system projects not visible in the project selector after toggling "Show default projects".
2043117 - Recommended operators links are erroneously treated as external
2043130 - Update CSI sidecars to the latest release for 4.10
2043234 - Missing validation when creating several BGPPeers with the same peerAddress
2043240 - Sync openshift/descheduler with sigs.k8s.io/descheduler
2043254 - crio does not bind the security profiles directory
2043296 - Ignition fails when reusing existing statically-keyed LUKS volume
2043297 - [4.10] Bootimage bump tracker
2043316 - RHCOS VM fails to boot on Nutanix AOS
2043446 - Rebase aws-efs-utils to the latest upstream version.
2043556 - Add proper ci-operator configuration to ironic and ironic-agent images
2043577 - DPU network operator
2043651 - Fix bug with exp. backoff working correcly when setting nextCheck in vsphere operator
2043675 - Too many machines deleted by cluster autoscaler when scaling down
2043683 - Revert bug 2039344 Ignoring IPv6 addresses against etcd cert validation
2043709 - Logging flags no longer being bound to command line
2043721 - Installer bootstrap hosts using outdated kubelet containing bugs
2043731 - [IBMCloud] terraform outputs missing for ibmcloud bootstrap and worker ips for must-gather
2043759 - Bump cluster-ingress-operator to k8s.io/api 1.23
2043780 - Bump router to k8s.io/api 1.23
2043787 - Bump cluster-dns-operator to k8s.io/api 1.23
2043801 - Bump CoreDNS to k8s.io/api 1.23
2043802 - EgressIP stopped working after single egressIP for a netnamespace is switched to the other node of HA pair after the first egress node is shutdown
2043961 - [OVN-K] If pod creation fails, retry doesn't work as expected.
2044201 - Templates golden image parameters names should be supported
2044244 - Builds are failing after upgrading the cluster with builder image [jboss-webserver-5/jws56-openjdk8-openshift-rhel8]
2044248 - [IBMCloud][vpc.block.csi.ibm.io]Cluster common user use the storageclass without parameter “csi.storage.k8s.io/fstype” create pvc,pod successfully but write data to the pod's volume failed of "Permission denied"
2044303 - [ovn][cloud-network-config-controller] cloudprivateipconfigs ips were left after deleting egressip objects
2044347 - Bump to kubernetes 1.23.3
2044481 - collect sharedresource cluster scoped instances with must-gather
2044496 - Unable to create hardware events subscription - failed to add finalizers
2044628 - CVE-2022-21673 grafana: Forward OAuth Identity Token can allow users to access some data sources
2044680 - Additional libovsdb performance and resource consumption fixes
2044704 - Observe > Alerting pages should not show runbook links in 4.10
2044717 - [e2e] improve tests for upstream test environment
2044724 - Remove namespace column on VM list page when a project is selected
2044745 - Upgrading cluster from 4.9 to 4.10 on Azure (ARO) causes the cloud-network-config-controller pod to CrashLoopBackOff
2044808 - machine-config-daemon-pull.service: use cp instead of cat when extracting MCD in OKD
2045024 - CustomNoUpgrade alerts should be ignored
2045112 - vsphere-problem-detector has missing rbac rules for leases
2045199 - SnapShot with Disk Hot-plug hangs
2045561 - Cluster Autoscaler should use the same default Group value as Cluster API
2045591 - Reconciliation of aws pod identity mutating webhook did not happen
2045849 - Add Sprint 212 translations
2045866 - MCO Operator pod spam "Error creating event" warning messages in 4.10
2045878 - Sync upstream 1.16.0 downstream; includes hybrid helm plugin
2045916 - [IBMCloud] Default machine profile in installer is unreliable
2045927 - [FJ OCP4.10 Bug]: Podman failed to pull the IPA image due to the loss of proxy environment
2046025 - [IPI on Alibabacloud] pre-configured alicloud DNS private zone is deleted after destroying cluster, please clarify
2046137 - oc output for unknown commands is not human readable
2046296 - When creating multiple consecutive egressIPs on GCP not all of them get assigned to the instance
2046297 - Bump DB reconnect timeout
2046517 - In Notification drawer, the "Recommendations" header shows when there isn't any recommendations
2046597 - Observe > Targets page may show the wrong service monitor is multiple monitors have the same namespace & label selectors
2046626 - Allow setting custom metrics for Ansible-based Operators
2046683 - [AliCloud]"--scale-down-utilization-threshold" doesn't work on AliCloud
2047025 - Installation fails because of Alibaba CSI driver operator is degraded
2047190 - Bump Alibaba CSI driver for 4.10
2047238 - When using communities and localpreferences together, only localpreference gets applied
2047255 - alibaba: resourceGroupID not found
2047258 - [aws-usgov] fatal error occurred if AMI is not provided for AWS GovCloud regions
2047317 - Update HELM OWNERS files under Dev Console
2047455 - [IBM Cloud] Update custom image os type
2047496 - Add image digest feature
2047779 - do not degrade cluster if storagepolicy creation fails
2047927 - 'oc get project' caused 'Observed a panic: cannot deep copy core.NamespacePhase' when AllRequestBodies is used
2047929 - use lease for leader election
2047975 - [sig-network][Feature:Router] The HAProxy router should override the route host for overridden domains with a custom value [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2048046 - New route annotation to show another URL or hide topology URL decorator doesn't work for Knative Services
2048048 - Application tab in User Preferences dropdown menus are too wide.
2048050 - Topology list view items are not highlighted on keyboard navigation
2048117 - [IBM]Shouldn't change status.storage.bucket and status.storage.resourceKeyCRN when update sepc.stroage,ibmcos with invalid value
2048413 - Bond CNI: Failed to attach Bond NAD to pod
2048443 - Image registry operator panics when finalizes config deletion
2048478 - [alicloud] CCM deploys alibaba-cloud-controller-manager from quay.io/openshift/origin-*
2048484 - SNO: cluster-policy-controller failed to start due to missing serving-cert/tls.crt
2048598 - Web terminal view is broken
2048836 - ovs-configure mis-detecting the ipv6 status on IPv4 only cluster causing Deployment failure
2048891 - Topology page is crashed
2049003 - 4.10: [IBMCloud] ibm-vpc-block-csi-node does not specify an update strategy, only resource requests, or priority class
2049043 - Cannot create VM from template
2049156 - 'oc get project' caused 'Observed a panic: cannot deep copy core.NamespacePhase' when AllRequestBodies is used
2049886 - Placeholder bug for OCP 4.10.0 metadata release
2049890 - Warning annotation for pods with cpu requests or limits on single-node OpenShift cluster without workload partitioning
2050189 - [aws-efs-csi-driver] Merge upstream changes since v1.3.2
2050190 - [aws-ebs-csi-driver] Merge upstream changes since v1.2.0
2050227 - Installation on PSI fails with: 'openstack platform does not have the required standard-attr-tag network extension'
2050247 - Failing test in periodics: [sig-network] Services should respect internalTrafficPolicy=Local Pod and Node, to Pod (hostNetwork: true) [Feature:ServiceInternalTrafficPolicy] [Skipped:Network/OVNKubernetes] [Suite:openshift/conformance/parallel] [Suite:k8s]
2050250 - Install fails to bootstrap, complaining about DefragControllerDegraded and sad members
2050310 - ContainerCreateError when trying to launch large (>500) numbers of pods across nodes
2050370 - alert data for burn budget needs to be updated to prevent regression
2050393 - ZTP missing support for local image registry and custom machine config
2050557 - Can not push images to image-registry when enabling KMS encryption in AlibabaCloud
2050737 - Remove metrics and events for master port offsets
2050801 - Vsphere upi tries to access vsphere during manifests generation phase
2050883 - Logger object in LSO does not log source location accurately
2051692 - co/image-registry is degrade because ImagePrunerDegraded: Job has reached the specified backoff limit
2052062 - Whereabouts should implement client-go 1.22+
2052125 - [4.10] Crio appears to be coredumping in some scenarios
2052210 - [aws-c2s] kube-apiserver crashloops due to missing cloud config
2052339 - Failing webhooks will block an upgrade to 4.10 mid-way through the upgrade.
2052458 - [IBM Cloud] ibm-vpc-block-csi-controller does not specify an update strategy, priority class, or only resource requests
2052598 - kube-scheduler should use configmap lease
2052599 - kube-controller-manger should use configmap lease
2052600 - Failed to scaleup RHEL machine against OVN cluster due to jq tool is required by configure-ovs.sh
2052609 - [vSphere CSI driver Operator] RWX volumes counts metrics vsphere_rwx_volumes_total not valid
2052611 - MetalLB: BGPPeer object does not have ability to set ebgpMultiHop
2052612 - MetalLB: Webhook Validation: Two BGPPeers instances can have different router ID set.
2052644 - Infinite OAuth redirect loop post-upgrade to 4.10.0-rc.1
2052666 - [4.10.z] change gitmodules to rhcos-4.10 branch
2052756 - [4.10] PVs are not being cleaned up after PVC deletion
2053175 - oc adm catalog mirror throws 'missing signature key' error when using file://local/index
2053218 - ImagePull fails with error "unable to pull manifest from example.com/busy.box:v5 invalid reference format"
2053252 - Sidepanel for Connectors/workloads in topology shows invalid tabs
2053268 - inability to detect static lifecycle failure
2053314 - requestheader IDP test doesn't wait for cleanup, causing high failure rates
2053323 - OpenShift-Ansible BYOH Unit Tests are Broken
2053339 - Remove dev preview badge from IBM FlashSystem deployment windows
2053751 - ztp-site-generate container is missing convenience entrypoint
2053945 - [4.10] Failed to apply sriov policy on intel nics
2054109 - Missing "app" label
2054154 - RoleBinding in project without subject is causing "Project access" page to fail
2054244 - Latest pipeline run should be listed on the top of the pipeline run list
2054288 - console-master-e2e-gcp-console is broken
2054562 - DPU network operator 4.10 branch need to sync with master
2054897 - Unable to deploy hw-event-proxy operator
2055193 - e2e-metal-ipi-serial-ovn-ipv6 is failing frequently
2055358 - Summary Interval Hardcoded in PTP Operator if Set in the Global Body Instead of Command Line
2055371 - Remove Check which enforces summary_interval must match logSyncInterval
2055689 - [ibm]Operator storage PROGRESSING and DEGRADED is true during fresh install for ocp4.11
2055894 - CCO mint mode will not work for Azure after sunsetting of Active Directory Graph API
2056441 - AWS EFS CSI driver should use the trusted CA bundle when cluster proxy is configured
2056479 - ovirt-csi-driver-node pods are crashing intermittently
2056572 - reconcilePrecaching error: cannot list resource "clusterserviceversions" in API group "operators.coreos.com" at the cluster scope"
2056629 - [4.10] EFS CSI driver can't unmount volumes with "wait: no child processes"
2056878 - (dummy bug) ovn-kubernetes ExternalTrafficPolicy still SNATs
2056928 - Ingresscontroller LB scope change behaviour differs for different values of aws-load-balancer-internal annotation
2056948 - post 1.23 rebase: regression in service-load balancer reliability
2057438 - Service Level Agreement (SLA) always show 'Unknown'
2057721 - Fix Proxy support in RHACM 2.4.2
2057724 - Image creation fails when NMstateConfig CR is empty
2058641 - [4.10] Pod density test causing problems when using kube-burner
2059761 - 4.9.23-s390x-machine-os-content manifest invalid when mirroring content for disconnected install
2060610 - Broken access to public images: Unable to connect to the server: no basic auth credentials
2060956 - service domain can't be resolved when networkpolicy is used in OCP 4.10-rc
- References:
https://access.redhat.com/security/cve/CVE-2014-3577 https://access.redhat.com/security/cve/CVE-2016-10228 https://access.redhat.com/security/cve/CVE-2017-14502 https://access.redhat.com/security/cve/CVE-2018-20843 https://access.redhat.com/security/cve/CVE-2018-1000858 https://access.redhat.com/security/cve/CVE-2019-8625 https://access.redhat.com/security/cve/CVE-2019-8710 https://access.redhat.com/security/cve/CVE-2019-8720 https://access.redhat.com/security/cve/CVE-2019-8743 https://access.redhat.com/security/cve/CVE-2019-8764 https://access.redhat.com/security/cve/CVE-2019-8766 https://access.redhat.com/security/cve/CVE-2019-8769 https://access.redhat.com/security/cve/CVE-2019-8771 https://access.redhat.com/security/cve/CVE-2019-8782 https://access.redhat.com/security/cve/CVE-2019-8783 https://access.redhat.com/security/cve/CVE-2019-8808 https://access.redhat.com/security/cve/CVE-2019-8811 https://access.redhat.com/security/cve/CVE-2019-8812 https://access.redhat.com/security/cve/CVE-2019-8813 https://access.redhat.com/security/cve/CVE-2019-8814 https://access.redhat.com/security/cve/CVE-2019-8815 https://access.redhat.com/security/cve/CVE-2019-8816 https://access.redhat.com/security/cve/CVE-2019-8819 https://access.redhat.com/security/cve/CVE-2019-8820 https://access.redhat.com/security/cve/CVE-2019-8823 https://access.redhat.com/security/cve/CVE-2019-8835 https://access.redhat.com/security/cve/CVE-2019-8844 https://access.redhat.com/security/cve/CVE-2019-8846 https://access.redhat.com/security/cve/CVE-2019-9169 https://access.redhat.com/security/cve/CVE-2019-13050 https://access.redhat.com/security/cve/CVE-2019-13627 https://access.redhat.com/security/cve/CVE-2019-14889 https://access.redhat.com/security/cve/CVE-2019-15903 https://access.redhat.com/security/cve/CVE-2019-19906 https://access.redhat.com/security/cve/CVE-2019-20454 https://access.redhat.com/security/cve/CVE-2019-20807 https://access.redhat.com/security/cve/CVE-2019-25013 
https://access.redhat.com/security/cve/CVE-2020-1730 https://access.redhat.com/security/cve/CVE-2020-3862 https://access.redhat.com/security/cve/CVE-2020-3864 https://access.redhat.com/security/cve/CVE-2020-3865 https://access.redhat.com/security/cve/CVE-2020-3867 https://access.redhat.com/security/cve/CVE-2020-3868 https://access.redhat.com/security/cve/CVE-2020-3885 https://access.redhat.com/security/cve/CVE-2020-3894 https://access.redhat.com/security/cve/CVE-2020-3895 https://access.redhat.com/security/cve/CVE-2020-3897 https://access.redhat.com/security/cve/CVE-2020-3899 https://access.redhat.com/security/cve/CVE-2020-3900 https://access.redhat.com/security/cve/CVE-2020-3901 https://access.redhat.com/security/cve/CVE-2020-3902 https://access.redhat.com/security/cve/CVE-2020-8927 https://access.redhat.com/security/cve/CVE-2020-9802 https://access.redhat.com/security/cve/CVE-2020-9803 https://access.redhat.com/security/cve/CVE-2020-9805 https://access.redhat.com/security/cve/CVE-2020-9806 https://access.redhat.com/security/cve/CVE-2020-9807 https://access.redhat.com/security/cve/CVE-2020-9843 https://access.redhat.com/security/cve/CVE-2020-9850 https://access.redhat.com/security/cve/CVE-2020-9862 https://access.redhat.com/security/cve/CVE-2020-9893 https://access.redhat.com/security/cve/CVE-2020-9894 https://access.redhat.com/security/cve/CVE-2020-9895 https://access.redhat.com/security/cve/CVE-2020-9915 https://access.redhat.com/security/cve/CVE-2020-9925 https://access.redhat.com/security/cve/CVE-2020-9952 https://access.redhat.com/security/cve/CVE-2020-10018 https://access.redhat.com/security/cve/CVE-2020-11793 https://access.redhat.com/security/cve/CVE-2020-13434 https://access.redhat.com/security/cve/CVE-2020-14391 https://access.redhat.com/security/cve/CVE-2020-15358 https://access.redhat.com/security/cve/CVE-2020-15503 https://access.redhat.com/security/cve/CVE-2020-25660 https://access.redhat.com/security/cve/CVE-2020-25677 
https://access.redhat.com/security/cve/CVE-2020-27618 https://access.redhat.com/security/cve/CVE-2020-27781 https://access.redhat.com/security/cve/CVE-2020-29361 https://access.redhat.com/security/cve/CVE-2020-29362 https://access.redhat.com/security/cve/CVE-2020-29363 https://access.redhat.com/security/cve/CVE-2021-3121 https://access.redhat.com/security/cve/CVE-2021-3326 https://access.redhat.com/security/cve/CVE-2021-3449 https://access.redhat.com/security/cve/CVE-2021-3450 https://access.redhat.com/security/cve/CVE-2021-3516 https://access.redhat.com/security/cve/CVE-2021-3517 https://access.redhat.com/security/cve/CVE-2021-3518 https://access.redhat.com/security/cve/CVE-2021-3520 https://access.redhat.com/security/cve/CVE-2021-3521 https://access.redhat.com/security/cve/CVE-2021-3537 https://access.redhat.com/security/cve/CVE-2021-3541 https://access.redhat.com/security/cve/CVE-2021-3733 https://access.redhat.com/security/cve/CVE-2021-3749 https://access.redhat.com/security/cve/CVE-2021-20305 https://access.redhat.com/security/cve/CVE-2021-21684 https://access.redhat.com/security/cve/CVE-2021-22946 https://access.redhat.com/security/cve/CVE-2021-22947 https://access.redhat.com/security/cve/CVE-2021-25215 https://access.redhat.com/security/cve/CVE-2021-27218 https://access.redhat.com/security/cve/CVE-2021-30666 https://access.redhat.com/security/cve/CVE-2021-30761 https://access.redhat.com/security/cve/CVE-2021-30762 https://access.redhat.com/security/cve/CVE-2021-33928 https://access.redhat.com/security/cve/CVE-2021-33929 https://access.redhat.com/security/cve/CVE-2021-33930 https://access.redhat.com/security/cve/CVE-2021-33938 https://access.redhat.com/security/cve/CVE-2021-36222 https://access.redhat.com/security/cve/CVE-2021-37750 https://access.redhat.com/security/cve/CVE-2021-39226 https://access.redhat.com/security/cve/CVE-2021-41190 https://access.redhat.com/security/cve/CVE-2021-43813 https://access.redhat.com/security/cve/CVE-2021-44716 
https://access.redhat.com/security/cve/CVE-2021-44717 https://access.redhat.com/security/cve/CVE-2022-0532 https://access.redhat.com/security/cve/CVE-2022-21673 https://access.redhat.com/security/cve/CVE-2022-24407 https://access.redhat.com/security/updates/classification/#moderate
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2022 Red Hat, Inc. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1
iQIVAwUBYipqONzjgjWX9erEAQjQcBAAgWTjA6Q2NgqfVf63ZpJF1jPurZLPqxDL 0in/5+/wqWaiQ6yk7wM3YBZgviyKnAMCVdrLsaR7R77BvfJcTE3W/fzogxpp6Rne eGT1PTgQRecrSIn+WG4gGSteavTULWOIoPvUiNpiy3Y7fFgjFdah+Nyx3Xd+xehM CEswylOd6Hr03KZ1tS3XL3kGL2botha48Yls7FzDFbNcy6TBAuycmQZifKu8mHaF aDAupVJinDnnVgACeS6CnZTAD+Vrx5W7NIisteXv4x5Hy+jBIUHr8Yge3oxYoFnC Y/XmuOw2KilLZuqFe+KHig45qT+FmNU8E1egcGpNWvmS8hGZfiG1jEQAqDPbZHxp sQAQZLQyz3TvXa29vp4QcsUuMxndIOi+QaK75JmqE06MqMIlFDYpr6eQOIgIZvFO RDZU/qvBjh56ypInoqInBf8KOQMy6eO+r6nFbMGcAfucXmz0EVcSP1oFHAoA1nWN rs1Qz/SO4CvdPERxcr1MLuBLggZ6iqGmHKk5IN0SwcndBHaVJ3j/LBv9m7wBYVry bSvojBDYx5ricbTwB5sGzu7oH5yVl813FA9cjkFpEhBiMtTfI+DKC8ssoRYNHd5Z 7gLW6KWPUIDuCIiiioPZAJMyvJ0IMrNDoQ0lhqPeV7PFdlRhT95M/DagUZOpPVuT b5PUYUBIZLc= =GUDA -----END PGP SIGNATURE----- -- RHSA-announce mailing list RHSA-announce@redhat.com https://listman.redhat.com/mailman/listinfo/rhsa-announce . Solution:
Before applying the update, back up your existing installation, including all applications, configuration files, databases and database settings, and so on.
The References section of this erratum contains a download link for the update. You must be logged in to download the update. Description:
Red Hat Advanced Cluster Management for Kubernetes 2.1.6 images
Red Hat Advanced Cluster Management for Kubernetes provides the capabilities to address common challenges that administrators and site reliability engineers face as they work across a range of public and private cloud environments. Clusters and applications are all visible and managed from a single console—with security policy built in.
Bug fixes:
- RHACM 2.1.6 images (BZ#1940581)
- When generating the import cluster string, it can include unescaped characters (BZ#1934184)
- Bugs fixed (https://bugzilla.redhat.com/):
1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash 1929338 - CVE-2020-35149 mquery: Code injection via merge or clone operation 1934184 - When generating the import cluster string, it can include unescaped characters 1940581 - RHACM 2.1.6 images
- Relevant releases/architectures:
Red Hat JBoss Core Services on RHEL 7 Server - noarch, ppc64, x86_64
- Description:
This release adds the new Apache HTTP Server 2.4.37 Service Pack 7 packages that are part of the JBoss Core Services offering. Refer to the Release Notes for information on the most significant bug fixes and enhancements included in this release. Solution:
Before applying this update, make sure all previously released errata relevant to your system have been applied.
For details on how to apply this update, refer to:
https://access.redhat.com/articles/11258
- Bugs fixed (https://bugzilla.redhat.com/):
1941547 - CVE-2021-3450 openssl: CA certificate check bypass with X509_V_FLAG_X509_STRICT 1941554 - CVE-2021-3449 openssl: NULL pointer dereference in signature_algorithms processing
- Package List:
Red Hat JBoss Core Services on RHEL 7 Server:
Source: jbcs-httpd24-httpd-2.4.37-70.jbcs.el7.src.rpm jbcs-httpd24-mod_cluster-native-1.3.14-20.Final_redhat_2.jbcs.el7.src.rpm jbcs-httpd24-mod_http2-1.15.7-14.jbcs.el7.src.rpm jbcs-httpd24-mod_jk-1.2.48-13.redhat_1.jbcs.el7.src.rpm jbcs-httpd24-mod_md-2.0.8-33.jbcs.el7.src.rpm jbcs-httpd24-mod_security-2.9.2-60.GA.jbcs.el7.src.rpm jbcs-httpd24-nghttp2-1.39.2-37.jbcs.el7.src.rpm jbcs-httpd24-openssl-1.1.1g-6.jbcs.el7.src.rpm jbcs-httpd24-openssl-chil-1.0.0-5.jbcs.el7.src.rpm jbcs-httpd24-openssl-pkcs11-0.4.10-20.jbcs.el7.src.rpm
noarch: jbcs-httpd24-httpd-manual-2.4.37-70.jbcs.el7.noarch.rpm
ppc64: jbcs-httpd24-mod_http2-1.15.7-14.jbcs.el7.ppc64.rpm jbcs-httpd24-mod_http2-debuginfo-1.15.7-14.jbcs.el7.ppc64.rpm jbcs-httpd24-mod_md-2.0.8-33.jbcs.el7.ppc64.rpm jbcs-httpd24-mod_md-debuginfo-2.0.8-33.jbcs.el7.ppc64.rpm jbcs-httpd24-openssl-chil-1.0.0-5.jbcs.el7.ppc64.rpm jbcs-httpd24-openssl-chil-debuginfo-1.0.0-5.jbcs.el7.ppc64.rpm jbcs-httpd24-openssl-pkcs11-0.4.10-20.jbcs.el7.ppc64.rpm jbcs-httpd24-openssl-pkcs11-debuginfo-0.4.10-20.jbcs.el7.ppc64.rpm
x86_64: jbcs-httpd24-httpd-2.4.37-70.jbcs.el7.x86_64.rpm jbcs-httpd24-httpd-debuginfo-2.4.37-70.jbcs.el7.x86_64.rpm jbcs-httpd24-httpd-devel-2.4.37-70.jbcs.el7.x86_64.rpm jbcs-httpd24-httpd-selinux-2.4.37-70.jbcs.el7.x86_64.rpm jbcs-httpd24-httpd-tools-2.4.37-70.jbcs.el7.x86_64.rpm jbcs-httpd24-mod_cluster-native-1.3.14-20.Final_redhat_2.jbcs.el7.x86_64.rpm jbcs-httpd24-mod_cluster-native-debuginfo-1.3.14-20.Final_redhat_2.jbcs.el7.x86_64.rpm jbcs-httpd24-mod_http2-1.15.7-14.jbcs.el7.x86_64.rpm jbcs-httpd24-mod_http2-debuginfo-1.15.7-14.jbcs.el7.x86_64.rpm jbcs-httpd24-mod_jk-ap24-1.2.48-13.redhat_1.jbcs.el7.x86_64.rpm jbcs-httpd24-mod_jk-debuginfo-1.2.48-13.redhat_1.jbcs.el7.x86_64.rpm jbcs-httpd24-mod_jk-manual-1.2.48-13.redhat_1.jbcs.el7.x86_64.rpm jbcs-httpd24-mod_ldap-2.4.37-70.jbcs.el7.x86_64.rpm jbcs-httpd24-mod_md-2.0.8-33.jbcs.el7.x86_64.rpm jbcs-httpd24-mod_md-debuginfo-2.0.8-33.jbcs.el7.x86_64.rpm jbcs-httpd24-mod_proxy_html-2.4.37-70.jbcs.el7.x86_64.rpm jbcs-httpd24-mod_security-2.9.2-60.GA.jbcs.el7.x86_64.rpm jbcs-httpd24-mod_security-debuginfo-2.9.2-60.GA.jbcs.el7.x86_64.rpm jbcs-httpd24-mod_session-2.4.37-70.jbcs.el7.x86_64.rpm jbcs-httpd24-mod_ssl-2.4.37-70.jbcs.el7.x86_64.rpm jbcs-httpd24-nghttp2-1.39.2-37.jbcs.el7.x86_64.rpm jbcs-httpd24-nghttp2-debuginfo-1.39.2-37.jbcs.el7.x86_64.rpm jbcs-httpd24-nghttp2-devel-1.39.2-37.jbcs.el7.x86_64.rpm jbcs-httpd24-openssl-1.1.1g-6.jbcs.el7.x86_64.rpm jbcs-httpd24-openssl-chil-1.0.0-5.jbcs.el7.x86_64.rpm jbcs-httpd24-openssl-chil-debuginfo-1.0.0-5.jbcs.el7.x86_64.rpm jbcs-httpd24-openssl-debuginfo-1.1.1g-6.jbcs.el7.x86_64.rpm jbcs-httpd24-openssl-devel-1.1.1g-6.jbcs.el7.x86_64.rpm jbcs-httpd24-openssl-libs-1.1.1g-6.jbcs.el7.x86_64.rpm jbcs-httpd24-openssl-perl-1.1.1g-6.jbcs.el7.x86_64.rpm jbcs-httpd24-openssl-pkcs11-0.4.10-20.jbcs.el7.x86_64.rpm jbcs-httpd24-openssl-pkcs11-debuginfo-0.4.10-20.jbcs.el7.x86_64.rpm jbcs-httpd24-openssl-static-1.1.1g-6.jbcs.el7.x86_64.rpm
These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
- Description:
Red Hat JBoss Web Server is a fully integrated and certified set of components for hosting Java web applications. It is comprised of the Apache HTTP Server, the Apache Tomcat Servlet container, Apache Tomcat Connector (mod_jk), JBoss HTTP Connector (mod_cluster), Hibernate, and the Tomcat Native library.
Security Fix(es):
- golang: crypto/tls: certificate of wrong type is causing TLS client to panic (CVE-2021-34558)
- golang: net: lookup functions may return invalid host names (CVE-2021-33195)
- golang: net/http/httputil: ReverseProxy forwards connection headers if first one is empty (CVE-2021-33197)
- golang: math/big.Rat: may cause a panic or an unrecoverable fatal error if passed inputs with very large exponents (CVE-2021-33198)
- golang: encoding/xml: infinite loop when using xml.NewTokenDecoder with a custom TokenReader (CVE-2021-27918)
- golang: net/http: panic in ReadRequest and ReadResponse when reading a very large header (CVE-2021-31525)
- golang: archive/zip: malformed archive may cause panic or memory exhaustion (CVE-2021-33196)
It was found that CVE-2021-27918, CVE-2021-31525, and CVE-2021-33196 were incorrectly mentioned as fixed in the RHSA for Serverless client kn 1.16.0. Bugs fixed (https://bugzilla.redhat.com/):
1983596 - CVE-2021-34558 golang: crypto/tls: certificate of wrong type is causing TLS client to panic 1983651 - Release of OpenShift Serverless Serving 1.17.0 1983654 - Release of OpenShift Serverless Eventing 1.17.0 1989564 - CVE-2021-33195 golang: net: lookup functions may return invalid host names 1989570 - CVE-2021-33197 golang: net/http/httputil: ReverseProxy forwards connection headers if first one is empty 1989575 - CVE-2021-33198 golang: math/big.Rat: may cause a panic or an unrecoverable fatal error if passed inputs with very large exponents 1992955 - CVE-2021-3703 serverless: incomplete fix for CVE-2021-27918 / CVE-2021-31525 / CVE-2021-33196
Show details on source website{ "@context": { "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#", "affected_products": { "@id": "https://www.variotdbs.pl/ref/affected_products" }, "configurations": { "@id": "https://www.variotdbs.pl/ref/configurations" }, "credits": { "@id": "https://www.variotdbs.pl/ref/credits" }, "cvss": { "@id": "https://www.variotdbs.pl/ref/cvss/" }, "description": { "@id": "https://www.variotdbs.pl/ref/description/" }, "exploit_availability": { "@id": "https://www.variotdbs.pl/ref/exploit_availability/" }, "external_ids": { "@id": "https://www.variotdbs.pl/ref/external_ids/" }, "iot": { "@id": "https://www.variotdbs.pl/ref/iot/" }, "iot_taxonomy": { "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/" }, "patch": { "@id": "https://www.variotdbs.pl/ref/patch/" }, "problemtype_data": { "@id": "https://www.variotdbs.pl/ref/problemtype_data/" }, "references": { "@id": "https://www.variotdbs.pl/ref/references/" }, "sources": { "@id": "https://www.variotdbs.pl/ref/sources/" }, "sources_release_date": { "@id": "https://www.variotdbs.pl/ref/sources_release_date/" }, "sources_update_date": { "@id": "https://www.variotdbs.pl/ref/sources_update_date/" }, "threat_type": { "@id": "https://www.variotdbs.pl/ref/threat_type/" }, "title": { "@id": "https://www.variotdbs.pl/ref/title/" }, "type": { "@id": "https://www.variotdbs.pl/ref/type/" } }, "@id": "https://www.variotdbs.pl/vuln/VAR-202103-1463", "affected_products": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/affected_products#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "model": "storagegrid", "scope": "eq", "trust": 2.0, "vendor": "netapp", "version": null }, { "model": "capture client", "scope": "lt", "trust": 1.0, "vendor": "sonicwall", "version": "3.6.24" }, { "model": "jd edwards enterpriseone tools", "scope": "lt", "trust": 
1.0, "vendor": "oracle", "version": "9.2.6.0" }, { "model": "mysql server", "scope": "lte", "trust": 1.0, "vendor": "oracle", "version": "5.7.33" }, { "model": "web gateway", "scope": "eq", "trust": 1.0, "vendor": "mcafee", "version": "8.2.19" }, { "model": "linux", "scope": "eq", "trust": 1.0, "vendor": "windriver", "version": "18.0" }, { "model": "node.js", "scope": "lt", "trust": 1.0, "vendor": "nodejs", "version": "14.16.1" }, { "model": "node.js", "scope": "gte", "trust": 1.0, "vendor": "nodejs", "version": "15.0.0" }, { "model": "mysql enterprise monitor", "scope": "lte", "trust": 1.0, "vendor": "oracle", "version": "8.0.23" }, { "model": "node.js", "scope": "lt", "trust": 1.0, "vendor": "nodejs", "version": "15.14.0" }, { "model": "node.js", "scope": "lt", "trust": 1.0, "vendor": "nodejs", "version": "10.24.1" }, { "model": "weblogic server", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "14.1.1.0.0" }, { "model": "graalvm", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "21.0.0.2" }, { "model": "web gateway cloud service", "scope": "eq", "trust": 1.0, "vendor": "mcafee", "version": "10.1.1" }, { "model": "oncommand workflow automation", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "secure global desktop", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "5.6" }, { "model": "nessus agent", "scope": "gte", "trust": 1.0, "vendor": "tenable", "version": "8.2.1" }, { "model": "nessus network monitor", "scope": "eq", "trust": 1.0, "vendor": "tenable", "version": "5.12.0" }, { "model": "node.js", "scope": "gte", "trust": 1.0, "vendor": "nodejs", "version": "14.0.0" }, { "model": "weblogic server", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "12.2.1.4.0" }, { "model": "openssl", "scope": "lt", "trust": 1.0, "vendor": "openssl", "version": "1.1.1k" }, { "model": "ontap select deploy administration utility", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { 
"model": "commerce guided search", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "11.3.2" }, { "model": "mysql workbench", "scope": "lte", "trust": 1.0, "vendor": "oracle", "version": "8.0.23" }, { "model": "sonicos", "scope": "lte", "trust": 1.0, "vendor": "sonicwall", "version": "7.0.1-r1456" }, { "model": "node.js", "scope": "gte", "trust": 1.0, "vendor": "nodejs", "version": "10.0.0" }, { "model": "linux", "scope": "eq", "trust": 1.0, "vendor": "windriver", "version": null }, { "model": "graalvm", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "20.3.1.2" }, { "model": "linux", "scope": "eq", "trust": 1.0, "vendor": "windriver", "version": "19.0" }, { "model": "graalvm", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "19.3.5" }, { "model": "web gateway cloud service", "scope": "eq", "trust": 1.0, "vendor": "mcafee", "version": "8.2.19" }, { "model": "web gateway cloud service", "scope": "eq", "trust": 1.0, "vendor": "mcafee", "version": "9.2.10" }, { "model": "peoplesoft enterprise peopletools", "scope": "gte", "trust": 1.0, "vendor": "oracle", "version": "8.57" }, { "model": "node.js", "scope": "gte", "trust": 1.0, "vendor": "nodejs", "version": "12.0.0" }, { "model": "secure backup", "scope": "lt", "trust": 1.0, "vendor": "oracle", "version": "18.1.0.1.0" }, { "model": "fedora", "scope": "eq", "trust": 1.0, "vendor": "fedoraproject", "version": "34" }, { "model": "mysql server", "scope": "gte", "trust": 1.0, "vendor": "oracle", "version": "8.0.15" }, { "model": "linux", "scope": "eq", "trust": 1.0, "vendor": "windriver", "version": "17.0" }, { "model": "santricity smi-s provider", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "enterprise manager for storage management", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "13.4.0.0" }, { "model": "mysql server", "scope": "lte", "trust": 1.0, "vendor": "oracle", "version": "8.0.23" }, { "model": "jd edwards world security", "scope": 
"eq", "trust": 1.0, "vendor": "oracle", "version": "a9.4" }, { "model": "email security", "scope": "lt", "trust": 1.0, "vendor": "sonicwall", "version": "10.0.11" }, { "model": "peoplesoft enterprise peopletools", "scope": "lte", "trust": 1.0, "vendor": "oracle", "version": "8.59" }, { "model": "openssl", "scope": "gte", "trust": 1.0, "vendor": "openssl", "version": "1.1.1h" }, { "model": "cloud volumes ontap mediator", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "freebsd", "scope": "eq", "trust": 1.0, "vendor": "freebsd", "version": "12.2" }, { "model": "web gateway", "scope": "eq", "trust": 1.0, "vendor": "mcafee", "version": "10.1.1" }, { "model": "nessus agent", "scope": "lte", "trust": 1.0, "vendor": "tenable", "version": "8.2.3" }, { "model": "mysql connectors", "scope": "lte", "trust": 1.0, "vendor": "oracle", "version": "8.0.23" }, { "model": "sma100", "scope": "lt", "trust": 1.0, "vendor": "sonicwall", "version": "10.2.1.0-17sv" }, { "model": "node.js", "scope": "lt", "trust": 1.0, "vendor": "nodejs", "version": "12.22.1" }, { "model": "nessus network monitor", "scope": "eq", "trust": 1.0, "vendor": "tenable", "version": "5.11.1" }, { "model": "nessus network monitor", "scope": "eq", "trust": 1.0, "vendor": "tenable", "version": "5.12.1" }, { "model": "nessus network monitor", "scope": "eq", "trust": 1.0, "vendor": "tenable", "version": "5.13.0" }, { "model": "nessus", "scope": "lte", "trust": 1.0, "vendor": "tenable", "version": "8.13.1" }, { "model": "web gateway", "scope": "eq", "trust": 1.0, "vendor": "mcafee", "version": "9.2.10" }, { "model": "nessus network monitor", "scope": "eq", "trust": 1.0, "vendor": "tenable", "version": "5.11.0" } ], "sources": [ { "db": "NVD", "id": "CVE-2021-3450" } ] }, "configurations": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/configurations#", "children": { "@container": "@list" }, "cpe_match": { "@container": "@list" }, "data": { "@container": "@list" }, "nodes": { 
"@container": "@list" } }, "data": [ { "CVE_data_version": "4.0", "nodes": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:openssl:openssl:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "1.1.1k", "versionStartIncluding": "1.1.1h", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:freebsd:freebsd:12.2:p1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:freebsd:freebsd:12.2:p2:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:freebsd:freebsd:12.2:-:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:santricity_smi-s_provider_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:santricity_smi-s_provider:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:storagegrid_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:storagegrid:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:windriver:linux:-:*:*:*:cd:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:windriver:linux:18.0:*:*:*:lts:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:windriver:linux:19.0:*:*:*:lts:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:windriver:linux:17.0:*:*:*:lts:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": 
"cpe:2.3:a:netapp:oncommand_workflow_automation:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:storagegrid:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:ontap_select_deploy_administration_utility:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:cloud_volumes_ontap_mediator:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:34:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:tenable:nessus_agent:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "8.2.3", "versionStartIncluding": "8.2.1", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:tenable:nessus:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "8.13.1", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:tenable:nessus_network_monitor:5.11.1:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:tenable:nessus_network_monitor:5.12.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:tenable:nessus_network_monitor:5.12.1:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:tenable:nessus_network_monitor:5.13.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:tenable:nessus_network_monitor:5.11.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:oracle:jd_edwards_world_security:a9.4:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:weblogic_server:12.2.1.4.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:weblogic_server:14.1.1.0.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": 
"cpe:2.3:a:oracle:enterprise_manager_for_storage_management:13.4.0.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:secure_global_desktop:5.6:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:graalvm:20.3.1.2:*:*:*:enterprise:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:graalvm:21.0.0.2:*:*:*:enterprise:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:graalvm:19.3.5:*:*:*:enterprise:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:mysql_server:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "8.0.23", "versionStartIncluding": "8.0.15", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:mysql_server:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "5.7.33", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:mysql_workbench:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "8.0.23", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:commerce_guided_search:11.3.2:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:mysql_connectors:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "8.0.23", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:jd_edwards_enterpriseone_tools:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "9.2.6.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:mysql_enterprise_monitor:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "8.0.23", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:secure_backup:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "18.1.0.1.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:peoplesoft_enterprise_peopletools:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "8.59", "versionStartIncluding": "8.57", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": 
"cpe:2.3:a:mcafee:web_gateway_cloud_service:10.1.1:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:mcafee:web_gateway_cloud_service:9.2.10:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:mcafee:web_gateway_cloud_service:8.2.19:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:mcafee:web_gateway:10.1.1:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:mcafee:web_gateway:9.2.10:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:mcafee:web_gateway:8.2.19:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:sonicwall:sma100_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "10.2.1.0-17sv", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:sonicwall:sma100:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:sonicwall:sonicos:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "7.0.1-r1456", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:sonicwall:email_security:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "10.0.11", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:sonicwall:capture_client:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "3.6.24", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:nodejs:node.js:*:*:*:*:-:*:*:*", "cpe_name": [], "versionEndExcluding": "15.14.0", "versionStartIncluding": "15.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:nodejs:node.js:*:*:*:*:-:*:*:*", "cpe_name": [], "versionEndExcluding": "14.16.1", "versionStartIncluding": "14.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:nodejs:node.js:*:*:*:*:-:*:*:*", "cpe_name": [], 
"versionEndExcluding": "12.22.1", "versionStartIncluding": "12.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:nodejs:node.js:*:*:*:*:-:*:*:*", "cpe_name": [], "versionEndExcluding": "10.24.1", "versionStartIncluding": "10.0.0", "vulnerable": true } ], "operator": "OR" } ] } ], "sources": [ { "db": "NVD", "id": "CVE-2021-3450" } ] }, "credits": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/credits#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Red Hat", "sources": [ { "db": "PACKETSTORM", "id": "162694" }, { "db": "PACKETSTORM", "id": "163747" }, { "db": "PACKETSTORM", "id": "162383" }, { "db": "PACKETSTORM", "id": "166279" }, { "db": "PACKETSTORM", "id": "162183" }, { "db": "PACKETSTORM", "id": "162337" }, { "db": "PACKETSTORM", "id": "162196" }, { "db": "PACKETSTORM", "id": "162201" }, { "db": "PACKETSTORM", "id": "164192" } ], "trust": 0.9 }, "cve": "CVE-2021-3450", "cvss": { "@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2" }, "cvssV3": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, "severity": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": "https://www.variotdbs.pl/ref/cvss/severity" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "cvssV2": [ { "acInsufInfo": false, "accessComplexity": "MEDIUM", "accessVector": "NETWORK", "authentication": "NONE", "author": "NVD", "availabilityImpact": "NONE", "baseScore": 5.8, "confidentialityImpact": "PARTIAL", "exploitabilityScore": 8.6, "impactScore": 4.9, "integrityImpact": "PARTIAL", "obtainAllPrivilege": false, "obtainOtherPrivilege": false, 
"obtainUserPrivilege": false, "severity": "MEDIUM", "trust": 1.0, "userInteractionRequired": false, "vectorString": "AV:N/AC:M/Au:N/C:P/I:P/A:N", "version": "2.0" }, { "accessComplexity": "MEDIUM", "accessVector": "NETWORK", "authentication": "NONE", "author": "VULHUB", "availabilityImpact": "NONE", "baseScore": 5.8, "confidentialityImpact": "PARTIAL", "exploitabilityScore": 8.6, "id": "VHN-388430", "impactScore": 4.9, "integrityImpact": "PARTIAL", "severity": "MEDIUM", "trust": 0.1, "vectorString": "AV:N/AC:M/AU:N/C:P/I:P/A:N", "version": "2.0" }, { "acInsufInfo": null, "accessComplexity": "MEDIUM", "accessVector": "NETWORK", "authentication": "NONE", "author": "VULMON", "availabilityImpact": "NONE", "baseScore": 5.8, "confidentialityImpact": "PARTIAL", "exploitabilityScore": 8.6, "id": "CVE-2021-3450", "impactScore": 4.9, "integrityImpact": "PARTIAL", "obtainAllPrivilege": null, "obtainOtherPrivilege": null, "obtainUserPrivilege": null, "severity": "MEDIUM", "trust": 0.1, "userInteractionRequired": null, "vectorString": "AV:N/AC:M/Au:N/C:P/I:P/A:N", "version": "2.0" } ], "cvssV3": [ { "attackComplexity": "HIGH", "attackVector": "NETWORK", "author": "NVD", "availabilityImpact": "NONE", "baseScore": 7.4, "baseSeverity": "HIGH", "confidentialityImpact": "HIGH", "exploitabilityScore": 2.2, "impactScore": 5.2, "integrityImpact": "HIGH", "privilegesRequired": "NONE", "scope": "UNCHANGED", "trust": 1.0, "userInteraction": "NONE", "vectorString": "CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:H/I:H/A:N", "version": "3.1" } ], "severity": [ { "author": "NVD", "id": "CVE-2021-3450", "trust": 1.0, "value": "HIGH" }, { "author": "VULHUB", "id": "VHN-388430", "trust": 0.1, "value": "MEDIUM" }, { "author": "VULMON", "id": "CVE-2021-3450", "trust": 0.1, "value": "MEDIUM" } ] } ], "sources": [ { "db": "VULHUB", "id": "VHN-388430" }, { "db": "VULMON", "id": "CVE-2021-3450" }, { "db": "NVD", "id": "CVE-2021-3450" } ] }, "description": { "@context": { "@vocab": 
"https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "The X509_V_FLAG_X509_STRICT flag enables additional security checks of the certificates present in a certificate chain. It is not set by default. Starting from OpenSSL version 1.1.1h a check to disallow certificates in the chain that have explicitly encoded elliptic curve parameters was added as an additional strict check. An error in the implementation of this check meant that the result of a previous check to confirm that certificates in the chain are valid CA certificates was overwritten. This effectively bypasses the check that non-CA certificates must not be able to issue other certificates. If a \"purpose\" has been configured then there is a subsequent opportunity for checks that the certificate is a valid CA. All of the named \"purpose\" values implemented in libcrypto perform this check. Therefore, where a purpose is set the certificate chain will still be rejected even when the strict flag has been used. A purpose is set by default in libssl client and server certificate verification routines, but it can be overridden or removed by an application. In order to be affected, an application must explicitly set the X509_V_FLAG_X509_STRICT verification flag and either not set a purpose for the certificate verification or, in the case of TLS client or server applications, override the default purpose. OpenSSL versions 1.1.1h and newer are affected by this issue. Users of these versions should upgrade to OpenSSL 1.1.1k. OpenSSL 1.0.2 is not impacted by this issue. Fixed in OpenSSL 1.1.1k (Affected 1.1.1h-1.1.1j). OpenSSL is an open source general encryption library of the Openssl team that can implement the Secure Sockets Layer (SSLv2/v3) and Transport Layer Security (TLSv1) protocols. 
The product supports a variety of encryption algorithms, including symmetric ciphers, hash algorithms, secure hash algorithms, etc. On March 25, 2021, the OpenSSL Project released a security advisory, OpenSSL Security Advisory [25 March 2021], that disclosed two vulnerabilities. \nExploitation of these vulnerabilities could allow a malicious user to use a valid non-certificate authority (CA) certificate to act as a CA and sign a certificate for an arbitrary organization, user or device, or to cause a denial of service (DoS) condition. See\nthe following Release Notes documentation, which will be updated shortly\nfor this release, for additional details about this release:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_mana\ngement_for_kubernetes/2.3/html/release_notes/\n\nSecurity:\n\n* fastify-reply-from: crafted URL allows prefix scape of the proxied\nbackend service (CVE-2021-21321)\n\n* fastify-http-proxy: crafted URL allows prefix scape of the proxied\nbackend service (CVE-2021-21322)\n\n* nodejs-netmask: improper input validation of octal input data\n(CVE-2021-28918)\n\n* redis: Integer overflow via STRALGO LCS command (CVE-2021-29477)\n\n* redis: Integer overflow via COPY command for large intsets\n(CVE-2021-29478)\n\n* nodejs-glob-parent: Regular expression denial of service (CVE-2020-28469)\n\n* nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions\n(CVE-2020-28500)\n\n* golang.org/x/text: Panic in language.ParseAcceptLanguage while parsing\n- -u- extension (CVE-2020-28851)\n\n* golang.org/x/text: Panic in language.ParseAcceptLanguage while processing\nbcp47 tag (CVE-2020-28852)\n\n* nodejs-ansi_up: XSS due to insufficient URL sanitization (CVE-2021-3377)\n\n* oras: zip-slip vulnerability via oras-pull (CVE-2021-21272)\n\n* redis: integer overflow when configurable limit for maximum supported\nbulk input size is too big on 32-bit platforms (CVE-2021-21309)\n\n* nodejs-lodash: command injection via template 
(CVE-2021-23337)\n\n* nodejs-hosted-git-info: Regular Expression denial of service via\nshortcutMatch in fromUrl() (CVE-2021-23362)\n\n* browserslist: parsing of invalid queries could result in Regular\nExpression Denial of Service (ReDoS) (CVE-2021-23364)\n\n* nodejs-postcss: Regular expression denial of service during source map\nparsing (CVE-2021-23368)\n\n* nodejs-handlebars: Remote code execution when compiling untrusted compile\ntemplates with strict:true option (CVE-2021-23369)\n\n* nodejs-postcss: ReDoS via getAnnotationURL() and loadAnnotation() in\nlib/previous-map.js (CVE-2021-23382)\n\n* nodejs-handlebars: Remote code execution when compiling untrusted compile\ntemplates with compat:true option (CVE-2021-23383)\n\n* openssl: integer overflow in CipherUpdate (CVE-2021-23840)\n\n* openssl: NULL pointer dereference in X509_issuer_and_serial_hash()\n(CVE-2021-23841)\n\n* nodejs-ua-parser-js: ReDoS via malicious User-Agent header\n(CVE-2021-27292)\n\n* grafana: snapshot feature allow an unauthenticated remote attacker to\ntrigger a DoS via a remote API call (CVE-2021-27358)\n\n* nodejs-is-svg: ReDoS via malicious string (CVE-2021-28092)\n\n* nodejs-netmask: incorrectly parses an IP address that has octal integer\nwith invalid character (CVE-2021-29418)\n\n* ulikunitz/xz: Infinite loop in readUvarint allows for denial of service\n(CVE-2021-29482)\n\n* normalize-url: ReDoS for data URLs (CVE-2021-33502)\n\n* nodejs-trim-newlines: ReDoS in .end() method (CVE-2021-33623)\n\n* nodejs-path-parse: ReDoS via splitDeviceRe, splitTailRe and splitPathRe\n(CVE-2021-23343)\n\n* html-parse-stringify: Regular Expression DoS (CVE-2021-23346)\n\n* openssl: incorrect SSLv2 rollback protection (CVE-2021-23839)\n\nFor more details about the security issues, including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npages listed in the References section. 
\n\nBugs:\n\n* RFE Make the source code for the endpoint-metrics-operator public (BZ#\n1913444)\n\n* cluster became offline after apiserver health check (BZ# 1942589)\n\n3. Bugs fixed (https://bugzilla.redhat.com/):\n\n1913333 - CVE-2020-28851 golang.org/x/text: Panic in language.ParseAcceptLanguage while parsing -u- extension\n1913338 - CVE-2020-28852 golang.org/x/text: Panic in language.ParseAcceptLanguage while processing bcp47 tag\n1913444 - RFE Make the source code for the endpoint-metrics-operator public\n1921286 - CVE-2021-21272 oras: zip-slip vulnerability via oras-pull\n1927520 - RHACM 2.3.0 images\n1928937 - CVE-2021-23337 nodejs-lodash: command injection via template\n1928954 - CVE-2020-28500 nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions\n1930294 - CVE-2021-23839 openssl: incorrect SSLv2 rollback protection\n1930310 - CVE-2021-23841 openssl: NULL pointer dereference in X509_issuer_and_serial_hash()\n1930324 - CVE-2021-23840 openssl: integer overflow in CipherUpdate\n1932634 - CVE-2021-21309 redis: integer overflow when configurable limit for maximum supported bulk input size is too big on 32-bit platforms\n1936427 - CVE-2021-3377 nodejs-ansi_up: XSS due to insufficient URL sanitization\n1939103 - CVE-2021-28092 nodejs-is-svg: ReDoS via malicious string\n1940196 - View Resource YAML option shows 404 error when reviewing a Subscription for an application\n1940613 - CVE-2021-27292 nodejs-ua-parser-js: ReDoS via malicious User-Agent header\n1941024 - CVE-2021-27358 grafana: snapshot feature allow an unauthenticated remote attacker to trigger a DoS via a remote API call\n1941675 - CVE-2021-23346 html-parse-stringify: Regular Expression DoS\n1942178 - CVE-2021-21321 fastify-reply-from: crafted URL allows prefix scape of the proxied backend service\n1942182 - CVE-2021-21322 fastify-http-proxy: crafted URL allows prefix scape of the proxied backend service\n1942589 - cluster became offline after apiserver health check\n1943208 - 
CVE-2021-23362 nodejs-hosted-git-info: Regular Expression denial of service via shortcutMatch in fromUrl()\n1944822 - CVE-2021-29418 nodejs-netmask: incorrectly parses an IP address that has octal integer with invalid character\n1944827 - CVE-2021-28918 nodejs-netmask: improper input validation of octal input data\n1945459 - CVE-2020-28469 nodejs-glob-parent: Regular expression denial of service\n1948761 - CVE-2021-23369 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with strict:true option\n1948763 - CVE-2021-23368 nodejs-postcss: Regular expression denial of service during source map parsing\n1954150 - CVE-2021-23382 nodejs-postcss: ReDoS via getAnnotationURL() and loadAnnotation() in lib/previous-map.js\n1954368 - CVE-2021-29482 ulikunitz/xz: Infinite loop in readUvarint allows for denial of service\n1955619 - CVE-2021-23364 browserslist: parsing of invalid queries could result in Regular Expression Denial of Service (ReDoS)\n1956688 - CVE-2021-23383 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with compat:true option\n1956818 - CVE-2021-23343 nodejs-path-parse: ReDoS via splitDeviceRe, splitTailRe and splitPathRe\n1957410 - CVE-2021-29477 redis: Integer overflow via STRALGO LCS command\n1957414 - CVE-2021-29478 redis: Integer overflow via COPY command for large intsets\n1964461 - CVE-2021-33502 normalize-url: ReDoS for data URLs\n1966615 - CVE-2021-33623 nodejs-trim-newlines: ReDoS in .end() method\n1968122 - clusterdeployment fails because hiveadmission sc does not have correct permissions\n1972703 - Subctl fails to join cluster, since it cannot auto-generate a valid cluster id\n1983131 - Defragmenting an etcd member doesn\u0027t reduce the DB size (7.5GB) on a setup with ~1000 spoke clusters\n\n5. \n\nBug fix:\n\n* RHACM 2.0.10 images (BZ #1940452)\n\n3. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1940452 - RHACM 2.0.10 images\n1944286 - CVE-2021-23358 nodejs-underscore: Arbitrary code execution via the template function\n\n5. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n=====================================================================\n Red Hat Security Advisory\n\nSynopsis: Moderate: OpenShift Container Platform 4.10.3 security update\nAdvisory ID: RHSA-2022:0056-01\nProduct: Red Hat OpenShift Enterprise\nAdvisory URL: https://access.redhat.com/errata/RHSA-2022:0056\nIssue date: 2022-03-10\nCVE Names: CVE-2014-3577 CVE-2016-10228 CVE-2017-14502 \n CVE-2018-20843 CVE-2018-1000858 CVE-2019-8625 \n CVE-2019-8710 CVE-2019-8720 CVE-2019-8743 \n CVE-2019-8764 CVE-2019-8766 CVE-2019-8769 \n CVE-2019-8771 CVE-2019-8782 CVE-2019-8783 \n CVE-2019-8808 CVE-2019-8811 CVE-2019-8812 \n CVE-2019-8813 CVE-2019-8814 CVE-2019-8815 \n CVE-2019-8816 CVE-2019-8819 CVE-2019-8820 \n CVE-2019-8823 CVE-2019-8835 CVE-2019-8844 \n CVE-2019-8846 CVE-2019-9169 CVE-2019-13050 \n CVE-2019-13627 CVE-2019-14889 CVE-2019-15903 \n CVE-2019-19906 CVE-2019-20454 CVE-2019-20807 \n CVE-2019-25013 CVE-2020-1730 CVE-2020-3862 \n CVE-2020-3864 CVE-2020-3865 CVE-2020-3867 \n CVE-2020-3868 CVE-2020-3885 CVE-2020-3894 \n CVE-2020-3895 CVE-2020-3897 CVE-2020-3899 \n CVE-2020-3900 CVE-2020-3901 CVE-2020-3902 \n CVE-2020-8927 CVE-2020-9802 CVE-2020-9803 \n CVE-2020-9805 CVE-2020-9806 CVE-2020-9807 \n CVE-2020-9843 CVE-2020-9850 CVE-2020-9862 \n CVE-2020-9893 CVE-2020-9894 CVE-2020-9895 \n CVE-2020-9915 CVE-2020-9925 CVE-2020-9952 \n CVE-2020-10018 CVE-2020-11793 CVE-2020-13434 \n CVE-2020-14391 CVE-2020-15358 CVE-2020-15503 \n CVE-2020-25660 CVE-2020-25677 CVE-2020-27618 \n CVE-2020-27781 CVE-2020-29361 CVE-2020-29362 \n CVE-2020-29363 CVE-2021-3121 CVE-2021-3326 \n CVE-2021-3449 CVE-2021-3450 CVE-2021-3516 \n CVE-2021-3517 CVE-2021-3518 CVE-2021-3520 \n CVE-2021-3521 CVE-2021-3537 CVE-2021-3541 \n CVE-2021-3733 CVE-2021-3749 CVE-2021-20305 \n 
CVE-2021-21684 CVE-2021-22946 CVE-2021-22947 \n CVE-2021-25215 CVE-2021-27218 CVE-2021-30666 \n CVE-2021-30761 CVE-2021-30762 CVE-2021-33928 \n CVE-2021-33929 CVE-2021-33930 CVE-2021-33938 \n CVE-2021-36222 CVE-2021-37750 CVE-2021-39226 \n CVE-2021-41190 CVE-2021-43813 CVE-2021-44716 \n CVE-2021-44717 CVE-2022-0532 CVE-2022-21673 \n CVE-2022-24407 \n=====================================================================\n\n1. Summary:\n\nRed Hat OpenShift Container Platform release 4.10.3 is now available with\nupdates to packages and images that fix several bugs and add enhancements. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Description:\n\nRed Hat OpenShift Container Platform is Red Hat\u0027s cloud computing\nKubernetes application platform solution designed for on-premise or private\ncloud deployments. \n\nThis advisory contains the container images for Red Hat OpenShift Container\nPlatform 4.10.3. See the following advisory for the RPM packages for this\nrelease:\n\nhttps://access.redhat.com/errata/RHSA-2022:0055\n\nSpace precludes documenting all of the container images in this advisory. 
\nSee the following Release Notes documentation, which will be updated\nshortly for this release, for details about these changes:\n\nhttps://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html\n\nSecurity Fix(es):\n\n* gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index\nvalidation (CVE-2021-3121)\n* grafana: Snapshot authentication bypass (CVE-2021-39226)\n* golang: net/http: limit growth of header canonicalization cache\n(CVE-2021-44716)\n* nodejs-axios: Regular expression denial of service in trim function\n(CVE-2021-3749)\n* golang: syscall: don\u0027t close fd 0 on ForkExec error (CVE-2021-44717)\n* grafana: Forward OAuth Identity Token can allow users to access some data\nsources (CVE-2022-21673)\n* grafana: directory traversal vulnerability (CVE-2021-43813)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\nYou may download the oc tool and use it to inspect release image metadata\nas follows:\n\n(For x86_64 architecture)\n\n$ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.10.3-x86_64\n\nThe image digest is\nsha256:7ffe4cd612be27e355a640e5eec5cd8f923c1400d969fd590f806cffdaabcc56\n\n(For s390x architecture)\n\n $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.10.3-s390x\n\nThe image digest is\nsha256:4cf21a9399da1ce8427246f251ae5dedacfc8c746d2345f9cfe039ed9eda3e69\n\n(For ppc64le architecture)\n\n $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.10.3-ppc64le\n\nThe image digest is\nsha256:4ee571da1edf59dfee4473aa4604aba63c224bf8e6bcf57d048305babbbde93c\n\nAll OpenShift Container Platform 4.10 users are advised to upgrade to these\nupdated packages and images when they are available in the appropriate\nrelease channel. To check for available updates, use the OpenShift Console\nor the CLI oc command. 
Instructions for upgrading a cluster are available
at
https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html

3. Solution:

For OpenShift Container Platform 4.10 see the following documentation,
which will be updated shortly for this release, for moderate instructions
on how to upgrade your cluster and fully apply this asynchronous errata
update:

https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html

Details on how to access this content are available at
https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html

4. Bugs fixed (https://bugzilla.redhat.com/):

1808240 - Always return metrics value for pods under the user's namespace
1815189 - feature flagged UI does not always become available after operator installation
1825034 - e2e: Mock CSI tests fail on IBM ROKS clusters
1826225 - edge terminated h2 (gRPC) connections need a haproxy template change to work correctly
1860774 - csr for vSphere egress nodes were not approved automatically during cert renewal
1878106 - token inactivity timeout is not shortened after oauthclient/oauth config values are lowered
1878925 - 'oc adm upgrade --to ...' rejects versions which occur only in history, while the cluster-version operator supports history fallback
1880738 - origin e2e test deletes original worker
1882983 - oVirt csi driver should refuse to provision RWX and ROX PV
1886450 - Keepalived router id check not documented for RHV/VMware IPI
1889488 - The metrics endpoint for the Scheduler is not protected by RBAC
1894431 - Router pods fail to boot if the SSL certificate applied is missing an empty line at the bottom
1896474 - Path based routing is broken for some combinations
1897431 - CIDR support for additional network attachment with the bridge CNI plug-in
1903408 - NodePort externalTrafficPolicy does not work for ovn-kubernetes
1907433 - Excessive logging in image operator
1909906 - The router fails with PANIC error when stats port already in use
1911173 - [MSTR-998] Many charts' legend names show {{}} instead of words
1914053 - pods assigned with Multus whereabouts IP get stuck in ContainerCreating state after node rebooting.
1916169 - a reboot while MCO is applying changes leaves the node in undesirable state and MCP looks fine (UPDATED=true)
1917893 - [ovirt] install fails: due to terraform error "Cannot attach Virtual Disk: Disk is locked" on vm resource
1921627 - GCP UPI installation failed due to exceeding gcp limitation of instance group name
1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation
1926522 - oc adm catalog does not clean temporary files
1927478 - Default CatalogSources deployed by marketplace do not have toleration for tainted nodes.
1928141 - kube-storage-version-migrator constantly reporting type "Upgradeable" status Unknown
1928285 - [LSO][OCS][arbiter] OCP Console shows no results while in fact underlying setup of LSO localvolumeset and it's storageclass is not yet finished, confusing users
1931594 - [sig-cli] oc --request-timeout works as expected fails frequently on s390x
1933847 - Prometheus goes unavailable (both instances down) during 4.8 upgrade
1937085 - RHV UPI inventory playbook missing guarantee_memory
1937196 - [aws ebs csi driver] events for block volume expansion may cause confusion
1938236 - vsphere-problem-detector does not support overriding log levels via storage CR
1939401 - missed labels for CMO/openshift-state-metric/telemeter-client/thanos-querier pods
1939435 - Setting an IPv6 address in noProxy field causes error in openshift installer
1939552 - [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
1942913 - ThanosSidecarUnhealthy isn't resilient to WAL replays.
1943363 - [ovn] CNO should gracefully terminate ovn-northd
1945274 - ostree-finalize-staged.service failed while upgrading a rhcos node to 4.6.17
1948080 - authentication should not set Available=False APIServices_Error with 503s
1949262 - Prometheus Statefulsets should have 2 replicas and hard affinity set
1949672 - [GCP] Update 4.8 UPI template to match ignition version: 3.2.0
1950827 - [LSO] localvolumediscoveryresult name is not friendly to customer
1952576 - csv_succeeded metric not present in olm-operator for all successful CSVs
1953264 - "remote error: tls: bad certificate" logs in prometheus-operator container
1955300 - Machine config operator reports unavailable for 23m during upgrade
1955489 - Alertmanager Statefulsets should have 2 replicas and hard affinity set
1955490 - Thanos ruler Statefulsets should have 2 replicas and hard affinity set
1955544 - [IPI][OSP] densed master-only installation with 0 workers fails due to missing worker security group on masters
1956496 - Needs SR-IOV Docs Upstream
1956739 - Permission for authorized_keys for core user changes from core user to root when changed the pull secret
1956776 - [vSphere] Installer should do pre-check to ensure user-provided network name is valid
1956964 - upload a boot-source to OpenShift virtualization using the console
1957547 - [RFE]VM name is not auto filled in dev console
1958349 - ovn-controller doesn't release the memory after cluster-density run
1959352 - [scale] failed to get pod annotation: timed out waiting for annotations
1960378 - icsp allows mirroring of registry root - install-config imageContentSources does not
1960674 - Broken test: [sig-imageregistry][Serial][Suite:openshift/registry/serial] Image signature workflow can push a signed image to openshift registry and verify it [Suite:openshift/conformance/serial]
1961317 - storage ClusterOperator does not declare ClusterRoleBindings in relatedObjects
1961391 - String updates
1961509 - DHCP daemon pod should have CPU and memory requests set but not limits
1962066 - Edit machine/machineset specs not working
1962206 - openshift-multus/dhcp-daemon set should meet platform requirements for update strategy that have maxUnavailable update of 10 or 33 percent
1963053 - `oc whoami --show-console` should show the web console URL, not the server api URL
1964112 - route SimpleAllocationPlugin: host name validation errors: spec.host: Invalid value: ... must be no more than 63 characters
1964327 - Support containers with name:tag@digest
1964789 - Send keys and disconnect does not work for VNC console
1965368 - ClusterQuotaAdmission received non-meta object - message constantly reported in OpenShift Container Platform 4.7
1966445 - Unmasking a service doesn't work if it masked using MCO
1966477 - Use GA version in KAS/OAS/OauthAS to avoid: "audit.k8s.io/v1beta1" is deprecated and will be removed in a future release, use "audit.k8s.io/v1" instead
1966521 - kube-proxy's userspace implementation consumes excessive CPU
1968364 - [Azure] when using ssh type ed25519 bootstrap fails to come up
1970021 - nmstate does not persist its configuration due to overlay systemd-connections-merged mount
1970218 - MCO writes incorrect file contents if compression field is specified
1970331 - [sig-auth][Feature:SCC][Early] should not have pod creation failures during install [Suite:openshift/conformance/parallel]
1970805 - Cannot create build when docker image url contains dir structure
1972033 - [azure] PV region node affinity is failure-domain.beta.kubernetes.io instead of topology.kubernetes.io
1972827 - image registry does not remain available during upgrade
1972962 - Should set the minimum value for the `--max-icsp-size` flag of `oc adm catalog mirror`
1973447 - ovn-dbchecker peak memory spikes to ~500MiB during cluster-density run
1975826 - ovn-kubernetes host directed traffic cannot be offloaded as CT zone 64000 is not established
1976301 - [ci] e2e-azure-upi is permafailing
1976399 - During the upgrade from OpenShift 4.5 to OpenShift 4.6 the election timers for the OVN north and south databases did not change.
1976674 - CCO didn't set Upgradeable to False when cco mode is configured to Manual on azure platform
1976894 - Unidling a StatefulSet does not work as expected
1977319 - [Hive] Remove stale cruft installed by CVO in earlier releases
1977414 - Build Config timed out waiting for condition 400: Bad Request
1977929 - [RFE] Display Network Attachment Definitions from openshift-multus namespace during OCS deployment via UI using Multus
1978528 - systemd-coredump started and failed intermittently for unknown reasons
1978581 - machine-config-operator: remove runlevel from mco namespace
1979562 - Cluster operators: don't show messages when neither progressing, degraded or unavailable
1979962 - AWS SDN Network Stress tests have not passed in 4.9 release-openshift-origin-installer-e2e-aws-sdn-network-stress-4.9
1979966 - OCP builds always fail when run on RHEL7 nodes
1981396 - Deleting pool inside pool page the pool stays in Ready phase in the heading
1981549 - Machine-config daemon does not recover from broken Proxy configuration
1981867 - [sig-cli] oc explain should contain proper fields description for special types [Suite:openshift/conformance/parallel]
1981941 - Terraform upgrade required in openshift-installer to resolve multiple issues
1982063 - 'Control Plane' is not translated in Simplified Chinese language in Home->Overview page
1982498 - Default registry credential path should be adjusted to use containers/auth.json for oc commands
1982662 - Workloads - DaemonSets - Add storage: i18n misses
1982726 - kube-apiserver audit logs show a lot of 404 errors for DELETE "*/secrets/encryption-config" on single node clusters
1983758 - upgrades are failing on disruptive tests
1983964 - Need Device plugin configuration for the NIC "needVhostNet" & "isRdma"
1984592 - global pull secret not working in OCP4.7.4+ for additional private registries
1985073 - new-in-4.8 ExtremelyHighIndividualControlPlaneCPU fires on some GCP update jobs
1985486 - Cluster Proxy not used during installation on OSP with Kuryr
1985724 - VM Details Page missing translations
1985838 - [OVN] CNO exportNetworkFlows does not clear collectors when deleted
1985933 - Downstream image registry recommendation
1985965 - oVirt CSI driver does not report volume stats
1986216 - [scale] SNO: Slow Pod recovery due to "timed out waiting for OVS port binding"
1986237 - "MachineNotYetDeleted" in Pending state , alert not fired
1986239 - crictl create fails with "PID namespace requested, but sandbox infra container invalid"
1986302 - console continues to fetch prometheus alert and silences for normal user
1986314 - Current MTV installation for KubeVirt import flow creates unusable Forklift UI
1986338 - error creating list of resources in Import YAML
1986502 - yaml multi file dnd duplicates previous dragged files
1986819 - fix string typos for hot-plug disks
1987044 - [OCPV48] Shutoff VM is being shown as "Starting" in WebUI when using spec.runStrategy Manual/RerunOnFailure
1987136 - Declare operatorframework.io/arch.* labels for all operators
1987257 - Go-http-client user-agent being used for oc adm mirror requests
1987263 - fsSpaceFillingUpWarningThreshold not aligned to Kubernetes Garbage Collection Threshold
1987445 - MetalLB integration: All gateway routers in the cluster answer ARP requests for LoadBalancer services IP
1988406 - SSH key dropped when selecting "Customize virtual machine" in UI
1988440 - Network operator changes ovnkube-config too early causing ovnkube-master pods to crashloop during cluster upgrade
1988483 - Azure drop ICMP need to frag FRAG when using OVN: openshift-apiserver becomes False after env runs some time due to communication between one master to pods on another master fails with "Unable to connect to the server"
1988879 - Virtual media based deployment fails on Dell servers due to pending Lifecycle Controller jobs
1989438 - expected replicas is wrong
1989502 - Developer Catalog is disappearing after short time
1989843 - 'More' and 'Show Less' functions are not translated on several page
1990014 - oc debug <pod-name> does not work for Windows pods
1990190 - e2e testing failed with basic manifest: reason/ExternalProvisioning waiting for a volume to be created
1990193 - 'more' and 'Show Less' is not being translated on Home -> Search page
1990255 - Partial or all of the Nodes/StorageClasses don't appear back on UI after text is removed from search bar
1990489 - etcdHighNumberOfFailedGRPCRequests fires only on metal env in CI
1990506 - Missing udev rules in initramfs for /dev/disk/by-id/scsi-* symlinks
1990556 - get-resources.sh doesn't honor the no_proxy settings even with no_proxy var
1990625 - Ironic agent registers with SLAAC address with privacy-stable
1990635 - CVO does not recognize the channel change if desired version and channel changed at the same time
1991067 - github.com can not be resolved inside pods where cluster is running on openstack.
1991573 - Enable typescript strictNullCheck on network-policies files
1991641 - Baremetal Cluster Operator still Available After Delete Provisioning
1991770 - The logLevel and operatorLogLevel values do not work with Cloud Credential Operator
1991819 - Misspelled word "ocurred" in oc inspect cmd
1991942 - Alignment and spacing fixes
1992414 - Two rootdisks show on storage step if 'This is a CD-ROM boot source' is checked
1992453 - The configMap failed to save on VM environment tab
1992466 - The button 'Save' and 'Reload' are not translated on vm environment tab
1992475 - The button 'Open console in New Window' and 'Disconnect' are not translated on vm console tab
1992509 - Could not customize boot source due to source PVC not found
1992541 - all the alert rules' annotations "summary" and "description" should comply with the OpenShift alerting guidelines
1992580 - storageProfile should stay with the same value by check/uncheck the apply button
1992592 - list-type missing in oauth.config.openshift.io for identityProviders breaking Server Side Apply
1992777 - [IBMCLOUD] Default "ibm_iam_authorization_policy" is not working as expected in all scenarios
1993364 - cluster destruction fails to remove router in BYON with Kuryr as primary network (even after BZ 1940159 got fixed)
1993376 - periodic-ci-openshift-release-master-ci-4.6-upgrade-from-stable-4.5-e2e-azure-upgrade is permfailing
1994094 - Some hardcodes are detected at the code level in OpenShift console components
1994142 - Missing required cloud config fields for IBM Cloud
1994733 - MetalLB: IP address is not assigned to service if there is duplicate IP address in two address pools
1995021 - resolv.conf and corefile sync slows down/stops after keepalived container restart
1995335 - [SCALE] ovnkube CNI: remove ovs flows check
1995493 - Add Secret to workload button and Actions button are not aligned on secret details page
1995531 - Create RDO-based Ironic image to be promoted to OKD
1995545 - Project drop-down amalgamates inside main screen while creating storage system for odf-operator
1995887 - [OVN]After reboot egress node, lr-policy-list was not correct, some duplicate records or missed internal IPs
1995924 - CMO should report `Upgradeable: false` when HA workload is incorrectly spread
1996023 - kubernetes.io/hostname values are larger than filter when create localvolumeset from webconsole
1996108 - Allow backwards compatibility of shared gateway mode to inject host-based routes into OVN
1996624 - 100% of the cco-metrics/cco-metrics targets in openshift-cloud-credential-operator namespace are down
1996630 - Fail to delete the first Authorized SSH Key input box on Advanced page
1996647 - Provide more useful degraded message in auth operator on DNS errors
1996736 - Large number of 501 lr-policies in INCI2 env
1996886 - timedout waiting for flows during pod creation and ovn-controller pegged on worker nodes
1996916 - Special Resource Operator(SRO) - Fail to deploy simple-kmod on GCP
1996928 - Enable default operator indexes on ARM
1997028 - prometheus-operator update removes env var support for thanos-sidecar
1997059 - Failed to create cluster in AWS us-east-1 region due to a local zone is used
1997226 - Ingresscontroller reconcilations failing but not shown in operator logs or status of ingresscontroller.
1997245 - "Subscription already exists in openshift-storage namespace" error message is seen while installing odf-operator via UI
1997269 - Have to refresh console to install kube-descheduler
1997478 - Storage operator is not available after reboot cluster instances
1997509 - flake: [sig-cli] oc builds new-build [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
1997967 - storageClass is not reserved from default wizard to customize wizard
1998035 - openstack IPI CI: custom var-lib-etcd.mount (ramdisk) unit is racing due to incomplete After/Before order
1998038 - [e2e][automation] add tests for UI for VM disk hot-plug
1998087 - Fix CephHealthCheck wrapping contents and add data-tests for HealthItem and SecondaryStatus
1998174 - Create storageclass gp3-csi after install ocp cluster on aws
1998183 - "r: Bad Gateway" info is improper
1998235 - Firefox warning: Cookie “csrf-token” will be soon rejected
1998377 - Filesystem table head is not full displayed in disk tab
1998378 - Virtual Machine is 'Not available' in Home -> Overview -> Cluster inventory
1998519 - Add fstype when create localvolumeset instance on web console
1998951 - Keepalived conf ingress peer on in Dual stack cluster contains both IPv6 and IPv4 addresses
1999076 - [UI] Page Not Found error when clicking on Storage link provided in Overview page
1999079 - creating pods before sriovnetworknodepolicy sync up succeed will cause node unschedulable
1999091 - Console update toast notification can appear multiple times
1999133 - removing and recreating static pod manifest leaves pod in error state
1999246 - .indexignore is not ingore when oc command load dc configuration
1999250 - ArgoCD in GitOps operator can't manage namespaces
1999255 - ovnkube-node always crashes out the first time it starts
1999261 - ovnkube-node log spam (and security token leak?)
1999309 - While installing odf-operator via UI, web console update pop-up navigates to OperatorHub -> Operator Installation page
1999314 - console-operator is slow to mark Degraded as False once console starts working
1999425 - kube-apiserver with "[SHOULD NOT HAPPEN] failed to update managedFields" err="failed to convert new object (machine.openshift.io/v1beta1, Kind=MachineHealthCheck)
1999556 - "master" pool should be updated before the CVO reports available at the new version occurred
1999578 - AWS EFS CSI tests are constantly failing
1999603 - Memory Manager allows Guaranteed QoS Pod with hugepages requested is exactly equal to the left over Hugepages
1999619 - cloudinit is malformatted if a user sets a password during VM creation flow
1999621 - Empty ssh_authorized_keys entry is added to VM's cloudinit if created from a customize flow
1999649 - MetalLB: Only one type of IP address can be assigned to service on dual stack cluster from a address pool that have both IPv4 and IPv6 addresses defined
1999668 - openshift-install destroy cluster panic's when given invalid credentials to cloud provider (Azure Stack Hub)
1999734 - IBM Cloud CIS Instance CRN missing in infrastructure manifest/resource
1999771 - revert "force cert rotation every couple days for development" in 4.10
1999784 - CVE-2021-3749 nodejs-axios: Regular expression denial of service in trim function
1999796 - Openshift Console `Helm` tab is not showing helm releases in a namespace when there is high number of deployments in the same namespace.
1999836 - Admin web-console inconsistent status summary of sparse ClusterOperator conditions
1999903 - Click "This is a CD-ROM boot source" ticking "Use template size PVC" on pvc upload form
1999983 - No way to clear upload error from template boot source
2000081 - [IPI baremetal] The metal3 pod failed to restart when switching from Disabled to Managed provisioning without specifying provisioningInterface parameter
2000096 - Git URL is not re-validated on edit build-config form reload
2000216 - Successfully imported ImageStreams are not resolved in DeploymentConfig
2000236 - Confusing usage message from dynkeepalived CLI
2000268 - Mark cluster unupgradable if vcenter, esxi versions or HW versions are unsupported
2000430 - bump cluster-api-provider-ovirt version in installer
2000450 - 4.10: Enable static PV multi-az test
2000490 - All critical alerts shipped by CMO should have links to a runbook
2000521 - Kube-apiserver CO degraded due to failed conditional check (ConfigObservationDegraded)
2000573 - Incorrect StorageCluster CR created and ODF cluster getting installed with 2 Zone OCP cluster
2000628 - ibm-flashsystem-storage-storagesystem got created without any warning even when the attempt was cancelled
2000651 - ImageStreamTag alias results in wrong tag and invalid link in Web Console
2000754 - IPerf2 tests should be lower
2000846 - Structure logs in the entire codebase of Local Storage Operator
2000872 - [tracker] container is not able to list on some directories within the nfs after upgrade to 4.7.24
2000877 - OCP ignores STOPSIGNAL in Dockerfile and sends SIGTERM
2000938 - CVO does not respect changes to a Deployment strategy
2000963 - 'Inline-volume (default fs)] volumes should store data' tests are failing on OKD with updated selinux-policy
2001008 - [MachineSets] CloneMode defaults to linkedClone, but I don't have snapshot and should be fullClone
2001240 - Remove response headers for downloads of binaries from OpenShift WebConsole
2001295 - Remove openshift:kubevirt-machine-controllers decleration from machine-api
2001317 - OCP Platform Quota Check - Inaccurate MissingQuota error
2001337 - Details Card in ODF Dashboard mentions OCS
2001339 - fix text content hotplug
2001413 - [e2e][automation] add/delete nic and disk to template
2001441 - Test: oc adm must-gather runs successfully for audit logs - fail due to startup log
2001442 - Empty termination.log file for the kube-apiserver has too permissive mode
2001479 - IBM Cloud DNS unable to create/update records
2001566 - Enable alerts for prometheus operator in UWM
2001575 - Clicking on the perspective switcher shows a white page with loader
2001577 - Quick search placeholder is not displayed properly when the search string is removed
2001578 - [e2e][automation] add tests for vm dashboard tab
2001605 - PVs remain in Released state for a long time after the claim is deleted
2001617 - BucketClass Creation is restricted on 1st page but enabled using side navigation options
2001620 - Cluster becomes degraded if it can't talk to Manila
2001760 - While creating 'Backing Store', 'Bucket Class', 'Namespace Store' user is navigated to 'Installed Operators' page after clicking on ODF
2001761 - Unable to apply cluster operator storage for SNO on GCP platform.
2001765 - Some error message in the log of diskmaker-manager caused confusion
2001784 - show loading page before final results instead of showing a transient message No log files exist
2001804 - Reload feature on Environment section in Build Config form does not work properly
2001810 - cluster admin unable to view BuildConfigs in all namespaces
2001817 - Failed to load RoleBindings list that will lead to ‘Role name’ is not able to be selected on Create RoleBinding page as well
2001823 - OCM controller must update operator status
2001825 - [SNO]ingress/authentication clusteroperator degraded when enable ccm from start
2001835 - Could not select image tag version when create app from dev console
2001855 - Add capacity is disabled for ocs-storagecluster
2001856 - Repeating event: MissingVersion no image found for operand pod
2001959 - Side nav list borders don't extend to edges of container
2002007 - Layout issue on "Something went wrong" page
2002010 - ovn-kube may never attempt to retry a pod creation
2002012 - Cannot change volume mode when cloning a VM from a template
2002027 - Two instances of Dotnet helm chart show as one in topology
2002075 - opm render does not automatically pulling in the image(s) used in the deployments
2002121 - [OVN] upgrades failed for IPI OSP16 OVN IPSec cluster
2002125 - Network policy details page heading should be updated to Network Policy details
2002133 - [e2e][automation] add support/virtualization and improve deleteResource
2002134 - [e2e][automation] add test to verify vm details tab
2002215 - Multipath day1 not working on s390x
2002238 - Image stream tag is not persisted when switching from yaml to form editor
2002262 - [vSphere] Incorrect user agent in vCenter sessions list
2002266 - SinkBinding create form doesn't allow to use subject name, instead of label selector
2002276 - OLM fails to upgrade operators immediately
2002300 - Altering the Schedule Profile configurations doesn't affect the placement of the pods
2002354 - Missing DU configuration "Done" status reporting during ZTP flow
2002362 - Dynamic Plugin - ConsoleRemotePlugin for webpack doesn't use commonjs
2002368 - samples should not go degraded when image allowedRegistries blocks imagestream creation
2002372 - Pod creation failed due to mismatched pod IP address in CNI and OVN
2002397 - Resources search is inconsistent
2002434 - CRI-O leaks some children PIDs
2002443 - Getting undefined error on create local volume set page
2002461 - DNS operator performs spurious updates in response to API's defaulting of service's internalTrafficPolicy
2002504 - When the openshift-cluster-storage-operator is degraded because of "VSphereProblemDetectorController_SyncError", the insights operator is not sending the logs from all pods.
2002559 - User preference for topology list view does not follow when a new namespace is created
2002567 - Upstream SR-IOV worker doc has broken links
2002588 - Change text to be sentence case to align with PF
2002657 - ovn-kube egress IP monitoring is using a random port over the node network
2002713 - CNO: OVN logs should have millisecond resolution
2002748 - [ICNI2] 'ErrorAddingLogicalPort' failed to handle external GW check: timeout waiting for namespace event
2002759 - Custom profile should not allow not including at least one required HTTP2 ciphersuite
2002763 - Two storage systems getting created with external mode RHCS
2002808 - KCM does not use web identity credentials
2002834 - Cluster-version operator does not remove unrecognized volume mounts
2002896 - Incorrect result return when user filter data by name on search page
2002950 - Why spec.containers.command is not created with "oc create deploymentconfig <dc-name> --image=<image> -- <command>"
2003096 - [e2e][automation] check bootsource URL is displaying on review step
2003113 - OpenShift Baremetal IPI installer uses first three defined nodes under hosts in install-config for master nodes instead of filtering the hosts with the master role
2003120 - CI: Uncaught error with ResizeObserver on operand details page
2003145 - Duplicate operand tab titles causes "two children with the same key" warning
2003164 - OLM, fatal error: concurrent map writes
2003178 - [FLAKE][knative] The UI doesn't show updated traffic distribution after accepting the form
2003193 - Kubelet/crio leaks netns and veth ports in the host
2003195 - OVN CNI should ensure host veths are removed
2003204 - Jenkins all new container images (openshift4/ose-jenkins) not supporting '-e JENKINS_PASSWORD=password' ENV which was working for old container images
2003206 - Namespace stuck terminating: Failed to delete all resource types, 1 remaining: unexpected items still remain in namespace
2003239 - "[sig-builds][Feature:Builds][Slow] can use private repositories as build input" tests fail outside of CI
2003244 - Revert libovsdb client code
2003251 - Patternfly components with list element has list item bullet when they should not.
2003252 - "[sig-builds][Feature:Builds][Slow] starting a build using CLI start-build test context override environment BUILD_LOGLEVEL in buildconfig" tests do not work as expected outside of CI
2003269 - Rejected pods should be filtered from admission regression
2003357 - QE- Removing the epic tags for gherkin tags related to 4.9 Release
2003426 - [e2e][automation] add test for vm details bootorder
2003496 - [e2e][automation] add test for vm resources requirment settings
2003641 - All metal ipi jobs are failing in 4.10
2003651 - ODF4.9+LSO4.8 installation via UI, StorageCluster move to error state
2003655 - [IPI ON-PREM] Keepalived chk_default_ingress track script failed even though default router pod runs on node
2003683 - Samples operator is panicking in CI
2003711 - [UI] Empty file ceph-external-cluster-details-exporter.py downloaded from external cluster "Connection Details" page
2003715 - Error on creating local volume set after selection of the volume mode
2003743 - Remove workaround keeping /boot RW for kdump support
2003775 - etcd pod on CrashLoopBackOff after master replacement procedure
2003788 - CSR reconciler report error constantly when BYOH CSR approved by other Approver
2003792 - Monitoring metrics query graph flyover panel is useless
2003808 - Add Sprint 207 translations
2003845 - Project admin cannot access image vulnerabilities view
2003859 - sdn emits events with garbage messages
2003896 - (release-4.10) ApiRequestCounts conditional gatherer
2004009 - 4.10: Fix multi-az zone scheduling e2e for 5 control plane replicas
2004051 - CMO can report as being Degraded while node-exporter is deployed on all nodes
2004059 - [e2e][automation] fix current tests for downstream
2004060 - Trying to use basic spring boot sample causes crash on Firefox
2004101 - [UI] When creating storageSystem deployment type dropdown under advanced setting doesn't close after selection
2004127 - [flake] openshift-controller-manager event reason/SuccessfulDelete occurs too frequently
2004203 - build config's created prior to 4.8 with image change triggers can result in trigger storm in OCM/openshift-apiserver
2004313 - [RHOCP 4.9.0-rc.0] Failing to deploy Azure cluster from the macOS installer - ignition_bootstrap.ign: no such file or directory
2004449 - Boot option recovery menu prevents image boot
2004451 - The backup filename displayed in the RecentBackup message is incorrect
2004459 - QE - Modified the AddFlow gherkin scripts and automation scripts
2004508 - TuneD issues with the recent ConfigParser changes.
2004510 - openshift-gitops operator hooks gets unauthorized (401) errors during jobs executions
2004542 - [osp][octavia lb] cannot create LoadBalancer type svcs
2004578 - Monitoring and node labels missing for an external storage platform
2004585 - prometheus-k8s-0 cpu usage keeps increasing for the first 3 days
2004596 - [4.10] Bootimage bump tracker
2004597 - Duplicate ramdisk log containers running
2004600 - Duplicate ramdisk log containers running
2004609 - output of "crictl inspectp" is not complete
2004625 - BMC credentials could be logged if they change
2004632 - When LE takes a large amount of time, multiple whereabouts are seen
2004721 - ptp/worker custom threshold doesn't change ptp events threshold
2004736 - [knative] Create button on new Broker form is inactive despite form being filled
2004796 - [e2e][automation] add test for vm scheduling policy
2004814 - (release-4.10) OCM controller - change type of the etc-pki-entitlement secret to opaque
2004870 - [External Mode] Insufficient spacing along y-axis in RGW Latency Performance Card
2004901 - [e2e][automation] improve kubevirt devconsole tests
2004962 - Console frontend job consuming too much CPU in CI
2005014 - state of ODF StorageSystem is misreported during installation or uninstallation
2005052 - Adding a MachineSet selector matchLabel causes orphaned Machines
2005179 - pods status filter is not taking effect
2005182 - sync list of deprecated apis about to be removed
2005282 - Storage cluster name is given as title in StorageSystem details page
2005355 - setuptools 58 makes Kuryr CI fail
2005407 - ClusterNotUpgradeable Alert should be set to Severity Info
2005415 - PTP operator with sidecar api configured throws bind: address already in use
2005507 - SNO spoke cluster failing to reach coreos.live.rootfs_url is missing url in console
2005554 - The switch status of the button "Show default project" is not revealed correctly in code
2005581 - 4.8.12 to 4.9 upgrade hung due to cluster-version-operator pod CrashLoopBackOff: error creating clients: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
2005761 - QE - Implementing crw-basic feature file
2005783 - Fix accessibility issues in the "Internal" and "Internal - Attached Mode" Installation Flow
2005811 - vSphere Problem Detector operator - ServerFaultCode: InvalidProperty
2005854 - SSH NodePort service is created for each VM
2005901 - KS, KCM and KA going Degraded during master nodes upgrade
2005902 - Current UI flow for MCG only deployment is confusing and doesn't reciprocate any message to the end-user
2005926 - PTP operator NodeOutOfPTPSync rule is using max offset from the master instead of openshift_ptp_clock_state metrics
2005971 - Change telemeter to report the Application Services product usage metrics
2005997 - SELinux domain container_logreader_t does not have a policy to follow sym links for log files
2006025 - Description to use an existing StorageClass while creating StorageSystem needs to be re-phrased
2006060 - ocs-storagecluster-storagesystem details are missing on UI for MCG Only and MCG only in LSO mode deployment types
2006101 - Power off fails for drivers that don't support Soft power off
2006243 - Metal IPI upgrade jobs are running out of disk space
2006291 - bootstrapProvisioningIP set incorrectly when provisioningNetworkCIDR doesn't use the 0th address
2006308 - Backing Store YAML tab on click displays a blank screen on UI
2006325 - Multicast is broken across nodes
2006329 - Console only allows Web Terminal Operator to be installed in OpenShift Operators
2006364 - IBM Cloud: Set resourceGroupId for resourceGroups, not simply resource
2006561 - [sig-instrumentation] Prometheus when installed on the cluster shouldn't have failing rules evaluation [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2006690 - OS boot failure "x64 Exception Type 06 - Invalid Opcode Exception"
2006714 - add retry for etcd errors in kube-apiserver
2006767 - KubePodCrashLooping may not fire
2006803 - Set CoreDNS cache entries for forwarded zones
2006861 - Add Sprint 207 part 2 translations
2006945 - race condition can cause crashlooping bootstrap kube-apiserver in cluster-bootstrap
2006947 - e2e-aws-proxy for 4.10 is permafailing with samples operator errors
2006975 - clusteroperator/etcd status condition should not change reasons frequently due to EtcdEndpointsDegraded
2007085 - Intermittent failure mounting /run/media/iso when booting live ISO from USB stick
2007136 - Creation of BackingStore, BucketClass, NamespaceStore fails
2007271 - CI Integration for Knative test cases
2007289 - kubevirt tests are failing in CI
2007322 - Devfile/Dockerfile import does not work for unsupported git host
2007328 - Updated patternfly to v4.125.3 and pf.quickstarts to v1.2.3.
2007379 - Events are not generated for master offset for ordinary clock
2007443 - [ICNI 2.0] Loadbalancer pods do not establish BFD sessions with all workers that host pods for the routed namespace
2007455 - cluster-etcd-operator: render command should fail if machineCidr contains reserved address
2007495 - Large label value for the metric kubelet_started_pods_errors_total with label message when there is a error
2007522 - No new local-storage-operator-metadata-container is build for 4.10
2007551 - No new ose-aws-efs-csi-driver-operator-bundle-container is build for 4.10
2007580 - Azure cilium installs are failing e2e tests
2007581 - Too many haproxy processes in default-router pod causing high load average after upgrade from v4.8.3 to v4.8.10
2007677 - Regression: core container io performance metrics are missing for pod, qos, and system slices on nodes
2007692 - 4.9 "old-rhcos" jobs are permafailing with storage test failures
2007710 - ci/prow/e2e-agnostic-cmd job is failing on prow
2007757 - must-gather extracts imagestreams in the "openshift" namespace, but not Templates
2007802 - AWS machine actuator get stuck if machine is completely missing
2008096 - TestAWSFinalizerDeleteS3Bucket sometimes fails to teardown operator
2008119 - The serviceAccountIssuer field on Authentication CR is reseted to “” when installation process
2008151 - Topology breaks on clicking in empty state
2008185 - Console operator go.mod should use go 1.16.version
2008201 - openstack-az job is failing on haproxy idle test
2008207 - vsphere CSI driver doesn't set resource limits
2008223 - gather_audit_logs: fix oc command line to get the current audit profile
2008235 - The Save button in the Edit DC form remains disabled
2008256 - Update Internationalization README with scope info
2008321 - Add correct documentation link for MON_DISK_LOW
2008462 - Disable PodSecurity feature gate for 4.10
2008490 - Backing store details page does not contain all the kebab actions.
2008521 - gcp-hostname service should correct invalid search entries in resolv.conf
2008532 - CreateContainerConfigError:: failed to prepare subPath for volumeMount
2008539 - Registry doesn't fall back to secondary ImageContentSourcePolicy Mirror
2008540 - HighlyAvailableWorkloadIncorrectlySpread always fires on upgrade on cluster with two workers
2008599 - Azure Stack UPI does not have Internal Load Balancer
2008612 - Plugin asset proxy does not pass through browser cache headers
2008712 - VPA webhook timeout prevents all pods from starting
2008733 - kube-scheduler: exposed /debug/pprof port
2008911 - Prometheus repeatedly scaling prometheus-operator replica set
2008926 - [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources [Serial] [Suite:openshift/conformance/serial]
2008987 - OpenShift SDN Hosted Egress IP's are not being scheduled to nodes after upgrade to 4.8.12
2009055 - Instances of OCS to be replaced with ODF on UI
2009078 - NetworkPodsCrashLooping alerts in upgrade CI jobs
2009083 - opm blocks pruning of existing bundles during add
2009111 - [IPI-on-GCP] 'Install a cluster with nested virtualization enabled' failed due to unable to launch compute instances
2009131 - [e2e][automation] add more test about vmi
2009148 - [e2e][automation] test vm nic presets and options
2009233 - ACM policy object generated by PolicyGen conflicting with OLM Operator
2009253 - [BM] [IPI] [DualStack] apiVIP and ingressVIP should be of the same primary IP family
2009298 - Service created for VM SSH access is not owned by the VM and thus is not deleted if the VM is deleted
2009384 - UI changes to support BindableKinds CRD changes
2009404 - ovnkube-node pod enters CrashLoopBackOff after OVN_IMAGE is swapped
2009424 - Deployment upgrade is failing availability check
2009454 - Change web terminal subscription permissions from get to list
2009465 - container-selinux 
should come from rhel8-appstream\n2009514 - Bump OVS to 2.16-15\n2009555 - Supermicro X11 system not booting from vMedia with AI\n2009623 - Console: Observe \u003e Metrics page: Table pagination menu shows bullet points\n2009664 - Git Import: Edit of knative service doesn\u0027t work as expected for git import flow\n2009699 - Failure to validate flavor RAM\n2009754 - Footer is not sticky anymore in import forms\n2009785 - CRI-O\u0027s version file should be pinned by MCO\n2009791 - Installer: ibmcloud ignores install-config values\n2009823 - [sig-arch] events should not repeat pathologically - reason/VSphereOlderVersionDetected Marking cluster un-upgradeable because one or more VMs are on hardware version vmx-13\n2009840 - cannot build extensions on aarch64 because of unavailability of rhel-8-advanced-virt repo\n2009859 - Large number of sessions created by vmware-vsphere-csi-driver-operator during e2e tests\n2009873 - Stale Logical Router Policies and Annotations for a given node\n2009879 - There should be test-suite coverage to ensure admin-acks work as expected\n2009888 - SRO package name collision between official and community version\n2010073 - uninstalling and then reinstalling sriov-network-operator is not working\n2010174 - 2 PVs get created unexpectedly with different paths that actually refer to the same device on the node. 
\n2010181 - Environment variables not getting reset on reload on deployment edit form\n2010310 - [sig-instrumentation][Late] OpenShift alerting rules should have description and summary annotations [Skipped:Disconnected] [Suite:openshift/conformance/parallel]\n2010341 - OpenShift Alerting Rules Style-Guide Compliance\n2010342 - Local console builds can have out of memory errors\n2010345 - OpenShift Alerting Rules Style-Guide Compliance\n2010348 - Reverts PIE build mode for K8S components\n2010352 - OpenShift Alerting Rules Style-Guide Compliance\n2010354 - OpenShift Alerting Rules Style-Guide Compliance\n2010359 - OpenShift Alerting Rules Style-Guide Compliance\n2010368 - OpenShift Alerting Rules Style-Guide Compliance\n2010376 - OpenShift Alerting Rules Style-Guide Compliance\n2010662 - Cluster is unhealthy after image-registry-operator tests\n2010663 - OpenShift Alerting Rules Style-Guide Compliance (ovn-kubernetes subcomponent)\n2010665 - Bootkube tries to use oc after cluster bootstrap is done and there is no API\n2010698 - [BM] [IPI] [Dual Stack] Installer must ensure ipv6 short forms too if clusterprovisioning IP is specified as ipv6 address\n2010719 - etcdHighNumberOfFailedGRPCRequests runbook is missing\n2010864 - Failure building EFS operator\n2010910 - ptp worker events unable to identify interface for multiple interfaces\n2010911 - RenderOperatingSystem() returns wrong OS version on OCP 4.7.24\n2010921 - Azure Stack Hub does not handle additionalTrustBundle\n2010931 - SRO CSV uses non default category \"Drivers and plugins\"\n2010946 - concurrent CRD from ovirt-csi-driver-operator gets reconciled by CVO after deployment, changing CR as well. 
\n2011038 - optional operator conditions are confusing\n2011063 - CVE-2021-39226 grafana: Snapshot authentication bypass\n2011171 - diskmaker-manager constantly redeployed by LSO when creating LV\u0027s\n2011293 - Build pod are not pulling images if we are not explicitly giving the registry name with the image\n2011368 - Tooltip in pipeline visualization shows misleading data\n2011386 - [sig-arch] Check if alerts are firing during or after upgrade success --- alert KubePodNotReady fired for 60 seconds with labels\n2011411 - Managed Service\u0027s Cluster overview page contains link to missing Storage dashboards\n2011443 - Cypress tests assuming Admin Perspective could fail on shared/reference cluster\n2011513 - Kubelet rejects pods that use resources that should be freed by completed pods\n2011668 - Machine stuck in deleting phase in VMware \"reconciler failed to Delete machine\"\n2011693 - (release-4.10) \"insightsclient_request_recvreport_total\" metric is always incremented\n2011698 - After upgrading cluster to 4.8 the kube-state-metrics service doesn\u0027t export namespace labels anymore\n2011733 - Repository README points to broken documentarion link\n2011753 - Ironic resumes clean before raid configuration job is actually completed\n2011809 - The nodes page in the openshift console doesn\u0027t work. 
You just get a blank page\n2011822 - Obfuscation doesn\u0027t work at clusters with OVN\n2011882 - SRO helm charts not synced with templates\n2011893 - Validation: BMC driver ipmi is not supported for secure UEFI boot\n2011896 - [4.10] ClusterVersion Upgradeable=False MultipleReasons should include all messages\n2011903 - vsphere-problem-detector: session leak\n2011927 - OLM should allow users to specify a proxy for GRPC connections\n2011956 - [tracker] Kubelet rejects pods that use resources that should be freed by completed pods\n2011960 - [tracker] Storage operator is not available after reboot cluster instances\n2011971 - ICNI2 pods are stuck in ContainerCreating state\n2011972 - Ingress operator not creating wildcard route for hypershift clusters\n2011977 - SRO bundle references non-existent image\n2012069 - Refactoring Status controller\n2012177 - [OCP 4.9 + OCS 4.8.3] Overview tab is missing under Storage after successful deployment on UI\n2012228 - ibmcloud: credentialsrequests invalid for machine-api-operator: resource-group\n2012233 - [IBMCLOUD] IPI: \"Exceeded limit of remote rules per security group (the limit is 5 remote rules per security group)\"\n2012235 - [IBMCLOUD] IPI: IBM cloud provider requires ResourceGroupName in cloudproviderconfig\n2012317 - Dynamic Plugins: ListPageCreateDropdown items cut off\n2012407 - [e2e][automation] improve vm tab console tests\n2012426 - ThanosSidecarBucketOperationsFailed/ThanosSidecarUnhealthy alerts don\u0027t have namespace label\n2012562 - migration condition is not detected in list view\n2012770 - when using expression metric openshift_apps_deploymentconfigs_last_failed_rollout_time namespace label is re-written\n2012780 - The port 50936 used by haproxy is occupied by kube-apiserver\n2012838 - Setting the default maximum container root partition size for Overlay with CRI-O stop working\n2012902 - Neutron Ports assigned to Completed Pods are not reused Edit\n2012915 - kube_persistentvolumeclaim_labels and 
kube_persistentvolume_labels are missing in OCP 4.8 monitoring stack\n2012971 - Disable operands deletes\n2013034 - Cannot install to openshift-nmstate namespace\n2013127 - OperatorHub links could not be opened in a new tabs (sharing and open a deep link works fine)\n2013199 - post reboot of node SRIOV policy taking huge time\n2013203 - UI breaks when trying to create block pool before storage cluster/system creation\n2013222 - Full breakage for nightly payload promotion\n2013273 - Nil pointer exception when phc2sys options are missing\n2013321 - TuneD: high CPU utilization of the TuneD daemon. \n2013416 - Multiple assets emit different content to the same filename\n2013431 - Application selector dropdown has incorrect font-size and positioning\n2013528 - mapi_current_pending_csr is always set to 1 on OpenShift Container Platform 4.8\n2013545 - Service binding created outside topology is not visible\n2013599 - Scorecard support storage is not included in ocp4.9\n2013632 - Correction/Changes in Quick Start Guides for ODF 4.9 (Install ODF guide)\n2013646 - fsync controller will show false positive if gaps in metrics are observed. 
\n2013710 - ZTP Operator subscriptions for 4.9 release branch should point to 4.9 by default\n2013751 - Service details page is showing wrong in-cluster hostname\n2013787 - There are two tittle \u0027Network Attachment Definition Details\u0027 on NAD details page\n2013871 - Resource table headings are not aligned with their column data\n2013895 - Cannot enable accelerated network via MachineSets on Azure\n2013920 - \"--collector.filesystem.ignored-mount-points is DEPRECATED and will be removed in 2.0.0, use --collector.filesystem.mount-points-exclude\"\n2013930 - Create Buttons enabled for Bucket Class, Backingstore and Namespace Store in the absence of Storagesystem(or MCG)\n2013969 - oVIrt CSI driver fails on creating PVCs on hosted engine storage domain\n2013990 - Observe dashboard crashs on reload when perspective has changed (in another tab)\n2013996 - Project detail page: Action \"Delete Project\" does nothing for the default project\n2014071 - Payload imagestream new tags not properly updated during cluster upgrade\n2014153 - SRIOV exclusive pooling\n2014202 - [OCP-4.8.10] OVN-Kubernetes: service IP is not responding when egressIP set to the namespace\n2014238 - AWS console test is failing on importing duplicate YAML definitions\n2014245 - Several aria-labels, external links, and labels aren\u0027t internationalized\n2014248 - Several files aren\u0027t internationalized\n2014352 - Could not filter out machine by using node name on machines page\n2014464 - Unexpected spacing/padding below navigation groups in developer perspective\n2014471 - Helm Release notes tab is not automatically open after installing a chart for other languages\n2014486 - Integration Tests: OLM single namespace operator tests failing\n2014488 - Custom operator cannot change orders of condition tables\n2014497 - Regex slows down different forms and creates too much recursion errors in the log\n2014538 - Kuryr controller crash looping on self._get_vip_port(loadbalancer).id 
\u0027NoneType\u0027 object has no attribute \u0027id\u0027\n2014614 - Metrics scraping requests should be assigned to exempt priority level\n2014710 - TestIngressStatus test is broken on Azure\n2014954 - The prometheus-k8s-{0,1} pods are CrashLoopBackoff repeatedly\n2014995 - oc adm must-gather cannot gather audit logs with \u0027None\u0027 audit profile\n2015115 - [RFE] PCI passthrough\n2015133 - [IBMCLOUD] ServiceID API key credentials seems to be insufficient for ccoctl \u0027--resource-group-name\u0027 parameter\n2015154 - Support ports defined networks and primarySubnet\n2015274 - Yarn dev fails after updates to dynamic plugin JSON schema logic\n2015337 - 4.9.0 GA MetalLB operator image references need to be adjusted to match production\n2015386 - Possibility to add labels to the built-in OCP alerts\n2015395 - Table head on Affinity Rules modal is not fully expanded\n2015416 - CI implementation for Topology plugin\n2015418 - Project Filesystem query returns No datapoints found\n2015420 - No vm resource in project view\u0027s inventory\n2015422 - No conflict checking on snapshot name\n2015472 - Form and YAML view switch button should have distinguishable status\n2015481 - [4.10] sriov-network-operator daemon pods are failing to start\n2015493 - Cloud Controller Manager Operator does not respect \u0027additionalTrustBundle\u0027 setting\n2015496 - Storage - PersistentVolumes : Claim colum value \u0027No Claim\u0027 in English\n2015498 - [UI] Add capacity when not applicable (for MCG only deployment and External mode cluster) fails to pass any info. to user and tries to just load a blank screen on \u0027Add Capacity\u0027 button click\n2015506 - Home - Search - Resources - APIRequestCount : hard to select an item from ellipsis menu\n2015515 - Kubelet checks all providers even if one is configured: NoCredentialProviders: no valid providers in chain. 
\n2015535 - Administration - ResourceQuotas - ResourceQuota details: Inside Pie chart \u0027x% used\u0027 is in English\n2015549 - Observe - Metrics: Column heading and pagination text is in English\n2015557 - Workloads - DeploymentConfigs : Error message is in English\n2015568 - Compute - Nodes : CPU column\u0027s values are in English\n2015635 - Storage operator fails causing installation to fail on ASH\n2015660 - \"Finishing boot source customization\" screen should not use term \"patched\"\n2015793 - [hypershift] The collect-profiles job\u0027s pods should run on the control-plane node\n2015806 - Metrics view in Deployment reports \"Forbidden\" when not cluster-admin\n2015819 - Conmon sandbox processes run on non-reserved CPUs with workload partitioning\n2015837 - OS_CLOUD overwrites install-config\u0027s platform.openstack.cloud\n2015950 - update from 4.7.22 to 4.8.11 is failing due to large amount of secrets to watch\n2015952 - RH CodeReady Workspaces Operator in e2e testing will soon fail\n2016004 - [RFE] RHCOS: help determining whether a user-provided image was already booted (Ignition provisioning already performed)\n2016008 - [4.10] Bootimage bump tracker\n2016052 - No e2e CI presubmit configured for release component azure-file-csi-driver\n2016053 - No e2e CI presubmit configured for release component azure-file-csi-driver-operator\n2016054 - No e2e CI presubmit configured for release component cluster-autoscaler\n2016055 - No e2e CI presubmit configured for release component console\n2016058 - openshift-sync does not synchronise in \"ose-jenkins:v4.8\"\n2016064 - No e2e CI presubmit configured for release component ibm-cloud-controller-manager\n2016065 - No e2e CI presubmit configured for release component ibmcloud-machine-controllers\n2016175 - Pods get stuck in ContainerCreating state when attaching volumes fails on SNO clusters. 
\n2016179 - Add Sprint 208 translations\n2016228 - Collect Profiles pprof secret is hardcoded to openshift-operator-lifecycle-manager\n2016235 - should update to 7.5.11 for grafana resources version label\n2016296 - Openshift virtualization : Create Windows Server 2019 VM using template : Fails\n2016334 - shiftstack: SRIOV nic reported as not supported\n2016352 - Some pods start before CA resources are present\n2016367 - Empty task box is getting created for a pipeline without finally task\n2016435 - Duplicate AlertmanagerClusterFailedToSendAlerts alerts\n2016438 - Feature flag gating is missing in few extensions contributed via knative plugin\n2016442 - OCPonRHV: pvc should be in Bound state and without error when choosing default sc\n2016446 - [OVN-Kubernetes] Egress Networkpolicy is failing Intermittently for statefulsets\n2016453 - Complete i18n for GaugeChart defaults\n2016479 - iface-id-ver is not getting updated for existing lsp\n2016925 - Dashboards with All filter, change to a specific value and change back to All, data will disappear\n2016951 - dynamic actions list is not disabling \"open console\" for stopped vms\n2016955 - m5.large instance type for bootstrap node is hardcoded causing deployments to fail if instance type is not available\n2016988 - NTO does not set io_timeout and max_retries for AWS Nitro instances\n2017016 - [REF] Virtualization menu\n2017036 - [sig-network-edge][Feature:Idling] Unidling should handle many TCP connections fails in periodic-ci-openshift-release-master-ci-4.9-e2e-openstack-ovn\n2017050 - Dynamic Plugins: Shared modules loaded multiple times, breaking use of PatternFly\n2017130 - t is not a function error navigating to details page\n2017141 - Project dropdown has a dynamic inline width added which can cause min-width issue\n2017244 - ovirt csi operator static files creation is in the wrong order\n2017276 - [4.10] Volume mounts not created with the correct security context\n2017327 - When run opm index prune failed with 
error removing operator package cic-operator FOREIGN KEY constraint failed. \n2017427 - NTO does not restart TuneD daemon when profile application is taking too long\n2017535 - Broken Argo CD link image on GitOps Details Page\n2017547 - Siteconfig application sync fails with The AgentClusterInstall is invalid: spec.provisionRequirements.controlPlaneAgents: Required value when updating images references\n2017564 - On-prem prepender dispatcher script overwrites DNS search settings\n2017565 - CCMO does not handle additionalTrustBundle on Azure Stack\n2017566 - MetalLB: Web Console -Create Address pool form shows address pool name twice\n2017606 - [e2e][automation] add test to verify send key for VNC console\n2017650 - [OVN]EgressFirewall cannot be applied correctly if cluster has windows nodes\n2017656 - VM IP address is \"undefined\" under VM details -\u003e ssh field\n2017663 - SSH password authentication is disabled when public key is not supplied\n2017680 - [gcp] Couldn\u2019t enable support for instances with GPUs on GCP\n2017732 - [KMS] Prevent creation of encryption enabled storageclass without KMS connection set\n2017752 - (release-4.10) obfuscate identity provider attributes in collected authentication.operator.openshift.io resource\n2017756 - overlaySize setting on containerruntimeconfig is ignored due to cri-o defaults\n2017761 - [e2e][automation] dummy bug for 4.9 test dependency\n2017872 - Add Sprint 209 translations\n2017874 - The installer is incorrectly checking the quota for X instances instead of G and VT instances\n2017879 - Add Chinese translation for \"alternate\"\n2017882 - multus: add handling of pod UIDs passed from runtime\n2017909 - [ICNI 2.0] ovnkube-masters stop processing add/del events for pods\n2018042 - HorizontalPodAutoscaler CPU averageValue did not show up in HPA metrics GUI\n2018093 - Managed cluster should ensure control plane pods do not run in best-effort QoS\n2018094 - the tooltip length is limited\n2018152 - CNI pod is not 
restarted when It cannot start servers due to ports being used\n2018208 - e2e-metal-ipi-ovn-ipv6 are failing 75% of the time\n2018234 - user settings are saved in local storage instead of on cluster\n2018264 - Delete Export button doesn\u0027t work in topology sidebar (general issue with unknown CSV?)\n2018272 - Deployment managed by link and topology sidebar links to invalid resource page (at least for Exports)\n2018275 - Topology graph doesn\u0027t show context menu for Export CSV\n2018279 - Edit and Delete confirmation modals for managed resource should close when the managed resource is clicked\n2018380 - Migrate docs links to access.redhat.com\n2018413 - Error: context deadline exceeded, OCP 4.8.9\n2018428 - PVC is deleted along with VM even with \"Delete Disks\" unchecked\n2018445 - [e2e][automation] enhance tests for downstream\n2018446 - [e2e][automation] move tests to different level\n2018449 - [e2e][automation] add test about create/delete network attachment definition\n2018490 - [4.10] Image provisioning fails with file name too long\n2018495 - Fix typo in internationalization README\n2018542 - Kernel upgrade does not reconcile DaemonSet\n2018880 - Get \u0027No datapoints found.\u0027 when query metrics about alert rule KubeCPUQuotaOvercommit and KubeMemoryQuotaOvercommit\n2018884 - QE - Adapt crw-basic feature file to OCP 4.9/4.10 changes\n2018935 - go.sum not updated, that ART extracts version string from, WAS: Missing backport from 4.9 for Kube bump PR#950\n2018965 - e2e-metal-ipi-upgrade is permafailing in 4.10\n2018985 - The rootdisk size is 15Gi of windows VM in customize wizard\n2019001 - AWS: Operator degraded (CredentialsFailing): 1 of 6 credentials requests are failing to sync. 
\n2019096 - Update SRO leader election timeout to support SNO\n2019129 - SRO in operator hub points to wrong repo for README\n2019181 - Performance profile does not apply\n2019198 - ptp offset metrics are not named according to the log output\n2019219 - [IBMCLOUD]: cloud-provider-ibm missing IAM permissions in CCCMO CredentialRequest\n2019284 - Stop action should not in the action list while VMI is not running\n2019346 - zombie processes accumulation and Argument list too long\n2019360 - [RFE] Virtualization Overview page\n2019452 - Logger object in LSO appends to existing logger recursively\n2019591 - Operator install modal body that scrolls has incorrect padding causing shadow position to be incorrect\n2019634 - Pause and migration is enabled in action list for a user who has view only permission\n2019636 - Actions in VM tabs should be disabled when user has view only permission\n2019639 - \"Take snapshot\" should be disabled while VM image is still been importing\n2019645 - Create button is not removed on \"Virtual Machines\" page for view only user\n2019646 - Permission error should pop-up immediately while clicking \"Create VM\" button on template page for view only user\n2019647 - \"Remove favorite\" and \"Create new Template\" should be disabled in template action list for view only user\n2019717 - cant delete VM with un-owned pvc attached\n2019722 - The shared-resource-csi-driver-node pod runs as \u201cBestEffort\u201d qosClass\n2019739 - The shared-resource-csi-driver-node uses imagePullPolicy as \"Always\"\n2019744 - [RFE] Suggest users to download newest RHEL 8 version\n2019809 - [OVN][Upgrade] After upgrade to 4.7.34 ovnkube-master pods are in CrashLoopBackOff/ContainerCreating and other multiple issues at OVS/OVN level\n2019827 - Display issue with top-level menu items running demo plugin\n2019832 - 4.10 Nightlies blocked: Failed to upgrade authentication, operator was degraded\n2019886 - Kuryr unable to finish ports recovery upon controller 
restart\n2019948 - [RFE] Restructring Virtualization links\n2019972 - The Nodes section doesn\u0027t display the csr of the nodes that are trying to join the cluster\n2019977 - Installer doesn\u0027t validate region causing binary to hang with a 60 minute timeout\n2019986 - Dynamic demo plugin fails to build\n2019992 - instance:node_memory_utilisation:ratio metric is incorrect\n2020001 - Update dockerfile for demo dynamic plugin to reflect dir change\n2020003 - MCD does not regard \"dangling\" symlinks as a files, attempts to write through them on next backup, resulting in \"not writing through dangling symlink\" error and degradation. \n2020107 - cluster-version-operator: remove runlevel from CVO namespace\n2020153 - Creation of Windows high performance VM fails\n2020216 - installer: Azure storage container blob where is stored bootstrap.ign file shouldn\u0027t be public\n2020250 - Replacing deprecated ioutil\n2020257 - Dynamic plugin with multiple webpack compilation passes may fail to build\n2020275 - ClusterOperators link in console returns blank page during upgrades\n2020377 - permissions error while using tcpdump option with must-gather\n2020489 - coredns_dns metrics don\u0027t include the custom zone metrics data due to CoreDNS prometheus plugin is not defined\n2020498 - \"Show PromQL\" button is disabled\n2020625 - [AUTH-52] User fails to login from web console with keycloak OpenID IDP after enable group membership sync feature\n2020638 - [4.7] CI conformance test failures related to CustomResourcePublishOpenAPI\n2020664 - DOWN subports are not cleaned up\n2020904 - When trying to create a connection from the Developer view between VMs, it fails\n2021016 - \u0027Prometheus Stats\u0027 of dashboard \u0027Prometheus Overview\u0027 miss data on console compared with Grafana\n2021017 - 404 page not found error on knative eventing page\n2021031 - QE - Fix the topology CI scripts\n2021048 - [RFE] Added MAC Spoof check\n2021053 - Metallb operator presented as 
community operator\n2021067 - Extensive number of requests from storage version operator in cluster\n2021081 - Missing PolicyGenTemplate for configuring Local Storage Operator LocalVolumes\n2021135 - [azure-file-csi-driver] \"make unit-test\" returns non-zero code, but tests pass\n2021141 - Cluster should allow a fast rollout of kube-apiserver is failing on single node\n2021151 - Sometimes the DU node does not get the performance profile configuration applied and MachineConfigPool stays stuck in Updating\n2021152 - imagePullPolicy is \"Always\" for ptp operator images\n2021191 - Project admins should be able to list available network attachment defintions\n2021205 - Invalid URL in git import form causes validation to not happen on URL change\n2021322 - cluster-api-provider-azure should populate purchase plan information\n2021337 - Dynamic Plugins: ResourceLink doesn\u0027t render when passed a groupVersionKind\n2021364 - Installer requires invalid AWS permission s3:GetBucketReplication\n2021400 - Bump documentationBaseURL to 4.10\n2021405 - [e2e][automation] VM creation wizard Cloud Init editor\n2021433 - \"[sig-builds][Feature:Builds][pullsearch] docker build where the registry is not specified\" test fail permanently on disconnected\n2021466 - [e2e][automation] Windows guest tool mount\n2021544 - OCP 4.6.44 - Ingress VIP assigned as secondary IP in ovs-if-br-ex and added to resolv.conf as nameserver\n2021551 - Build is not recognizing the USER group from an s2i image\n2021607 - Unable to run openshift-install with a vcenter hostname that begins with a numeric character\n2021629 - api request counts for current hour are incorrect\n2021632 - [UI] Clicking on odf-operator breadcrumb from StorageCluster details page displays empty page\n2021693 - Modals assigned modal-lg class are no longer the correct width\n2021724 - Observe \u003e Dashboards: Graph lines are not visible when obscured by other lines\n2021731 - CCO occasionally down, reporting 
networksecurity.googleapis.com API as disabled
2021936 - Kubelet version in RPMs should be using Dockerfile label instead of git tags
2022050 - [BM][IPI] Failed during bootstrap - unable to read client-key /var/lib/kubelet/pki/kubelet-client-current.pem
2022053 - dpdk application with vhost-net is not able to start
2022114 - Console logging every proxy request
2022144 - 1 of 3 ovnkube-master pods stuck in clbo after ipi bm deployment - dualstack (Intermittent)
2022251 - wait interval in case of a failed upload due to 403 is unnecessarily long
2022399 - MON_DISK_LOW troubleshooting guide link when clicked, gives 404 error.
2022447 - ServiceAccount in manifests conflicts with OLM
2022502 - Patternfly tables with a checkbox column are not displaying correctly because of conflicting css rules.
2022509 - getOverrideForManifest does not check manifest.GVK.Group
2022536 - WebScale: duplicate ecmp next hop error caused by multiple of the same gateway IPs in ovnkube cache
2022612 - no namespace field for "Kubernetes / Compute Resources / Namespace (Pods)" admin console dashboard
2022627 - Machine object not picking up external FIP added to an openstack vm
2022646 - configure-ovs.sh failure - Error: unknown connection 'WARN:'
2022707 - Observe / monitoring dashboard shows forbidden errors on Dev Sandbox
2022801 - Add Sprint 210 translations
2022811 - Fix kubelet log rotation file handle leak
2022812 - [SCALE] ovn-kube service controller executes unnecessary load balancer operations
2022824 - Large number of sessions created by vmware-vsphere-csi-driver-operator during e2e tests
2022880 - Pipeline renders with minor visual artifact with certain task dependencies
2022886 - Incorrect URL in operator description
2023042 - CRI-O filters custom runtime allowed annotation when both custom workload and custom runtime sections specified under the config
2023060 - [e2e][automation] Windows VM with CDROM migration
2023077 - [e2e][automation] Home Overview Virtualization status
2023090 - [e2e][automation] Examples of Import URL for VM templates
2023102 - [e2e][automation] Cloudinit disk of VM from custom template
2023216 - ACL for a deleted egressfirewall still present on node join switch
2023228 - Remove Tech preview badge on Trigger components 1.6 OSP on OCP 4.9
2023238 - [sig-devex][Feature:ImageEcosystem][python][Slow] hot deploy for openshift python image Django example should work with hot deploy
2023342 - SCC admission should take ephemeralContainers into account
2023356 - Devfiles can't be loaded in Safari on macOS (403 - Forbidden)
2023434 - Update Azure Machine Spec API to accept Marketplace Images
2023500 - Latency experienced while waiting for volumes to attach to node
2023522 - can't remove package from index: database is locked
2023560 - "Network Attachment Definitions" has no project field on the top in the list view
2023592 - [e2e][automation] add mac spoof check for nad
2023604 - ACL violation when deleting a provisioning-configuration resource
2023607 - console returns blank page when normal user without any projects visit Installed Operators page
2023638 - Downgrade support level for extended control plane integration to Dev Preview
2023657 - inconsistent behaviours of adding ssh key on rhel node between 4.9 and 4.10
2023675 - Changing CNV Namespace
2023779 - Fix Patch 104847 in 4.9
2023781 - initial hardware devices is not loading in wizard
2023832 - CCO updates lastTransitionTime for non-Status changes
2023839 - Bump recommended FCOS to 34.20211031.3.0
2023865 - Console css overrides prevent dynamic plug-in PatternFly tables from displaying correctly
2023950 - make test-e2e-operator on kubernetes-nmstate results in failure to pull image from "registry:5000" repository
2023985 - [4.10] OVN idle service cannot be accessed after upgrade from 4.8
2024055 - External DNS added extra prefix for the TXT record
2024108 - Occasionally node remains in SchedulingDisabled state even after update has been completed sucessfully
2024190 - e2e-metal UPI is permafailing with inability to find rhcos.json
2024199 - 400 Bad Request error for some queries for the non admin user
2024220 - Cluster monitoring checkbox flickers when installing Operator in all-namespace mode
2024262 - Sample catalog is not displayed when one API call to the backend fails
2024309 - cluster-etcd-operator: defrag controller needs to provide proper observability
2024316 - modal about support displays wrong annotation
2024328 - [oVirt / RHV] PV disks are lost when machine deleted while node is disconnected
2024399 - Extra space is in the translated text of "Add/Remove alternate service" on Create Route page
2024448 - When ssh_authorized_keys is empty in form view it should not appear in yaml view
2024493 - Observe > Alerting > Alerting rules page throws error trying to destructure undefined
2024515 - test-blocker: Ceph-storage-plugin tests failing
2024535 - hotplug disk missing OwnerReference
2024537 - WINDOWS_IMAGE_LINK does not refer to windows cloud image
2024547 - Detail page is breaking for namespace store, backing store and bucket class.
2024551 - KMS resources not getting created for IBM FlashSystem storage
2024586 - Special Resource Operator(SRO) - Empty image in BuildConfig when using RT kernel
2024613 - pod-identity-webhook starts without tls
2024617 - vSphere CSI tests constantly failing with Rollout of the monitoring stack failed and is degraded
2024665 - Bindable services are not shown on topology
2024731 - linuxptp container: unnecessary checking of interfaces
2024750 - i18n some remaining OLM items
2024804 - gcp-pd-csi-driver does not use trusted-ca-bundle when cluster proxy configured
2024826 - [RHOS/IPI] Masters are not joining a clusters when installing on OpenStack
2024841 - test Keycloak with latest tag
2024859 - Not able to deploy an existing image from private image registry using developer console
2024880 - Egress IP breaks when network policies are applied
2024900 - Operator upgrade kube-apiserver
2024932 - console throws "Unauthorized" error after logging out
2024933 - openshift-sync plugin does not sync existing secrets/configMaps on start up
2025093 - Installer does not honour diskformat specified in storage policy and defaults to zeroedthick
2025230 - ClusterAutoscalerUnschedulablePods should not be a warning
2025266 - CreateResource route has exact prop which need to be removed
2025301 - [e2e][automation] VM actions availability in different VM states
2025304 - overwrite storage section of the DV spec instead of the pvc section
2025431 - [RFE]Provide specific windows source link
2025458 - [IPI-AWS] cluster-baremetal-operator pod in a crashloop state after patching from 4.7.21 to 4.7.36
2025464 - [aws] openshift-install gather bootstrap collects logs for bootstrap and only one master node
2025467 - [OVN-K][ETP=local] Host to service backed by ovn pods doesn't work for ExternalTrafficPolicy=local
2025481 - Update VM Snapshots UI
2025488 - [DOCS] Update the doc for nmstate operator installation
2025592 - ODC 4.9 supports invalid devfiles only
2025765 - It should not try to load from storageProfile after unchecking "Apply optimized StorageProfile settings"
2025767 - VMs orphaned during machineset scaleup
2025770 - [e2e] non-priv seems looking for v2v-vmware configMap in ns "kubevirt-hyperconverged" while using customize wizard
2025788 - [IPI on azure]Pre-check on IPI Azure, should check VM Size's vCPUsAvailable instead of vCPUs for the sku.
2025821 - Make "Network Attachment Definitions" available to regular user
2025823 - The console nav bar ignores plugin separator in existing sections
2025830 - CentOS capitalizaion is wrong
2025837 - Warn users that the RHEL URL expire
2025884 - External CCM deploys openstack-cloud-controller-manager from quay.io/openshift/origin-*
2025903 - [UI] RoleBindings tab doesn't show correct rolebindings
2026104 - [sig-imageregistry][Feature:ImageAppend] Image append should create images by appending them [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2026178 - OpenShift Alerting Rules Style-Guide Compliance
2026209 - Updation of task is getting failed (tekton hub integration)
2026223 - Internal error occurred: failed calling webhook "ptpconfigvalidationwebhook.openshift.io"
2026321 - [UPI on Azure] Shall we remove allowedValue about VMSize in ARM templates
2026343 - [upgrade from 4.5 to 4.6] .status.connectionState.address of catsrc community-operators is not correct
2026352 - Kube-Scheduler revision-pruner fail during install of new cluster
2026374 - aws-pod-identity-webhook go.mod version out of sync with build environment
2026383 - Error when rendering custom Grafana dashboard through ConfigMap
2026387 - node tuning operator metrics endpoint serving old certificates after certificate rotation
2026396 - Cachito Issues: sriov-network-operator Image build failure
2026488 - openshift-controller-manager - delete event is repeating pathologically
2026489 - ThanosRuleRuleEvaluationLatencyHigh alerts when a big quantity of alerts defined.
2026560 - Cluster-version operator does not remove unrecognized volume mounts
2026699 - fixed a bug with missing metadata
2026813 - add Mellanox CX-6 Lx DeviceID 101f NIC support in SR-IOV Operator
2026898 - Description/details are missing for Local Storage Operator
2027132 - Use the specific icon for Fedora and CentOS template
2027238 - "Node Exporter / USE Method / Cluster" CPU utilization graph shows incorrect legend
2027272 - KubeMemoryOvercommit alert should be human readable
2027281 - [Azure] External-DNS cannot find the private DNS zone in the resource group
2027288 - Devfile samples can't be loaded after fixing it on Safari (redirect caching issue)
2027299 - The status of checkbox component is not revealed correctly in code
2027311 - K8s watch hooks do not work when fetching core resources
2027342 - Alert ClusterVersionOperatorDown is firing on OpenShift Container Platform after ca certificate rotation
2027363 - The azure-file-csi-driver and azure-file-csi-driver-operator don't use the downstream images
2027387 - [IBMCLOUD] Terraform ibmcloud-provider buffers entirely the qcow2 image causing spikes of 5GB of RAM during installation
2027498 - [IBMCloud] SG Name character length limitation
2027501 - [4.10] Bootimage bump tracker
2027524 - Delete Application doesn't delete Channels or Brokers
2027563 - e2e/add-flow-ci.feature fix accessibility violations
2027585 - CVO crashes when changing spec.upstream to a cincinnati graph which includes invalid conditional edges
2027629 - Gather ValidatingWebhookConfiguration and MutatingWebhookConfiguration resource definitions
2027685 - openshift-cluster-csi-drivers pods crashing on PSI
2027745 - default samplesRegistry prevents the creation of imagestreams when registrySources.allowedRegistries is enforced
2027824 - ovnkube-master CrashLoopBackoff: panic: Expected slice or struct but got string
2027917 - No settings in hostfirmwaresettings and schema objects for masters
2027927 - sandbox creation fails due to obsolete option in /etc/containers/storage.conf
2027982 - nncp stucked at ConfigurationProgressing
2028019 - Max pending serving CSRs allowed in cluster machine approver is not right for UPI clusters
2028024 - After deleting a SpecialResource, the node is still tagged although the driver is removed
2028030 - Panic detected in cluster-image-registry-operator pod
2028042 - Desktop viewer for Windows VM shows "no Service for the RDP (Remote Desktop Protocol) can be found"
2028054 - Cloud controller manager operator can't get leader lease when upgrading from 4.8 up to 4.9
2028106 - [RFE] Use dynamic plugin actions for kubevirt plugin
2028141 - Console tests doesn't pass on Node.js 15 and 16
2028160 - Remove i18nKey in network-policy-peer-selectors.tsx
2028162 - Add Sprint 210 translations
2028170 - Remove leading and trailing whitespace
2028174 - Add Sprint 210 part 2 translations
2028187 - Console build doesn't pass on Node.js 16 because node-sass doesn't support it
2028217 - Cluster-version operator does not default Deployment replicas to one
2028240 - Multiple CatalogSources causing higher CPU use than necessary
2028268 - Password parameters are listed in FirmwareSchema in spite that cannot and shouldn't be set in HostFirmwareSettings
2028325 - disableDrain should be set automatically on SNO
2028484 - AWS EBS CSI driver's livenessprobe does not respect operator's loglevel
2028531 - Missing netFilter to the list of parameters when platform is OpenStack
2028610 - Installer doesn't retry on GCP rate limiting
2028685 - LSO repeatedly reports errors while diskmaker-discovery pod is starting
2028695 - destroy cluster does not prune bootstrap instance profile
2028731 - The containerruntimeconfig controller has wrong assumption regarding the number of containerruntimeconfigs
2028802 - CRI-O panic due to invalid memory address or nil pointer dereference
2028816 - VLAN IDs not released on failures
2028881 - Override not working for the PerformanceProfile template
2028885 - Console should show an error context if it logs an error object
2028949 - Masthead dropdown item hover text color is incorrect
2028963 - Whereabouts should reconcile stranded IP addresses
2029034 - enabling ExternalCloudProvider leads to inoperative cluster
2029178 - Create VM with wizard - page is not displayed
2029181 - Missing CR from PGT
2029273 - wizard is not able to use if project field is "All Projects"
2029369 - Cypress tests github rate limit errors
2029371 - patch pipeline--worker nodes unexpectedly reboot during scale out
2029394 - missing empty text for hardware devices at wizard review
2029414 - Alibaba Disk snapshots with XFS filesystem cannot be used
2029416 - Alibaba Disk CSI driver does not use credentials provided by CCO / ccoctl
2029521 - EFS CSI driver cannot delete volumes under load
2029570 - Azure Stack Hub: CSI Driver does not use user-ca-bundle
2029579 - Clicking on an Application which has a Helm Release in it causes an error
2029644 - New resource FirmwareSchema - reset_required exists for Dell machines and doesn't for HPE
2029645 - Sync upstream 1.15.0 downstream
2029671 - VM action "pause" and "clone" should be disabled while VM disk is still being importing
2029742 - [ovn] Stale lr-policy-list and snat rules left for egressip
2029750 - cvo keep restart due to it fail to get feature gate value during the initial start stage
2029785 - CVO panic when an edge is included in both edges and conditionaledges
2029843 - Downstream ztp-site-generate-rhel8 4.10 container image missing content(/home/ztp)
2030003 - HFS CRD: Attempt to set Integer parameter to not-numeric string value - no error
2030029 - [4.10][goroutine]Namespace stuck terminating: Failed to delete all resource types, 1 remaining: unexpected items still remain in namespace
2030228 - Fix StorageSpec resources field to use correct API
2030229 - Mirroring status card reflect wrong data
2030240 - Hide overview page for non-privileged user
2030305 - Export App job do not completes
2030347 - kube-state-metrics exposes metrics about resource annotations
2030364 - Shared resource CSI driver monitoring is not setup correctly
2030488 - Numerous Azure CI jobs are Failing with Partially Rendered machinesets
2030534 - Node selector/tolerations rules are evaluated too early
2030539 - Prometheus is not highly available
2030556 - Don't display Description or Message fields for alerting rules if those annotations are missing
2030568 - Operator installation fails to parse operatorframework.io/initialization-resource annotation
2030574 - console service uses older "service.alpha.openshift.io" for the service serving certificates.
2030677 - BOND CNI: There is no option to configure MTU on a Bond interface
2030692 - NPE in PipelineJobListener.upsertWorkflowJob
2030801 - CVE-2021-44716 golang: net/http: limit growth of header canonicalization cache
2030806 - CVE-2021-44717 golang: syscall: don't close fd 0 on ForkExec error
2030847 - PerformanceProfile API version should be v2
2030961 - Customizing the OAuth server URL does not apply to upgraded cluster
2031006 - Application name input field is not autofocused when user selects "Create application"
2031012 - Services of type loadbalancer do not work if the traffic reaches the node from an interface different from br-ex
2031040 - Error screen when open topology sidebar for a Serverless / knative service which couldn't be started
2031049 - [vsphere upi] pod machine-config-operator cannot be started due to panic issue
2031057 - Topology sidebar for Knative services shows a small pod ring with "0 undefined" as tooltip
2031060 - Failing CSR Unit test due to expired test certificate
2031085 - ovs-vswitchd running more threads than expected
2031141 - Some pods not able to reach k8s api svc IP 198.223.0.1
2031228 - CVE-2021-43813 grafana: directory traversal vulnerability
2031502 - [RFE] New common templates crash the ui
2031685 - Duplicated forward upstreams should be removed from the dns operator
2031699 - The displayed ipv6 address of a dns upstream should be case sensitive
2031797 - [RFE] Order and text of Boot source type input are wrong
2031826 - CI tests needed to confirm driver-toolkit image contents
2031831 - OCP Console - Global CSS overrides affecting dynamic plugins
2031839 - Starting from Go 1.17 invalid certificates will render a cluster dysfunctional
2031858 - GCP beta-level Role (was: CCO occasionally down, reporting networksecurity.googleapis.com API as disabled)
2031875 - [RFE]: Provide online documentation for the SRO CRD (via oc explain)
2031926 - [ipv6dualstack] After SVC conversion from single stack only to RequireDualStack, cannot curl NodePort from the node itself
2032006 - openshift-gitops-application-controller-0 failed to schedule with sufficient node allocatable resource
2032111 - arm64 cluster, create project and deploy the example deployment, pod is CrashLoopBackOff due to the image is built on linux+amd64
2032141 - open the alertrule link in new tab, got empty page
2032179 - [PROXY] external dns pod cannot reach to cloud API in the cluster behind a proxy
2032296 - Cannot create machine with ephemeral disk on Azure
2032407 - UI will show the default openshift template wizard for HANA template
2032415 - Templates page - remove "support level" badge and add "support level" column which should not be hard coded
2032421 - [RFE] UI integration with automatic updated images
2032516 - Not able to import git repo with .devfile.yaml
2032521 - openshift-installer intermittent failure on AWS with "Error: Provider produced inconsistent result after apply" when creating the aws_vpc_dhcp_options_association resource
2032547 - hardware devices table have filter when table is empty
2032565 - Deploying compressed files with a MachineConfig resource degrades the MachineConfigPool
2032566 - Cluster-ingress-router does not support Azure Stack
2032573 - Adopting enforces deploy_kernel/ramdisk which does not work with deploy_iso
2032589 - DeploymentConfigs ignore resolve-names annotation
2032732 - Fix styling conflicts due to recent console-wide CSS changes
2032831 - Knative Services and Revisions are not shown when Service has no ownerReference
2032851 - Networking is "not available" in Virtualization Overview
2032926 - Machine API components should use K8s 1.23 dependencies
2032994 - AddressPool IP is not allocated to service external IP wtih aggregationLength 24
2032998 - Can not achieve 250 pods/node with OVNKubernetes in a multiple worker node cluster
2033013 - Project dropdown in user preferences page is broken
2033044 - Unable to change import strategy if devfile is invalid
2033098 - Conjunction in ProgressiveListFooter.tsx is not translatable
2033111 - IBM VPC operator library bump removed global CLI args
2033138 - "No model registered for Templates" shows on customize wizard
2033215 - Flaky CI: crud/other-routes.spec.ts fails sometimes with an cypress ace/a11y AssertionError: 1 accessibility violation was detected
2033239 - [IPI on Alibabacloud] 'openshift-install' gets the wrong region ('cn-hangzhou') selected
2033257 - unable to use configmap for helm charts
2033271 - [IPI on Alibabacloud] destroying cluster succeeded, but the resource group deletion wasn't triggered
2033290 - Product builds for console are failing
2033382 - MAPO is missing machine annotations
2033391 - csi-driver-shared-resource-operator sets unused CVO-manifest annotations
2033403 - Devfile catalog does not show provider information
2033404 - Cloud event schema is missing source type and resource field is using wrong value
2033407 - Secure route data is not pre-filled in edit flow form
2033422 - CNO not allowing LGW conversion from SGW in runtime
2033434 - Offer darwin/arm64 oc in clidownloads
2033489 - CCM operator failing on baremetal platform
2033518 - [aws-efs-csi-driver]Should not accept invalid FSType in sc for AWS EFS driver
2033524 - [IPI on Alibabacloud] interactive installer cannot list existing base domains
2033536 - [IPI on Alibabacloud] bootstrap complains invalid value for alibabaCloud.resourceGroupID when updating "cluster-infrastructure-02-config.yml" status, which leads to bootstrap failed and all master nodes NotReady
2033538 - Gather Cost Management Metrics Custom Resource
2033579 - SRO cannot update the special-resource-lifecycle ConfigMap if the data field is undefined
2033587 - Flaky CI test project-dashboard.scenario.ts: Resource Quotas Card was not found on project detail page
2033634 - list-style-type: disc is applied to the modal dropdowns
2033720 - Update samples in 4.10
2033728 - Bump OVS to 2.16.0-33
2033729 - remove runtime request timeout restriction for azure
2033745 - Cluster-version operator makes upstream update service / Cincinnati requests more frequently than intended
2033749 - Azure Stack Terraform fails without Local Provider
2033750 - Local volume should pull multi-arch image for kube-rbac-proxy
2033751 - Bump kubernetes to 1.23
2033752 - make verify fails due to missing yaml-patch
2033784 - set kube-apiserver degraded=true if webhook matches a virtual resource
2034004 - [e2e][automation] add tests for VM snapshot improvements
2034068 - [e2e][automation] Enhance tests for 4.10 downstream
2034087 - [OVN] EgressIP was assigned to the node which is not egress node anymore
2034097 - [OVN] After edit EgressIP object, the status is not correct
2034102 - [OVN] Recreate the deleted EgressIP object got InvalidEgressIP warning
2034129 - blank page returned when clicking 'Get started' button
2034144 - [OVN AWS] ovn-kube egress IP monitoring cannot detect the failure on ovn-k8s-mp0
2034153 - CNO does not verify MTU migration for OpenShiftSDN
2034155 - [OVN-K] [Multiple External Gateways] Per pod SNAT is disabled
2034170 - Use function.knative.dev for Knative Functions related labels
2034190 - unable to add new VirtIO disks to VMs
2034192 - Prometheus fails to insert reporting metrics when the sample limit is met
2034243 - regular user cant load template list
2034245 - installing a cluster on aws, gcp always fails with "Error: Incompatible provider version"
2034248 - GPU/Host device modal is too small
2034257 - regular user `Create VM` missing permissions alert
2034285 - [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources [Serial] [Suite:openshift/conformance/serial]
2034287 - do not block upgrades if we can't create storageclass in 4.10 in vsphere
2034300 - Du validator policy is NonCompliant after DU configuration completed
2034319 - Negation constraint is not validating packages
2034322 - CNO doesn't pick up settings required when ExternalControlPlane topology
2034350 - The CNO should implement the Whereabouts IP reconciliation cron job
2034362 - update description of disk interface
2034398 - The Whereabouts IPPools CRD should include the podref field
2034409 - Default CatalogSources should be pointing to 4.10 index images
2034410 - Metallb BGP, BFD: prometheus is not scraping the frr metrics
2034413 - cloud-network-config-controller fails to init with secret "cloud-credentials" not found in manual credential mode
2034460 - Summary: cloud-network-config-controller does not account for different environment
2034474 - Template's boot source is "Unknown source" before and after set enableCommonBootImageImport to true
2034477 - [OVN] Multiple EgressIP objects configured, EgressIPs weren't working properly
2034493 - Change cluster version operator log level
2034513 - [OVN] After update one EgressIP in EgressIP object, one internal IP lost from lr-policy-list
2034527 - IPI deployment fails 'timeout reached while inspecting the node' when provisioning network ipv6
2034528 - [IBM VPC] volumeBindingMode should be WaitForFirstConsumer
2034534 - Update ose-machine-api-provider-openstack images to be consistent with ART
2034537 - Update team
2034559 - KubeAPIErrorBudgetBurn firing outside recommended latency thresholds
2034563 - [Azure] create machine with wrong ephemeralStorageLocation value success
2034577 - Current OVN gateway mode should be reflected on node annotation as well
2034621 - context menu not popping up for application group
2034622 - Allow volume expansion by default in vsphere CSI storageclass 4.10
2034624 - Warn about unsupported CSI driver in vsphere operator
2034647 - missing volumes list in snapshot modal
2034648 - Rebase openshift-controller-manager to 1.23
2034650 - Rebase openshift/builder to 1.23
2034705 - vSphere: storage e2e tests logging configuration data
2034743 - EgressIP: assigning the same egress IP to a second EgressIP object after a ovnkube-master restart does not fail.
2034766 - Special Resource Operator(SRO) - no cert-manager pod created in dual stack environment
2034785 - ptpconfig with summary_interval cannot be applied
2034823 - RHEL9 should be starred in template list
2034838 - An external router can inject routes if no service is added
2034839 - Jenkins sync plugin does not synchronize ConfigMap having label role=jenkins-agent
2034879 - Lifecycle hook's name and owner shouldn't be allowed to be empty
2034881 - Cloud providers components should use K8s 1.23 dependencies
2034884 - ART cannot build the image because it tries to download controller-gen
2034889 - `oc adm prune deployments` does not work
2034898 - Regression in recently added Events feature
2034957 - update openshift-apiserver to kube 1.23.1
2035015 - ClusterLogForwarding CR remains stuck remediating forever
2035093 - openshift-cloud-network-config-controller never runs on Hypershift cluster
2035141 - [RFE] Show GPU/Host devices in template's details tab
2035146 - "kubevirt-plugin~PVC cannot be empty" shows on add-disk modal while adding existing PVC
2035167 - [cloud-network-config-controller] unable to deleted cloudprivateipconfig when deleting
2035199 - IPv6 support in mtu-migration-dispatcher.yaml
2035239 - e2e-metal-ipi-virtualmedia tests are permanently failing
2035250 - Peering with ebgp peer over multi-hops doesn't work
2035264 - [RFE] Provide a proper message for nonpriv user who not able to add PCI devices
2035315 - invalid test cases for AWS passthrough mode
2035318 - Upgrade management workflow needs to allow custom upgrade graph path for disconnected env
2035321 - Add Sprint 211 translations
2035326 - [ExternalCloudProvider] installation with additional network on workers fails
2035328 - Ccoctl does not ignore credentials request manifest marked for deletion
2035333 - Kuryr orphans ports on 504 errors from Neutron
2035348 - Fix two grammar issues in kubevirt-plugin.json strings
2035393 - oc set data --dry-run=server makes persistent changes to configmaps and secrets
2035409 - OLM E2E test depends on operator package that's no longer published
2035439 - SDN Automatic assignment EgressIP on GCP returned node IP adress not egressIP address
2035453 - [IPI on Alibabacloud] 2 worker machines stuck in Failed phase due to connection to 'ecs-cn-hangzhou.aliyuncs.com' timeout, although the specified region is 'us-east-1'
2035454 - [IPI on Alibabacloud] the OSS bucket created during installation for image registry is not deleted after destroying the cluster
2035467 - UI: Queried metrics can't be ordered on Oberve->Metrics page
2035494 - [SDN Migration]ovnkube-node pods CrashLoopBackOff after sdn migrated to ovn for RHEL workers
2035515 - [IBMCLOUD] allowVolumeExpansion should be true in storage class
2035602 - [e2e][automation] add tests for Virtualization Overview page cards
2035703 - Roles -> RoleBindings tab doesn't show RoleBindings correctly
2035704 - RoleBindings list page filter doesn't apply
2035705 - Azure 'Destroy cluster' get stuck when the cluster resource group is already not existing.
2035757 - [IPI on Alibabacloud] one master node turned NotReady which leads to installation failed
2035772 - AccessMode and VolumeMode is not reserved for customize wizard
2035847 - Two dashes in the Cronjob / Job pod name
2035859 - the output of opm render doesn't contain olm.constraint which is defined in dependencies.yaml
2035882 - [BIOS setting values] Create events for all invalid settings in spec
2035903 - One redundant capi-operator credential requests in "oc adm extract --credentials-requests"
2035910 - [UI] Manual approval options are missing after ODF 4.10 installation starts when Manual Update approval is chosen
2035927 - Cannot enable HighNodeUtilization scheduler profile
2035933 - volume mode and access mode are empty in customize wizard review tab
2035969 - "ip a " shows "Error: Peer netns reference is invalid" after create test pods
2035986 - Some pods under kube-scheduler/kube-controller-manager are using the deprecated annotation
2036006 - [BIOS setting values] Attempt to set Integer parameter results in preparation error
2036029 - New added cloud-network-config operator doesn't supported aws sts format credential
2036096 - [azure-file-csi-driver] there are no e2e tests for NFS backend
2036113 - cluster scaling new nodes ovs-configuration fails on all new nodes
2036567 - [csi-driver-nfs] Upstream merge: Bump k8s libraries to 1.23
2036569 - [cloud-provider-openstack] Upstream merge: Bump k8s libraries to 1.23
2036577 - OCP 4.10 nightly builds from 4.10.0-0.nightly-s390x-2021-12-18-034912 to 4.10.0-0.nightly-s390x-2022-01-11-233015 fail to upgrade from OCP 4.9.11 and 4.9.12 for network type OVNKubernetes for zVM hypervisor environments
2036622 - sdn-controller crashes when restarted while a previous egress IP assignment exists
2036717 - Valid AlertmanagerConfig custom resource with valid a mute time interval definition is rejected
2036826 - `oc adm prune deployments` can prune the RC/RS
2036827 - The ccoctl still accepts CredentialsRequests without ServiceAccounts on GCP platform
2036861 - kube-apiserver is degraded while enable multitenant
2036937 - Command line tools page shows wrong download ODO link
2036940 - oc registry login fails if the file is empty or stdout
2036951 - [cluster-csi-snapshot-controller-operator] proxy settings is being injected in container
2036989 - Route URL copy to clipboard button wraps to a separate line by itself
2036990 - ZTP "DU Done inform policy" never becomes compliant on multi-node clusters
2036993 - Machine API components should use Go lang version 1.17
2037036 - The tuned profile goes into degraded status and ksm.service is displayed in the log.
2037061 - aws and gcp CredentialsRequest manifests missing ServiceAccountNames list for cluster-api
2037073 - Alertmanager container fails to start because of startup probe never being successful
2037075 - Builds do not support CSI volumes
2037167 - Some log level in ibm-vpc-block-csi-controller are hard code
2037168 - IBM-specific Deployment manifest for package-server-manager should be excluded on non-IBM cluster-profiles
2037182 - PingSource badge color is not matched with knativeEventing color
2037203 - "Running VMs" card is too small in Virtualization Overview
2037209 - [IPI on Alibabacloud] worker nodes are put in the default resource group unexpectedly
2037237 - Add "This is a CD-ROM boot source" to customize wizard
2037241 - default TTL for noobaa cache buckets should be 0
2037246 - Cannot customize auto-update boot source
2037276 - [IBMCLOUD] vpc-node-label-updater may fail to label nodes appropriately
2037288 - Remove stale image reference
2037331 - Ensure the ccoctl behaviors are similar between aws and gcp on the existing resources
2037483 - Rbacs for Pods within the CBO should be more restrictive
2037484 - Bump dependencies to k8s 1.23
2037554 - Mismatched wave number error message should include the wave numbers that are in conflict
2037622 - [4.10-Alibaba CSI driver][Restore size for volumesnapshot/volumesnapshotcontent is showing as 0 in Snapshot feature for Alibaba platform]
2037635 - impossible to configure custom certs for default console route in ingress config
2037637 - configure custom certificate for default console route doesn't take effect for OCP >= 4.8
2037638 - Builds do not support CSI volumes as volume sources
2037664 - text formatting issue in Installed Operators list table
2037680 - [IPI on Alibabacloud] sometimes operator 'cloud-controller-manager' tells empty VERSION, due to conflicts on listening tcp :8080
2037689 - [IPI on Alibabacloud] sometimes operator 'cloud-controller-manager' tells empty VERSION, due to conflicts on listening tcp :8080
2037801 - Serverless installation is failing on CI jobs for e2e tests
2037813 - Metal Day 1 Networking - networkConfig Field Only Accepts String Format
2037856 - use lease for leader election
2037891 - 403 Forbidden error shows for all the graphs in each grafana dashboard after upgrade from 4.9 to 4.10
2037903 - Alibaba Cloud: delete-ram-user requires the credentials-requests
2037904 - upgrade operator deployment failed due to memory limit too low for manager container
2038021 - [4.10-Alibaba CSI driver][Default volumesnapshot class is not added/present after successful cluster installation]
2038034 - non-privileged user cannot see auto-update boot source
2038053 - Bump dependencies to k8s 1.23
2038088 - Remove ipa-downloader references
2038160 - The `default` project missed the annotation : openshift.io/node-selector: ""
2038166 - Starting from Go 1.17 invalid certificates will render a cluster non-functional
2038196 - must-gather is missing collecting some metal3 resources
2038240 - Error when configuring a file using permissions bigger than decimal 511 (octal 0777)
2038253 - Validator Policies are long lived
2038272 - Failures to build a PreprovisioningImage are not reported
2038384 - Azure Default Instance Types are Incorrect
2038389 - Failing test: [sig-arch] events should not repeat pathologically
2038412 - Import page calls the git file list unnecessarily twice from GitHub/GitLab/Bitbucket
2038465 - Upgrade chromedriver to 90.x to support Mac M1 chips
2038481 - kube-controller-manager-guard and openshift-kube-scheduler-guard pods being deleted and restarted on a cordoned node when drained
2038596 - Auto egressIP for OVN cluster on GCP: After egressIP object is deleted, egressIP still takes effect
2038663 - update kubevirt-plugin OWNERS
2038691 - [AUTH-8] Panic on user login when the user belongs to a group in the IdP side and the group already exists via "oc adm groups new"
2038705 - Update ptp reviewers
2038761 - Open Observe->Targets page, wait for a while, page become blank
2038768 - All the filters on the Observe->Targets page can't work
2038772 - Some monitors failed to display on Observe->Targets page
2038793 - [SDN EgressIP] After reboot egress node, the egressip was lost from egress node
2038827 - should add user containers in /etc/subuid and /etc/subgid to support run pods in user namespaces
2038832 - New templates for centos stream8 are missing registry suggestions in create vm wizard
2038840 - [SDN EgressIP]cloud-network-config-controller pod was CrashLoopBackOff after some operation
2038864 - E2E tests fail because multi-hop-net was not created
2038879 - All Builds are getting listed in DeploymentConfig under workloads on OpenShift Console
2038934 - CSI driver operators should use the trusted CA bundle when cluster proxy is configured
2038968 - Move feature gates from a carry patch to openshift/api
2039056 - Layout issue with breadcrumbs on API explorer page
2039057 - Kind column is not wide enough in API explorer page
2039064 - Bulk Import e2e test flaking at a high rate
2039065 - Diagnose and fix Bulk Import e2e test that was previously disabled
2039085 - Cloud credential operator configuration failing to apply in hypershift/ROKS clusters
2039099 - [OVN EgressIP GCP] After reboot egress node, egressip that was previously assigned got lost
2039109 - [FJ OCP4.10 Bug]: startironic.sh failed to pull the image of image-customization container when behind a proxy
2039119 - CVO hotloops on Service openshift-monitoring/cluster-monitoring-operator
2039170 - [upgrade]Error shown on registry operator "missing the cloud-provider-config configmap" after upgrade
2039227 - Improve image customization server parameter passing during installation
2039241 - Improve image customization server parameter passing during installation
2039244 - Helm Release revision history page crashes the UI
2039294 - SDN controller metrics cannot be consumed correctly by prometheus
2039311 - oc Does Not Describe Build CSI Volumes
2039315 - Helm release list page should only fetch secrets for deployed charts
2039321 - SDN controller metrics are not being consumed by prometheus
2039330 - Create NMState button doesn't work in OperatorHub web console
2039339 - cluster-ingress-operator should report Unupgradeable if user has modified the aws resources annotations
2039345 - CNO does not verify the minimum MTU value for IPv6/dual-stack clusters.
2039359 - `oc adm prune deployments` can't prune the RS where the associated Deployment no longer exists
2039382 - gather_metallb_logs does not have execution permission
2039406 - logout from rest session after vsphere operator sync is finished
2039408 - Add GCP region northamerica-northeast2 to allowed regions
2039414 - Cannot see the weights increased for NodeAffinity, InterPodAffinity, TaintandToleration
2039425 - No need to set KlusterletAddonConfig CR applicationManager->enabled: true in RAN ztp deployment
2039491 - oc - git:// protocol used in unit tests
2039516 - Bump OVN to ovn21.12-21.12.0-25
2039529 - Project Dashboard Resource Quotas Card empty state test flaking at a high rate
2039534 - Diagnose and fix Project Dashboard Resource Quotas Card test that was previously disabled
2039541 - Resolv-prepender script duplicating entries
2039586 - [e2e] update centos8 to centos stream8
2039618 - VM created from SAP HANA template leads to 404 page if leave one network parameter empty
2039619 - [AWS] In tree provisioner storageclass aws disk type should contain 'gp3' and csi provisioner storageclass default aws disk type should be 'gp3'
2039670 - Create PDBs for control plane components
2039678 - Page goes blank when create image pull secret
2039689 - [IPI on Alibabacloud] Pay-by-specification NAT is no longer supported
2039743 - React missing key warning when open operator hub detail page (and maybe others as well)
2039756 - React missing key warning when open KnativeServing details
2039770 - Observe dashboard doesn't react on time-range changes after browser reload when perspective is changed in another tab
2039776 - Observe dashboard shows nothing if the URL links to an non existing dashboard
2039781 - [GSS] OBC is not visible by admin of a Project on Console
2039798 - Contextual binding with Operator backed service creates visual connector instead of Service binding connector
2039868 - Insights Advisor
widget is not in the disabled state when the Insights Operator is disabled\n2039880 - Log level too low for control plane metrics\n2039919 - Add E2E test for router compression feature\n2039981 - ZTP for standard clusters installs stalld on master nodes\n2040132 - Flag --port has been deprecated, This flag has no effect now and will be removed in v1.24. You can use --secure-port instead\n2040136 - external-dns-operator pod keeps restarting and reports error: timed out waiting for cache to be synced\n2040143 - [IPI on Alibabacloud] suggest to remove region \"cn-nanjing\" or provide better error message\n2040150 - Update ConfigMap keys for IBM HPCS\n2040160 - [IPI on Alibabacloud] installation fails when region does not support pay-by-bandwidth\n2040285 - Bump build-machinery-go for console-operator to pickup change in yaml-patch repository\n2040357 - bump OVN to ovn-2021-21.12.0-11.el8fdp\n2040376 - \"unknown instance type\" error for supported m6i.xlarge instance\n2040394 - Controller: enqueue the failed configmap till services update\n2040467 - Cannot build ztp-site-generator container image\n2040504 - Change AWS EBS GP3 IOPS in MachineSet doesn\u0027t take affect in OpenShift 4\n2040521 - RouterCertsDegraded certificate could not validate route hostname v4-0-config-system-custom-router-certs.apps\n2040535 - Auto-update boot source is not available in customize wizard\n2040540 - ovs hardware offload: ovsargs format error when adding vf netdev name\n2040603 - rhel worker scaleup playbook failed because missing some dependency of podman\n2040616 - rolebindings page doesn\u0027t load for normal users\n2040620 - [MAPO] Error pulling MAPO image on installation\n2040653 - Topology sidebar warns that another component is updated while rendering\n2040655 - User settings update fails when selecting application in topology sidebar\n2040661 - Different react warnings about updating state on unmounted components when leaving topology\n2040670 - Permafailing CI job: 
periodic-ci-openshift-release-master-nightly-4.10-e2e-gcp-libvirt-cert-rotation\n2040671 - [Feature:IPv6DualStack] most tests are failing in dualstack ipi\n2040694 - Three upstream HTTPClientConfig struct fields missing in the operator\n2040705 - Du policy for standard cluster runs the PTP daemon on masters and workers\n2040710 - cluster-baremetal-operator cannot update BMC subscription CR\n2040741 - Add CI test(s) to ensure that metal3 components are deployed in vSphere, OpenStack and None platforms\n2040782 - Import YAML page blocks input with more then one generateName attribute\n2040783 - The Import from YAML summary page doesn\u0027t show the resource name if created via generateName attribute\n2040791 - Default PGT policies must be \u0027inform\u0027 to integrate with the Lifecycle Operator\n2040793 - Fix snapshot e2e failures\n2040880 - do not block upgrades if we can\u0027t connect to vcenter\n2041087 - MetalLB: MetalLB CR is not upgraded automatically from 4.9 to 4.10\n2041093 - autounattend.xml missing\n2041204 - link to templates in virtualization-cluster-overview inventory card is to all templates\n2041319 - [IPI on Alibabacloud] installation in region \"cn-shanghai\" failed, due to \"Resource alicloud_vswitch CreateVSwitch Failed...InvalidCidrBlock.Overlapped\"\n2041326 - Should bump cluster-kube-descheduler-operator to kubernetes version V1.23\n2041329 - aws and gcp CredentialsRequest manifests missing ServiceAccountNames list for cloud-network-config-controller\n2041361 - [IPI on Alibabacloud] Disable session persistence and removebBandwidth peak of listener\n2041441 - Provision volume with size 3000Gi even if sizeRange: \u0027[10-2000]GiB\u0027 in storageclass on IBM cloud\n2041466 - Kubedescheduler version is missing from the operator logs\n2041475 - React components should have a (mostly) unique name in react dev tools to simplify code analyses\n2041483 - MetallB: quay.io/openshift/origin-kube-rbac-proxy:4.10 deploy Metallb CR is missing 
(controller and speaker pods)\n2041492 - Spacing between resources in inventory card is too small\n2041509 - GCP Cloud provider components should use K8s 1.23 dependencies\n2041510 - cluster-baremetal-operator doesn\u0027t run baremetal-operator\u0027s subscription webhook\n2041541 - audit: ManagedFields are dropped using API not annotation\n2041546 - ovnkube: set election timer at RAFT cluster creation time\n2041554 - use lease for leader election\n2041581 - KubeDescheduler operator log shows \"Use of insecure cipher detected\"\n2041583 - etcd and api server cpu mask interferes with a guaranteed workload\n2041598 - Including CA bundle in Azure Stack cloud config causes MCO failure\n2041605 - Dynamic Plugins: discrepancy in proxy alias documentation/implementation\n2041620 - bundle CSV alm-examples does not parse\n2041641 - Fix inotify leak and kubelet retaining memory\n2041671 - Delete templates leads to 404 page\n2041694 - [IPI on Alibabacloud] installation fails when region does not support the cloud_essd disk category\n2041734 - ovs hwol: VFs are unbind when switchdev mode is enabled\n2041750 - [IPI on Alibabacloud] trying \"create install-config\" with region \"cn-wulanchabu (China (Ulanqab))\" (or \"ap-southeast-6 (Philippines (Manila))\", \"cn-guangzhou (China (Guangzhou))\") failed due to invalid endpoint\n2041763 - The Observe \u003e Alerting pages no longer have their default sort order applied\n2041830 - CI: ovn-kubernetes-master-e2e-aws-ovn-windows is broken\n2041854 - Communities / Local prefs are applied to all the services regardless of the pool, and only one community is applied\n2041882 - cloud-network-config operator can\u0027t work normal on GCP workload identity cluster\n2041888 - Intermittent incorrect build to run correlation, leading to run status updates applied to wrong build, builds stuck in non-terminal phases\n2041926 - [IPI on Alibabacloud] Installer ignores public zone when it does not exist\n2041971 - [vsphere] Reconciliation of 
mutating webhooks didn\u0027t happen\n2041989 - CredentialsRequest manifests being installed for ibm-cloud-managed profile\n2041999 - [PROXY] external dns pod cannot recognize custom proxy CA\n2042001 - unexpectedly found multiple load balancers\n2042029 - kubedescheduler fails to install completely\n2042036 - [IBMCLOUD] \"openshift-install explain installconfig.platform.ibmcloud\" contains not yet supported custom vpc parameters\n2042049 - Seeing warning related to unrecognized feature gate in kubescheduler \u0026 KCM logs\n2042059 - update discovery burst to reflect lots of CRDs on openshift clusters\n2042069 - Revert toolbox to rhcos-toolbox\n2042169 - Can not delete egressnetworkpolicy in Foreground propagation\n2042181 - MetalLB: User should not be allowed add same bgp advertisement twice in BGP address pool\n2042265 - [IBM]\"--scale-down-utilization-threshold\" doesn\u0027t work on IBMCloud\n2042274 - Storage API should be used when creating a PVC\n2042315 - Baremetal IPI deployment with IPv6 control plane and disabled provisioning network fails as the nodes do not pass introspection\n2042366 - Lifecycle hooks should be independently managed\n2042370 - [IPI on Alibabacloud] installer panics when the zone does not have an enhanced NAT gateway\n2042382 - [e2e][automation] CI takes more then 2 hours to run\n2042395 - Add prerequisites for active health checks test\n2042438 - Missing rpms in openstack-installer image\n2042466 - Selection does not happen when switching from Topology Graph to List View\n2042493 - No way to verify if IPs with leading zeros are still valid in the apiserver\n2042567 - insufficient info on CodeReady Containers configuration\n2042600 - Alone, the io.kubernetes.cri-o.Devices option poses a security risk\n2042619 - Overview page of the console is broken for hypershift clusters\n2042655 - [IPI on Alibabacloud] cluster becomes unusable if there is only one kube-apiserver pod running\n2042711 - [IBMCloud] Machine Deletion Hook cannot work on 
IBMCloud\n2042715 - [AliCloud] Machine Deletion Hook cannot work on AliCloud\n2042770 - [IPI on Alibabacloud] with vpcID \u0026 vswitchIDs specified, the installer would still try creating NAT gateway unexpectedly\n2042829 - Topology performance: HPA was fetched for each Deployment (Pod Ring)\n2042851 - Create template from SAP HANA template flow - VM is created instead of a new template\n2042906 - Edit machineset with same machine deletion hook name succeed\n2042960 - azure-file CI fails with \"gid(0) in storageClass and pod fsgroup(1000) are not equal\"\n2043003 - [IPI on Alibabacloud] \u0027destroy cluster\u0027 of a failed installation (bug2041694) stuck after \u0027stage=Nat gateways\u0027\n2043042 - [Serial] [sig-auth][Feature:OAuthServer] [RequestHeaders] [IdP] test RequestHeaders IdP [Suite:openshift/conformance/serial]\n2043043 - Cluster Autoscaler should use K8s 1.23 dependencies\n2043064 - Topology performance: Unnecessary rerenderings in topology nodes (unchanged mobx props)\n2043078 - Favorite system projects not visible in the project selector after toggling \"Show default projects\". \n2043117 - Recommended operators links are erroneously treated as external\n2043130 - Update CSI sidecars to the latest release for 4.10\n2043234 - Missing validation when creating several BGPPeers with the same peerAddress\n2043240 - Sync openshift/descheduler with sigs.k8s.io/descheduler\n2043254 - crio does not bind the security profiles directory\n2043296 - Ignition fails when reusing existing statically-keyed LUKS volume\n2043297 - [4.10] Bootimage bump tracker\n2043316 - RHCOS VM fails to boot on Nutanix AOS\n2043446 - Rebase aws-efs-utils to the latest upstream version. \n2043556 - Add proper ci-operator configuration to ironic and ironic-agent images\n2043577 - DPU network operator\n2043651 - Fix bug with exp. 
backoff working correcly when setting nextCheck in vsphere operator\n2043675 - Too many machines deleted by cluster autoscaler when scaling down\n2043683 - Revert bug 2039344 Ignoring IPv6 addresses against etcd cert validation\n2043709 - Logging flags no longer being bound to command line\n2043721 - Installer bootstrap hosts using outdated kubelet containing bugs\n2043731 - [IBMCloud] terraform outputs missing for ibmcloud bootstrap and worker ips for must-gather\n2043759 - Bump cluster-ingress-operator to k8s.io/api 1.23\n2043780 - Bump router to k8s.io/api 1.23\n2043787 - Bump cluster-dns-operator to k8s.io/api 1.23\n2043801 - Bump CoreDNS to k8s.io/api 1.23\n2043802 - EgressIP stopped working after single egressIP for a netnamespace is switched to the other node of HA pair after the first egress node is shutdown\n2043961 - [OVN-K] If pod creation fails, retry doesn\u0027t work as expected. \n2044201 - Templates golden image parameters names should be supported\n2044244 - Builds are failing after upgrading the cluster with builder image [jboss-webserver-5/jws56-openjdk8-openshift-rhel8]\n2044248 - [IBMCloud][vpc.block.csi.ibm.io]Cluster common user use the storageclass without parameter \u201ccsi.storage.k8s.io/fstype\u201d create pvc,pod successfully but write data to the pod\u0027s volume failed of \"Permission denied\"\n2044303 - [ovn][cloud-network-config-controller] cloudprivateipconfigs ips were left after deleting egressip objects\n2044347 - Bump to kubernetes 1.23.3\n2044481 - collect sharedresource cluster scoped instances with must-gather\n2044496 - Unable to create hardware events subscription - failed to add finalizers\n2044628 - CVE-2022-21673 grafana: Forward OAuth Identity Token can allow users to access some data sources\n2044680 - Additional libovsdb performance and resource consumption fixes\n2044704 - Observe \u003e Alerting pages should not show runbook links in 4.10\n2044717 - [e2e] improve tests for upstream test environment\n2044724 - 
Remove namespace column on VM list page when a project is selected\n2044745 - Upgrading cluster from 4.9 to 4.10 on Azure (ARO) causes the cloud-network-config-controller pod to CrashLoopBackOff\n2044808 - machine-config-daemon-pull.service: use `cp` instead of `cat` when extracting MCD in OKD\n2045024 - CustomNoUpgrade alerts should be ignored\n2045112 - vsphere-problem-detector has missing rbac rules for leases\n2045199 - SnapShot with Disk Hot-plug hangs\n2045561 - Cluster Autoscaler should use the same default Group value as Cluster API\n2045591 - Reconciliation of aws pod identity mutating webhook did not happen\n2045849 - Add Sprint 212 translations\n2045866 - MCO Operator pod spam \"Error creating event\" warning messages in 4.10\n2045878 - Sync upstream 1.16.0 downstream; includes hybrid helm plugin\n2045916 - [IBMCloud] Default machine profile in installer is unreliable\n2045927 - [FJ OCP4.10 Bug]: Podman failed to pull the IPA image due to the loss of proxy environment\n2046025 - [IPI on Alibabacloud] pre-configured alicloud DNS private zone is deleted after destroying cluster, please clarify\n2046137 - oc output for unknown commands is not human readable\n2046296 - When creating multiple consecutive egressIPs on GCP not all of them get assigned to the instance\n2046297 - Bump DB reconnect timeout\n2046517 - In Notification drawer, the \"Recommendations\" header shows when there isn\u0027t any recommendations\n2046597 - Observe \u003e Targets page may show the wrong service monitor is multiple monitors have the same namespace \u0026 label selectors\n2046626 - Allow setting custom metrics for Ansible-based Operators\n2046683 - [AliCloud]\"--scale-down-utilization-threshold\" doesn\u0027t work on AliCloud\n2047025 - Installation fails because of Alibaba CSI driver operator is degraded\n2047190 - Bump Alibaba CSI driver for 4.10\n2047238 - When using communities and localpreferences together, only localpreference gets applied\n2047255 - alibaba: 
resourceGroupID not found\n2047258 - [aws-usgov] fatal error occurred if AMI is not provided for AWS GovCloud regions\n2047317 - Update HELM OWNERS files under Dev Console\n2047455 - [IBM Cloud] Update custom image os type\n2047496 - Add image digest feature\n2047779 - do not degrade cluster if storagepolicy creation fails\n2047927 - \u0027oc get project\u0027 caused \u0027Observed a panic: cannot deep copy core.NamespacePhase\u0027 when AllRequestBodies is used\n2047929 - use lease for leader election\n2047975 - [sig-network][Feature:Router] The HAProxy router should override the route host for overridden domains with a custom value [Skipped:Disconnected] [Suite:openshift/conformance/parallel]\n2048046 - New route annotation to show another URL or hide topology URL decorator doesn\u0027t work for Knative Services\n2048048 - Application tab in User Preferences dropdown menus are too wide. \n2048050 - Topology list view items are not highlighted on keyboard navigation\n2048117 - [IBM]Shouldn\u0027t change status.storage.bucket and status.storage.resourceKeyCRN when update sepc.stroage,ibmcos with invalid value\n2048413 - Bond CNI: Failed to attach Bond NAD to pod\n2048443 - Image registry operator panics when finalizes config deletion\n2048478 - [alicloud] CCM deploys alibaba-cloud-controller-manager from quay.io/openshift/origin-*\n2048484 - SNO: cluster-policy-controller failed to start due to missing serving-cert/tls.crt\n2048598 - Web terminal view is broken\n2048836 - ovs-configure mis-detecting the ipv6 status on IPv4 only cluster causing Deployment failure\n2048891 - Topology page is crashed\n2049003 - 4.10: [IBMCloud] ibm-vpc-block-csi-node does not specify an update strategy, only resource requests, or priority class\n2049043 - Cannot create VM from template\n2049156 - \u0027oc get project\u0027 caused \u0027Observed a panic: cannot deep copy core.NamespacePhase\u0027 when AllRequestBodies is used\n2049886 - Placeholder bug for OCP 4.10.0 metadata 
release\n2049890 - Warning annotation for pods with cpu requests or limits on single-node OpenShift cluster without workload partitioning\n2050189 - [aws-efs-csi-driver] Merge upstream changes since v1.3.2\n2050190 - [aws-ebs-csi-driver] Merge upstream changes since v1.2.0\n2050227 - Installation on PSI fails with: \u0027openstack platform does not have the required standard-attr-tag network extension\u0027\n2050247 - Failing test in periodics: [sig-network] Services should respect internalTrafficPolicy=Local Pod and Node, to Pod (hostNetwork: true) [Feature:ServiceInternalTrafficPolicy] [Skipped:Network/OVNKubernetes] [Suite:openshift/conformance/parallel] [Suite:k8s]\n2050250 - Install fails to bootstrap, complaining about DefragControllerDegraded and sad members\n2050310 - ContainerCreateError when trying to launch large (\u003e500) numbers of pods across nodes\n2050370 - alert data for burn budget needs to be updated to prevent regression\n2050393 - ZTP missing support for local image registry and custom machine config\n2050557 - Can not push images to image-registry when enabling KMS encryption in AlibabaCloud\n2050737 - Remove metrics and events for master port offsets\n2050801 - Vsphere upi tries to access vsphere during manifests generation phase\n2050883 - Logger object in LSO does not log source location accurately\n2051692 - co/image-registry is degrade because ImagePrunerDegraded: Job has reached the specified backoff limit\n2052062 - Whereabouts should implement client-go 1.22+\n2052125 - [4.10] Crio appears to be coredumping in some scenarios\n2052210 - [aws-c2s] kube-apiserver crashloops due to missing cloud config\n2052339 - Failing webhooks will block an upgrade to 4.10 mid-way through the upgrade. 
\n2052458 - [IBM Cloud] ibm-vpc-block-csi-controller does not specify an update strategy, priority class, or only resource requests\n2052598 - kube-scheduler should use configmap lease\n2052599 - kube-controller-manger should use configmap lease\n2052600 - Failed to scaleup RHEL machine against OVN cluster due to jq tool is required by configure-ovs.sh\n2052609 - [vSphere CSI driver Operator] RWX volumes counts metrics `vsphere_rwx_volumes_total` not valid\n2052611 - MetalLB: BGPPeer object does not have ability to set ebgpMultiHop\n2052612 - MetalLB: Webhook Validation: Two BGPPeers instances can have different router ID set. \n2052644 - Infinite OAuth redirect loop post-upgrade to 4.10.0-rc.1\n2052666 - [4.10.z] change gitmodules to rhcos-4.10 branch\n2052756 - [4.10] PVs are not being cleaned up after PVC deletion\n2053175 - oc adm catalog mirror throws \u0027missing signature key\u0027 error when using file://local/index\n2053218 - ImagePull fails with error \"unable to pull manifest from example.com/busy.box:v5 invalid reference format\"\n2053252 - Sidepanel for Connectors/workloads in topology shows invalid tabs\n2053268 - inability to detect static lifecycle failure\n2053314 - requestheader IDP test doesn\u0027t wait for cleanup, causing high failure rates\n2053323 - OpenShift-Ansible BYOH Unit Tests are Broken\n2053339 - Remove dev preview badge from IBM FlashSystem deployment windows\n2053751 - ztp-site-generate container is missing convenience entrypoint\n2053945 - [4.10] Failed to apply sriov policy on intel nics\n2054109 - Missing \"app\" label\n2054154 - RoleBinding in project without subject is causing \"Project access\" page to fail\n2054244 - Latest pipeline run should be listed on the top of the pipeline run list\n2054288 - console-master-e2e-gcp-console is broken\n2054562 - DPU network operator 4.10 branch need to sync with master\n2054897 - Unable to deploy hw-event-proxy operator\n2055193 - e2e-metal-ipi-serial-ovn-ipv6 is failing 
frequently\n2055358 - Summary Interval Hardcoded in PTP Operator if Set in the Global Body Instead of Command Line\n2055371 - Remove Check which enforces summary_interval must match logSyncInterval\n2055689 - [ibm]Operator storage PROGRESSING and DEGRADED is true during fresh install for ocp4.11\n2055894 - CCO mint mode will not work for Azure after sunsetting of Active Directory Graph API\n2056441 - AWS EFS CSI driver should use the trusted CA bundle when cluster proxy is configured\n2056479 - ovirt-csi-driver-node pods are crashing intermittently\n2056572 - reconcilePrecaching error: cannot list resource \"clusterserviceversions\" in API group \"operators.coreos.com\" at the cluster scope\"\n2056629 - [4.10] EFS CSI driver can\u0027t unmount volumes with \"wait: no child processes\"\n2056878 - (dummy bug) ovn-kubernetes ExternalTrafficPolicy still SNATs\n2056928 - Ingresscontroller LB scope change behaviour differs for different values of aws-load-balancer-internal annotation\n2056948 - post 1.23 rebase: regression in service-load balancer reliability\n2057438 - Service Level Agreement (SLA) always show \u0027Unknown\u0027\n2057721 - Fix Proxy support in RHACM 2.4.2\n2057724 - Image creation fails when NMstateConfig CR is empty\n2058641 - [4.10] Pod density test causing problems when using kube-burner\n2059761 - 4.9.23-s390x-machine-os-content manifest invalid when mirroring content for disconnected install\n2060610 - Broken access to public images: Unable to connect to the server: no basic auth credentials\n2060956 - service domain can\u0027t be resolved when networkpolicy is used in OCP 4.10-rc\n\n5. 
References:\n\nhttps://access.redhat.com/security/cve/CVE-2014-3577\nhttps://access.redhat.com/security/cve/CVE-2016-10228\nhttps://access.redhat.com/security/cve/CVE-2017-14502\nhttps://access.redhat.com/security/cve/CVE-2018-20843\nhttps://access.redhat.com/security/cve/CVE-2018-1000858\nhttps://access.redhat.com/security/cve/CVE-2019-8625\nhttps://access.redhat.com/security/cve/CVE-2019-8710\nhttps://access.redhat.com/security/cve/CVE-2019-8720\nhttps://access.redhat.com/security/cve/CVE-2019-8743\nhttps://access.redhat.com/security/cve/CVE-2019-8764\nhttps://access.redhat.com/security/cve/CVE-2019-8766\nhttps://access.redhat.com/security/cve/CVE-2019-8769\nhttps://access.redhat.com/security/cve/CVE-2019-8771\nhttps://access.redhat.com/security/cve/CVE-2019-8782\nhttps://access.redhat.com/security/cve/CVE-2019-8783\nhttps://access.redhat.com/security/cve/CVE-2019-8808\nhttps://access.redhat.com/security/cve/CVE-2019-8811\nhttps://access.redhat.com/security/cve/CVE-2019-8812\nhttps://access.redhat.com/security/cve/CVE-2019-8813\nhttps://access.redhat.com/security/cve/CVE-2019-8814\nhttps://access.redhat.com/security/cve/CVE-2019-8815\nhttps://access.redhat.com/security/cve/CVE-2019-8816\nhttps://access.redhat.com/security/cve/CVE-2019-8819\nhttps://access.redhat.com/security/cve/CVE-2019-8820\nhttps://access.redhat.com/security/cve/CVE-2019-8823\nhttps://access.redhat.com/security/cve/CVE-2019-8835\nhttps://access.redhat.com/security/cve/CVE-2019-8844\nhttps://access.redhat.com/security/cve/CVE-2019-8846\nhttps://access.redhat.com/security/cve/CVE-2019-9169\nhttps://access.redhat.com/security/cve/CVE-2019-13050\nhttps://access.redhat.com/security/cve/CVE-2019-13627\nhttps://access.redhat.com/security/cve/CVE-2019-14889\nhttps://access.redhat.com/security/cve/CVE-2019-15903\nhttps://access.redhat.com/security/cve/CVE-2019-19906\nhttps://access.redhat.com/security/cve/CVE-2019-20454\nhttps://access.redhat.com/security/cve/CVE-2019-20807\nhttps://access.redhat.com/se
curity/cve/CVE-2019-25013\nhttps://access.redhat.com/security/cve/CVE-2020-1730\nhttps://access.redhat.com/security/cve/CVE-2020-3862\nhttps://access.redhat.com/security/cve/CVE-2020-3864\nhttps://access.redhat.com/security/cve/CVE-2020-3865\nhttps://access.redhat.com/security/cve/CVE-2020-3867\nhttps://access.redhat.com/security/cve/CVE-2020-3868\nhttps://access.redhat.com/security/cve/CVE-2020-3885\nhttps://access.redhat.com/security/cve/CVE-2020-3894\nhttps://access.redhat.com/security/cve/CVE-2020-3895\nhttps://access.redhat.com/security/cve/CVE-2020-3897\nhttps://access.redhat.com/security/cve/CVE-2020-3899\nhttps://access.redhat.com/security/cve/CVE-2020-3900\nhttps://access.redhat.com/security/cve/CVE-2020-3901\nhttps://access.redhat.com/security/cve/CVE-2020-3902\nhttps://access.redhat.com/security/cve/CVE-2020-8927\nhttps://access.redhat.com/security/cve/CVE-2020-9802\nhttps://access.redhat.com/security/cve/CVE-2020-9803\nhttps://access.redhat.com/security/cve/CVE-2020-9805\nhttps://access.redhat.com/security/cve/CVE-2020-9806\nhttps://access.redhat.com/security/cve/CVE-2020-9807\nhttps://access.redhat.com/security/cve/CVE-2020-9843\nhttps://access.redhat.com/security/cve/CVE-2020-9850\nhttps://access.redhat.com/security/cve/CVE-2020-9862\nhttps://access.redhat.com/security/cve/CVE-2020-9893\nhttps://access.redhat.com/security/cve/CVE-2020-9894\nhttps://access.redhat.com/security/cve/CVE-2020-9895\nhttps://access.redhat.com/security/cve/CVE-2020-9915\nhttps://access.redhat.com/security/cve/CVE-2020-9925\nhttps://access.redhat.com/security/cve/CVE-2020-9952\nhttps://access.redhat.com/security/cve/CVE-2020-10018\nhttps://access.redhat.com/security/cve/CVE-2020-11793\nhttps://access.redhat.com/security/cve/CVE-2020-13434\nhttps://access.redhat.com/security/cve/CVE-2020-14391\nhttps://access.redhat.com/security/cve/CVE-2020-15358\nhttps://access.redhat.com/security/cve/CVE-2020-15503\nhttps://access.redhat.com/security/cve/CVE-2020-25660\nhttps://access.redhat.
com/security/cve/CVE-2020-25677\nhttps://access.redhat.com/security/cve/CVE-2020-27618\nhttps://access.redhat.com/security/cve/CVE-2020-27781\nhttps://access.redhat.com/security/cve/CVE-2020-29361\nhttps://access.redhat.com/security/cve/CVE-2020-29362\nhttps://access.redhat.com/security/cve/CVE-2020-29363\nhttps://access.redhat.com/security/cve/CVE-2021-3121\nhttps://access.redhat.com/security/cve/CVE-2021-3326\nhttps://access.redhat.com/security/cve/CVE-2021-3449\nhttps://access.redhat.com/security/cve/CVE-2021-3450\nhttps://access.redhat.com/security/cve/CVE-2021-3516\nhttps://access.redhat.com/security/cve/CVE-2021-3517\nhttps://access.redhat.com/security/cve/CVE-2021-3518\nhttps://access.redhat.com/security/cve/CVE-2021-3520\nhttps://access.redhat.com/security/cve/CVE-2021-3521\nhttps://access.redhat.com/security/cve/CVE-2021-3537\nhttps://access.redhat.com/security/cve/CVE-2021-3541\nhttps://access.redhat.com/security/cve/CVE-2021-3733\nhttps://access.redhat.com/security/cve/CVE-2021-3749\nhttps://access.redhat.com/security/cve/CVE-2021-20305\nhttps://access.redhat.com/security/cve/CVE-2021-21684\nhttps://access.redhat.com/security/cve/CVE-2021-22946\nhttps://access.redhat.com/security/cve/CVE-2021-22947\nhttps://access.redhat.com/security/cve/CVE-2021-25215\nhttps://access.redhat.com/security/cve/CVE-2021-27218\nhttps://access.redhat.com/security/cve/CVE-2021-30666\nhttps://access.redhat.com/security/cve/CVE-2021-30761\nhttps://access.redhat.com/security/cve/CVE-2021-30762\nhttps://access.redhat.com/security/cve/CVE-2021-33928\nhttps://access.redhat.com/security/cve/CVE-2021-33929\nhttps://access.redhat.com/security/cve/CVE-2021-33930\nhttps://access.redhat.com/security/cve/CVE-2021-33938\nhttps://access.redhat.com/security/cve/CVE-2021-36222\nhttps://access.redhat.com/security/cve/CVE-2021-37750\nhttps://access.redhat.com/security/cve/CVE-2021-39226\nhttps://access.redhat.com/security/cve/CVE-2021-41190\nhttps://access.redhat.com/security/cve/CVE-2021-43813\n
https://access.redhat.com/security/cve/CVE-2021-44716\nhttps://access.redhat.com/security/cve/CVE-2021-44717\nhttps://access.redhat.com/security/cve/CVE-2022-0532\nhttps://access.redhat.com/security/cve/CVE-2022-21673\nhttps://access.redhat.com/security/cve/CVE-2022-24407\nhttps://access.redhat.com/security/updates/classification/#moderate\n\n6. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. \n. Solution:\n\nBefore applying the update, back up your existing installation, including\nall applications, configuration files, databases and database settings, and\nso on. \n\nThe References section of this erratum contains a download link for the\nupdate. You must be logged in to download the update. 
Description:\n\nRed Hat Advanced Cluster Management for Kubernetes 2.1.6 images\n\nRed Hat Advanced Cluster Management for Kubernetes provides the\ncapabilities to address common challenges that administrators and site\nreliability engineers face as they work across a range of public and\nprivate cloud environments. Clusters and applications are all visible and\nmanaged from a single console\u2014with security policy built in. \n\nBug fixes:\n\n* RHACM 2.1.6 images (BZ#1940581)\n\n* When generating the import cluster string, it can include unescaped\ncharacters (BZ#1934184)\n\n3. Bugs fixed (https://bugzilla.redhat.com/):\n\n1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash\n1929338 - CVE-2020-35149 mquery: Code injection via merge or clone operation\n1934184 - When generating the import cluster string, it can include unescaped characters\n1940581 - RHACM 2.1.6 images\n\n5. Relevant releases/architectures:\n\nRed Hat JBoss Core Services on RHEL 7 Server - noarch, ppc64, x86_64\n\n3. Description:\n\nThis release adds the new Apache HTTP Server 2.4.37 Service Pack 7 packages\nthat are part of the JBoss Core Services offering. Refer to the Release Notes for information on the most\nsignificant bug fixes and enhancements included in this release. Solution:\n\nBefore applying this update, make sure all previously released errata\nrelevant to your system have been applied. \n\nFor details on how to apply this update, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n1941547 - CVE-2021-3450 openssl: CA certificate check bypass with X509_V_FLAG_X509_STRICT\n1941554 - CVE-2021-3449 openssl: NULL pointer dereference in signature_algorithms processing\n\n6. 
Package List:\n\nRed Hat JBoss Core Services on RHEL 7 Server:\n\nSource:\njbcs-httpd24-httpd-2.4.37-70.jbcs.el7.src.rpm\njbcs-httpd24-mod_cluster-native-1.3.14-20.Final_redhat_2.jbcs.el7.src.rpm\njbcs-httpd24-mod_http2-1.15.7-14.jbcs.el7.src.rpm\njbcs-httpd24-mod_jk-1.2.48-13.redhat_1.jbcs.el7.src.rpm\njbcs-httpd24-mod_md-2.0.8-33.jbcs.el7.src.rpm\njbcs-httpd24-mod_security-2.9.2-60.GA.jbcs.el7.src.rpm\njbcs-httpd24-nghttp2-1.39.2-37.jbcs.el7.src.rpm\njbcs-httpd24-openssl-1.1.1g-6.jbcs.el7.src.rpm\njbcs-httpd24-openssl-chil-1.0.0-5.jbcs.el7.src.rpm\njbcs-httpd24-openssl-pkcs11-0.4.10-20.jbcs.el7.src.rpm\n\nnoarch:\njbcs-httpd24-httpd-manual-2.4.37-70.jbcs.el7.noarch.rpm\n\nppc64:\njbcs-httpd24-mod_http2-1.15.7-14.jbcs.el7.ppc64.rpm\njbcs-httpd24-mod_http2-debuginfo-1.15.7-14.jbcs.el7.ppc64.rpm\njbcs-httpd24-mod_md-2.0.8-33.jbcs.el7.ppc64.rpm\njbcs-httpd24-mod_md-debuginfo-2.0.8-33.jbcs.el7.ppc64.rpm\njbcs-httpd24-openssl-chil-1.0.0-5.jbcs.el7.ppc64.rpm\njbcs-httpd24-openssl-chil-debuginfo-1.0.0-5.jbcs.el7.ppc64.rpm\njbcs-httpd24-openssl-pkcs11-0.4.10-20.jbcs.el7.ppc64.rpm\njbcs-httpd24-openssl-pkcs11-debuginfo-0.4.10-20.jbcs.el7.ppc64.rpm\n\nx86_64:\njbcs-httpd24-httpd-2.4.37-70.jbcs.el7.x86_64.rpm\njbcs-httpd24-httpd-debuginfo-2.4.37-70.jbcs.el7.x86_64.rpm\njbcs-httpd24-httpd-devel-2.4.37-70.jbcs.el7.x86_64.rpm\njbcs-httpd24-httpd-selinux-2.4.37-70.jbcs.el7.x86_64.rpm\njbcs-httpd24-httpd-tools-2.4.37-70.jbcs.el7.x86_64.rpm\njbcs-httpd24-mod_cluster-native-1.3.14-20.Final_redhat_2.jbcs.el7.x86_64.rpm\njbcs-httpd24-mod_cluster-native-debuginfo-1.3.14-20.Final_redhat_2.jbcs.el7.x86_64.rpm\njbcs-httpd24-mod_http2-1.15.7-14.jbcs.el7.x86_64.rpm\njbcs-httpd24-mod_http2-debuginfo-1.15.7-14.jbcs.el7.x86_64.rpm\njbcs-httpd24-mod_jk-ap24-1.2.48-13.redhat_1.jbcs.el7.x86_64.rpm\njbcs-httpd24-mod_jk-debuginfo-1.2.48-13.redhat_1.jbcs.el7.x86_64.rpm\njbcs-httpd24-mod_jk-manual-1.2.48-13.redhat_1.jbcs.el7.x86_64.rpm\njbcs-httpd24-mod_ldap-2.4.37-70.jbcs.el7.x86_64.rpm\njbcs-httpd2
4-mod_md-2.0.8-33.jbcs.el7.x86_64.rpm\njbcs-httpd24-mod_md-debuginfo-2.0.8-33.jbcs.el7.x86_64.rpm\njbcs-httpd24-mod_proxy_html-2.4.37-70.jbcs.el7.x86_64.rpm\njbcs-httpd24-mod_security-2.9.2-60.GA.jbcs.el7.x86_64.rpm\njbcs-httpd24-mod_security-debuginfo-2.9.2-60.GA.jbcs.el7.x86_64.rpm\njbcs-httpd24-mod_session-2.4.37-70.jbcs.el7.x86_64.rpm\njbcs-httpd24-mod_ssl-2.4.37-70.jbcs.el7.x86_64.rpm\njbcs-httpd24-nghttp2-1.39.2-37.jbcs.el7.x86_64.rpm\njbcs-httpd24-nghttp2-debuginfo-1.39.2-37.jbcs.el7.x86_64.rpm\njbcs-httpd24-nghttp2-devel-1.39.2-37.jbcs.el7.x86_64.rpm\njbcs-httpd24-openssl-1.1.1g-6.jbcs.el7.x86_64.rpm\njbcs-httpd24-openssl-chil-1.0.0-5.jbcs.el7.x86_64.rpm\njbcs-httpd24-openssl-chil-debuginfo-1.0.0-5.jbcs.el7.x86_64.rpm\njbcs-httpd24-openssl-debuginfo-1.1.1g-6.jbcs.el7.x86_64.rpm\njbcs-httpd24-openssl-devel-1.1.1g-6.jbcs.el7.x86_64.rpm\njbcs-httpd24-openssl-libs-1.1.1g-6.jbcs.el7.x86_64.rpm\njbcs-httpd24-openssl-perl-1.1.1g-6.jbcs.el7.x86_64.rpm\njbcs-httpd24-openssl-pkcs11-0.4.10-20.jbcs.el7.x86_64.rpm\njbcs-httpd24-openssl-pkcs11-debuginfo-0.4.10-20.jbcs.el7.x86_64.rpm\njbcs-httpd24-openssl-static-1.1.1g-6.jbcs.el7.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. Description:\n\nRed Hat JBoss Web Server is a fully integrated and certified set of\ncomponents for hosting Java web applications. It is comprised of the Apache\nHTTP Server, the Apache Tomcat Servlet container, Apache Tomcat Connector\n(mod_jk), JBoss HTTP Connector (mod_cluster), Hibernate, and the Tomcat\nNative library. 
\n\nSecurity Fix(es):\n\n* golang: crypto/tls: certificate of wrong type is causing TLS client to\npanic\n(CVE-2021-34558)\n* golang: net: lookup functions may return invalid host names\n(CVE-2021-33195)\n* golang: net/http/httputil: ReverseProxy forwards connection headers if\nfirst one is empty (CVE-2021-33197)\n* golang: math/big.Rat: may cause a panic or an unrecoverable fatal error\nif passed inputs with very large exponents (CVE-2021-33198)\n* golang: encoding/xml: infinite loop when using xml.NewTokenDecoder with a\ncustom TokenReader (CVE-2021-27918)\n* golang: net/http: panic in ReadRequest and ReadResponse when reading a\nvery large header (CVE-2021-31525)\n* golang: archive/zip: malformed archive may cause panic or memory\nexhaustion (CVE-2021-33196)\n\nIt was found that CVE-2021-27918, CVE-2021-31525 and CVE-2021-33196\nhave been incorrectly mentioned as fixed in the RHSA for Serverless client kn\n1.16.0. Bugs fixed (https://bugzilla.redhat.com/):\n\n1983596 - CVE-2021-34558 golang: crypto/tls: certificate of wrong type is causing TLS client to panic\n1983651 - Release of OpenShift Serverless Serving 1.17.0\n1983654 - Release of OpenShift Serverless Eventing 1.17.0\n1989564 - CVE-2021-33195 golang: net: lookup functions may return invalid host names\n1989570 - CVE-2021-33197 golang: net/http/httputil: ReverseProxy forwards connection headers if first one is empty\n1989575 - CVE-2021-33198 golang: math/big.Rat: may cause a panic or an unrecoverable fatal error if passed inputs with very large exponents\n1992955 - CVE-2021-3703 serverless: incomplete fix for CVE-2021-27918 / CVE-2021-31525 / CVE-2021-33196\n\n5", "sources": [ { "db": "NVD", "id": "CVE-2021-3450" }, { "db": "VULHUB", "id": "VHN-388430" }, { "db": "VULMON", "id": "CVE-2021-3450" }, { "db": "PACKETSTORM", "id": "162694" }, { "db": "PACKETSTORM", "id": "163747" }, { "db": "PACKETSTORM", "id": "162383" }, { "db": "PACKETSTORM", "id": "166279" }, { "db": "PACKETSTORM", "id": "162183" }, {
"db": "PACKETSTORM", "id": "162337" }, { "db": "PACKETSTORM", "id": "162196" }, { "db": "PACKETSTORM", "id": "162201" }, { "db": "PACKETSTORM", "id": "164192" } ], "trust": 1.89 }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "NVD", "id": "CVE-2021-3450", "trust": 2.1 }, { "db": "SIEMENS", "id": "SSA-389290", "trust": 1.2 }, { "db": "OPENWALL", "id": "OSS-SECURITY/2021/03/28/3", "trust": 1.2 }, { "db": "OPENWALL", "id": "OSS-SECURITY/2021/03/27/2", "trust": 1.2 }, { "db": "OPENWALL", "id": "OSS-SECURITY/2021/03/28/4", "trust": 1.2 }, { "db": "OPENWALL", "id": "OSS-SECURITY/2021/03/27/1", "trust": 1.2 }, { "db": "TENABLE", "id": "TNS-2021-05", "trust": 1.2 }, { "db": "TENABLE", "id": "TNS-2021-09", "trust": 1.2 }, { "db": "TENABLE", "id": "TNS-2021-08", "trust": 1.2 }, { "db": "PULSESECURE", "id": "SA44845", "trust": 1.2 }, { "db": "MCAFEE", "id": "SB10356", "trust": 1.2 }, { "db": "PACKETSTORM", "id": "162337", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "162196", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "162383", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "162201", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "162183", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "162151", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "162197", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "162189", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "163257", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "162172", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "162307", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "162200", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "162013", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "162041", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "162699", "trust": 0.1 }, { "db": "VULHUB", "id": "VHN-388430", "trust": 0.1 }, { "db": "ICS CERT", "id": 
"ICSA-22-069-09", "trust": 0.1 }, { "db": "VULMON", "id": "CVE-2021-3450", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "162694", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "163747", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "166279", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "164192", "trust": 0.1 } ], "sources": [ { "db": "VULHUB", "id": "VHN-388430" }, { "db": "VULMON", "id": "CVE-2021-3450" }, { "db": "PACKETSTORM", "id": "162694" }, { "db": "PACKETSTORM", "id": "163747" }, { "db": "PACKETSTORM", "id": "162383" }, { "db": "PACKETSTORM", "id": "166279" }, { "db": "PACKETSTORM", "id": "162183" }, { "db": "PACKETSTORM", "id": "162337" }, { "db": "PACKETSTORM", "id": "162196" }, { "db": "PACKETSTORM", "id": "162201" }, { "db": "PACKETSTORM", "id": "164192" }, { "db": "NVD", "id": "CVE-2021-3450" } ] }, "id": "VAR-202103-1463", "iot": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": true, "sources": [ { "db": "VULHUB", "id": "VHN-388430" } ], "trust": 0.38583214499999996 }, "last_update_date": "2024-07-23T21:05:39.679000Z", "patch": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/patch#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "title": "The Register", "trust": 0.2, "url": "https://www.theregister.co.uk/2021/03/25/openssl_bug_fix/" }, { "title": "Red Hat: CVE-2021-3450", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_cve_database\u0026qid=cve-2021-3450" }, { "title": "IBM: Security Bulletin: OpenSSL Vulnerabilities Affect IBM Sterling Connect:Express for UNIX (CVE-2021-3449, CVE-2021-3450)", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=084930e972e3fa390ca483e019684fa8" }, { "title": "Arch Linux Advisories: [ASA-202103-10] openssl: multiple issues", 
"trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_advisories\u0026qid=asa-202103-10" }, { "title": "Amazon Linux 2: ALAS2-2021-1622", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=alas2-2021-1622" }, { "title": "Arch Linux Issues: ", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_issues\u0026qid=cve-2021-3450 log" }, { "title": "Cisco: Multiple Vulnerabilities in OpenSSL Affecting Cisco Products: March 2021", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=cisco_security_advisories_and_alerts_ciscoproducts\u0026qid=cisco-sa-openssl-2021-ghy28djd" }, { "title": "Tenable Security Advisories: [R1] Nessus 8.13.2 Fixes Multiple Third-party Vulnerabilities", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=tenable_security_advisories\u0026qid=tns-2021-05" }, { "title": "Hitachi Security Advisories: Multiple Vulnerabilities in Hitachi Ops Center Common Services", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=hitachi_security_advisories\u0026qid=hitachi-sec-2021-117" }, { "title": "Tenable Security Advisories: [R1] Nessus Network Monitor 5.13.1 Fixes Multiple Third-party Vulnerabilities", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=tenable_security_advisories\u0026qid=tns-2021-09" }, { "title": "Hitachi Security Advisories: Multiple Vulnerabilities in Hitachi Ops Center Analyzer viewpoint", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=hitachi_security_advisories\u0026qid=hitachi-sec-2021-119" }, { "title": "IBM: Security Bulletin: Vulnerabilities in XStream, Java, OpenSSL, WebSphere Application Server Liberty and Node.js affect IBM Spectrum Control", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=928e1f86fc9400462623e646ce4f11d9" }, { "title": "Red Hat: Moderate: OpenShift Container Platform 4.10.3 security update", "trust": 0.1, "url": 
"https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20220056 - security advisory" }, { "title": "Siemens Security Advisories: Siemens Security Advisory", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=siemens_security_advisories\u0026qid=4a9822530e6b610875f83ffc10e02aba" }, { "title": "Siemens Security Advisories: Siemens Security Advisory", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=siemens_security_advisories\u0026qid=ec6577109e640dac19a6ddb978afe82d" }, { "title": "yr_of_the_jellyfish", "trust": 0.1, "url": "https://github.com/rnbochsr/yr_of_the_jellyfish " }, { "title": "", "trust": 0.1, "url": "https://github.com/tianocore-docs/thirdpartysecurityadvisories " }, { "title": "tekton-image-scan-trivy", "trust": 0.1, "url": "https://github.com/vinamra28/tekton-image-scan-trivy " }, { "title": "TASSL-1.1.1k", "trust": 0.1, "url": "https://github.com/jntass/tassl-1.1.1k " }, { "title": "", "trust": 0.1, "url": "https://github.com/scholarnishu/trivy-by-aquasecurity " }, { "title": "", "trust": 0.1, "url": "https://github.com/teresaweber685/book_list " }, { "title": "", "trust": 0.1, "url": "https://github.com/isgo-golgo13/gokit-gorillakit-enginesvc " }, { "title": "", "trust": 0.1, "url": "https://github.com/fredrkl/trivy-demo " }, { "title": "BleepingComputer", "trust": 0.1, "url": "https://www.bleepingcomputer.com/news/security/openssl-fixes-severe-dos-certificate-validation-vulnerabilities/" } ], "sources": [ { "db": "VULMON", "id": "CVE-2021-3450" } ] }, "problemtype_data": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "problemtype": "CWE-295", "trust": 1.1 } ], "sources": [ { "db": "VULHUB", "id": "VHN-388430" }, { "db": "NVD", "id": "CVE-2021-3450" } ] }, "references": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/references#", "data": { 
"@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 1.3, "url": "https://tools.cisco.com/security/center/content/ciscosecurityadvisory/cisco-sa-openssl-2021-ghy28djd" }, { "trust": 1.2, "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-389290.pdf" }, { "trust": 1.2, "url": "https://kb.pulsesecure.net/articles/pulse_security_advisories/sa44845" }, { "trust": 1.2, "url": "https://psirt.global.sonicwall.com/vuln-detail/snwlid-2021-0013" }, { "trust": 1.2, "url": "https://security.netapp.com/advisory/ntap-20210326-0006/" }, { "trust": 1.2, "url": "https://www.openssl.org/news/secadv/20210325.txt" }, { "trust": 1.2, "url": "https://www.tenable.com/security/tns-2021-05" }, { "trust": 1.2, "url": "https://www.tenable.com/security/tns-2021-08" }, { "trust": 1.2, "url": "https://www.tenable.com/security/tns-2021-09" }, { "trust": 1.2, "url": "https://security.gentoo.org/glsa/202103-03" }, { "trust": 1.2, "url": "https://mta.openssl.org/pipermail/openssl-announce/2021-march/000198.html" }, { "trust": 1.2, "url": "https://security.freebsd.org/advisories/freebsd-sa-21:07.openssl.asc" }, { "trust": 1.2, "url": "https://www.oracle.com//security-alerts/cpujul2021.html" }, { "trust": 1.2, "url": "https://www.oracle.com/security-alerts/cpuapr2021.html" }, { "trust": 1.2, "url": "https://www.oracle.com/security-alerts/cpuapr2022.html" }, { "trust": 1.2, "url": "https://www.oracle.com/security-alerts/cpujul2022.html" }, { "trust": 1.2, "url": "https://www.oracle.com/security-alerts/cpuoct2021.html" }, { "trust": 1.2, "url": "http://www.openwall.com/lists/oss-security/2021/03/27/1" }, { "trust": 1.2, "url": "http://www.openwall.com/lists/oss-security/2021/03/27/2" }, { "trust": 1.2, "url": "http://www.openwall.com/lists/oss-security/2021/03/28/3" }, { "trust": 1.2, "url": "http://www.openwall.com/lists/oss-security/2021/03/28/4" }, { "trust": 1.1, "url": 
"https://kc.mcafee.com/corporate/index?page=content\u0026id=sb10356" }, { "trust": 1.1, "url": "https://git.openssl.org/gitweb/?p=openssl.git%3ba=commitdiff%3bh=2a40b7bc7b94dd7de897a74571e7024f0cf0d63b" }, { "trust": 1.1, "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/ccbfllvqvilivgzmbjl3ixzgkwqisynp/" }, { "trust": 1.0, "url": "https://access.redhat.com/security/cve/cve-2021-3450" }, { "trust": 0.9, "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce" }, { "trust": 0.9, "url": "https://access.redhat.com/security/team/contact/" }, { "trust": 0.9, "url": "https://access.redhat.com/security/cve/cve-2021-3449" }, { "trust": 0.9, "url": "https://bugzilla.redhat.com/):" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2021-20305" }, { "trust": 0.5, "url": "https://access.redhat.com/security/updates/classification/#moderate" }, { "trust": 0.5, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3449" }, { "trust": 0.5, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3450" }, { "trust": 0.4, "url": "https://access.redhat.com/security/updates/classification/#important" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2019-20454" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19906" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13050" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2018-1000858" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2019-13050" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2019-14889" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2020-1730" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2019-19906" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2019-13627" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-20843" }, { "trust": 0.3, "url": 
"https://nvd.nist.gov/vuln/detail/cve-2018-1000858" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-15903" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2018-20843" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2019-15903" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13627" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-14889" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20454" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2020-15358" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2017-14502" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3520" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2020-13434" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3537" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3518" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2020-29362" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3516" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2017-14502" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2016-10228" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2019-9169" }, { "trust": 0.3, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_mana" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-25013" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2020-29361" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3517" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3541" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3326" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2019-25013" }, { 
"trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2020-8927" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2020-29363" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2016-10228" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2020-27618" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20305" }, { "trust": 0.2, "url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-1730" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-8286" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-28196" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15358" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-27618" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-8231" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13434" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-8285" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28196" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-20271" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-9169" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-29362" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-2708" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-2708" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-8284" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-29361" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-27363" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3347" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-28374" }, { 
"trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-27364" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-26708" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-27365" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-0466" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-27152" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-27363" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-27152" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3347" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-27365" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-0466" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-27364" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28374" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-26708" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-27218" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3121" }, { "trust": 0.2, "url": "https://access.redhat.com/security/team/key/" }, { "trust": 0.2, "url": "https://access.redhat.com/articles/11258" }, { "trust": 0.1, "url": "https://git.openssl.org/gitweb/?p=openssl.git;a=commitdiff;h=2a40b7bc7b94dd7de897a74571e7024f0cf0d63b" }, { "trust": 0.1, "url": "https://kc.mcafee.com/corporate/index?page=content\u0026amp;id=sb10356" }, { "trust": 0.1, "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/ccbfllvqvilivgzmbjl3ixzgkwqisynp/" }, { "trust": 0.1, "url": "https://cwe.mitre.org/data/definitions/295.html" }, { "trust": 0.1, "url": "https://nvd.nist.gov" }, { "trust": 0.1, "url": "https://www.cisa.gov/uscert/ics/advisories/icsa-22-069-09" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20916" }, { 
"trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19221" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20907" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-20907" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13631" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14422" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-7595" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13632" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-8492" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-16168" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9327" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-13630" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-20387" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/4.5/html/serverless_applications/index" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-5018" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20218" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3115" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-9327" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-16935" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-19221" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-6405" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-20388" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3114" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20388" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:2021" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2020-13631" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20387" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-8492" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-5018" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-19956" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-13632" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-14422" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13630" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-6405" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19956" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-16935" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-20218" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-7595" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-16168" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-20916" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-28469" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-28500" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20934" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-29418" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28852" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33034" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-28092" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28851" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33909" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-27219" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2021-29482" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23337" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-32399" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-27358" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23369" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21321" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23368" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-11668" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23362" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23364" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23343" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21309" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33502" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23841" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23383" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-28918" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-28851" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3560" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-28852" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23840" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33033" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-20934" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-25217" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28469" }, { "trust": 0.1, "url": 
"https://access.redhat.com/errata/rhsa-2021:3016" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3377" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28500" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21272" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-29477" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-27292" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23346" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-29478" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-11668" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23839" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33623" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21322" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23382" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33910" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23358" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-15586" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28362" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23358" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-16845" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-16845" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-28362" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15586" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:1448" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9925" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9802" }, { "trust": 0.1, "url": 
"https://nvd.nist.gov/vuln/detail/cve-2019-8771" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-30762" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33938" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8783" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9895" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8625" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-44716" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8812" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8812" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3899" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8819" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-43813" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3867" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8720" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9893" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33930" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8782" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8808" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3902" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-24407" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-25215" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3900" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-30761" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33928" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8743" }, { "trust": 0.1, 
"url": "https://access.redhat.com/security/cve/cve-2020-9805" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8820" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9807" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8769" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8710" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-37750" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8813" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9850" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8710" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-27781" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8811" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8769" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:0055" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-22947" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9803" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8764" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9862" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2014-3577" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2014-3577" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3749" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3885" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-15503" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20807" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-41190" }, { "trust": 0.1, "url": 
"https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-10018" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-25660" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8835" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8764" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3733" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8844" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3865" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3864" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21684" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-14391" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3862" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:0056" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8811" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3901" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-39226" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8823" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8808" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3895" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-44717" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-11793" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0532" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8720" }, { "trust": 0.1, 
"url": "https://access.redhat.com/security/cve/cve-2020-9894" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8816" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9843" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8771" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3897" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9806" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8814" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8743" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33929" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9915" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-36222" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8815" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8813" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8625" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8766" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8783" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-20807" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9952" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-22946" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-21673" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8766" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3868" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8846" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3894" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-25677" }, { 
"trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-30666" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8782" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3521" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:1196" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20218" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-20218" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3121" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:1369" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35149" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-35149" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-14040" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14040" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:1199" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:1202" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-8284" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-27918" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-33196" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-33195" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-27918" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-27218" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/4.8/html/serverless/index" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33196" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-33197" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-8231" }, { "trust": 0.1, "url": 
"https://access.redhat.com/documentation/en-us/openshift_container_platform/4.6/html/serverless/index" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33195" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-33198" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33198" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-31525" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-34558" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:3556" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3326" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33197" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20271" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-8927" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-29363" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-8285" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3421" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-31525" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-8286" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3703" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/4.7/html/serverless/index" } ], "sources": [ { "db": "VULHUB", "id": "VHN-388430" }, { "db": "VULMON", "id": "CVE-2021-3450" }, { "db": "PACKETSTORM", "id": "162694" }, { "db": "PACKETSTORM", "id": "163747" }, { "db": "PACKETSTORM", "id": "162383" }, { "db": "PACKETSTORM", "id": "166279" }, { "db": "PACKETSTORM", "id": "162183" }, { "db": "PACKETSTORM", "id": "162337" }, { "db": "PACKETSTORM", "id": "162196" }, { "db": "PACKETSTORM", "id": "162201" }, { "db": "PACKETSTORM", "id": "164192" }, { "db": "NVD", "id": "CVE-2021-3450" } ] }, 
"sources": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "VULHUB", "id": "VHN-388430" }, { "db": "VULMON", "id": "CVE-2021-3450" }, { "db": "PACKETSTORM", "id": "162694" }, { "db": "PACKETSTORM", "id": "163747" }, { "db": "PACKETSTORM", "id": "162383" }, { "db": "PACKETSTORM", "id": "166279" }, { "db": "PACKETSTORM", "id": "162183" }, { "db": "PACKETSTORM", "id": "162337" }, { "db": "PACKETSTORM", "id": "162196" }, { "db": "PACKETSTORM", "id": "162201" }, { "db": "PACKETSTORM", "id": "164192" }, { "db": "NVD", "id": "CVE-2021-3450" } ] }, "sources_release_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2021-03-25T00:00:00", "db": "VULHUB", "id": "VHN-388430" }, { "date": "2021-03-25T00:00:00", "db": "VULMON", "id": "CVE-2021-3450" }, { "date": "2021-05-19T14:19:18", "db": "PACKETSTORM", "id": "162694" }, { "date": "2021-08-06T14:02:37", "db": "PACKETSTORM", "id": "163747" }, { "date": "2021-04-29T14:37:49", "db": "PACKETSTORM", "id": "162383" }, { "date": "2022-03-11T16:38:38", "db": "PACKETSTORM", "id": "166279" }, { "date": "2021-04-14T16:40:32", "db": "PACKETSTORM", "id": "162183" }, { "date": "2021-04-26T19:21:56", "db": "PACKETSTORM", "id": "162337" }, { "date": "2021-04-15T13:49:54", "db": "PACKETSTORM", "id": "162196" }, { "date": "2021-04-15T13:50:39", "db": "PACKETSTORM", "id": "162201" }, { "date": "2021-09-17T16:04:56", "db": "PACKETSTORM", "id": "164192" }, { "date": "2021-03-25T15:15:13.560000", "db": "NVD", "id": "CVE-2021-3450" } ] }, "sources_update_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2023-02-28T00:00:00", "db": "VULHUB", "id": "VHN-388430" }, { "date": "2023-11-07T00:00:00", "db": "VULMON", "id": "CVE-2021-3450" }, { "date": "2023-11-07T03:38:00.923000", "db": "NVD", "id": 
"CVE-2021-3450" } ] }, "title": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/title#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Red Hat Security Advisory 2021-2021-01", "sources": [ { "db": "PACKETSTORM", "id": "162694" } ], "trust": 0.1 }, "type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "code execution", "sources": [ { "db": "PACKETSTORM", "id": "162694" }, { "db": "PACKETSTORM", "id": "162383" } ], "trust": 0.2 } }
var-202101-0567
Vulnerability from variot
There is a flaw in bfd_pef_scan_start_address() of bfd/pef.c in binutils that could allow an attacker who is able to submit a crafted file to be processed by objdump to cause a NULL pointer dereference. The greatest threat from this flaw is to application availability. This flaw affects binutils versions prior to 2.34. binutils contains a NULL pointer dereference vulnerability that may result in a denial-of-service (DoS) condition. GNU Binutils (GNU Binary Utilities, or binutils) is a set of programming-language tools developed by the GNU community; it primarily handles object files in various formats and provides linkers, assemblers, and other tools for working with object files and archives. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - Gentoo Linux Security Advisory GLSA 202107-24
https://security.gentoo.org/
Severity: Normal
Title: Binutils: Multiple vulnerabilities
Date: July 10, 2021
Bugs: #678806, #761957, #764170
ID: 202107-24
Synopsis
Multiple vulnerabilities have been found in Binutils, the worst of which could result in a Denial of Service condition.
Background
The GNU Binutils are a collection of tools to create, modify and analyse binary files. Many of the files use BFD, the Binary File Descriptor library, to do low-level manipulation.
Affected packages
-------------------------------------------------------------------
Package / Vulnerable / Unaffected
-------------------------------------------------------------------
1 sys-devel/binutils < 2.35.2 >= 2.35.2
Description
Multiple vulnerabilities have been discovered in Binutils. Please review the CVE identifiers referenced below for details.
Impact
Please review the referenced CVE identifiers for details.
Workaround
There is no known workaround at this time.
Resolution
All Binutils users should upgrade to the latest version:
# emerge --sync
# emerge --ask --oneshot --verbose ">=sys-devel/binutils-2.35.2"
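Outside of Portage, the advisory's cut-off can be checked mechanically: per the CVE, upstream binutils is affected when its version is below 2.34. The following is a minimal sketch of such a check; the helper name `binutils_is_vulnerable` is ours, not part of any advisory tooling:

```c
#include <stdio.h>

/* Hypothetical helper: parse a "major.minor[.patch]" string, e.g. the
 * number reported by `objdump --version`, and report whether it predates
 * the upstream fix in binutils 2.34. Returns 1 if vulnerable, 0 if not,
 * and -1 if the string cannot be parsed. */
int binutils_is_vulnerable(const char *version)
{
    int maj = 0, min = 0, patch = 0;
    if (sscanf(version, "%d.%d.%d", &maj, &min, &patch) < 2)
        return -1;          /* not a recognisable version string */
    if (maj != 2)
        return maj < 2;     /* 1.x is older than the fix, 3.x is newer */
    return min < 34;        /* fixed upstream in 2.34 */
}
```

Note that Gentoo ships patched ebuilds, so on Gentoo the relevant boundary is the advisory's 2.35.2, not the upstream 2.34 used above.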
References
[ 1 ] CVE-2019-9070 https://nvd.nist.gov/vuln/detail/CVE-2019-9070
[ 2 ] CVE-2019-9071 https://nvd.nist.gov/vuln/detail/CVE-2019-9071
[ 3 ] CVE-2019-9072 https://nvd.nist.gov/vuln/detail/CVE-2019-9072
[ 4 ] CVE-2019-9073 https://nvd.nist.gov/vuln/detail/CVE-2019-9073
[ 5 ] CVE-2019-9074 https://nvd.nist.gov/vuln/detail/CVE-2019-9074
[ 6 ] CVE-2019-9075 https://nvd.nist.gov/vuln/detail/CVE-2019-9075
[ 7 ] CVE-2019-9076 https://nvd.nist.gov/vuln/detail/CVE-2019-9076
[ 8 ] CVE-2019-9077 https://nvd.nist.gov/vuln/detail/CVE-2019-9077
[ 9 ] CVE-2020-19599 https://nvd.nist.gov/vuln/detail/CVE-2020-19599
[ 10 ] CVE-2020-35448 https://nvd.nist.gov/vuln/detail/CVE-2020-35448
[ 11 ] CVE-2020-35493 https://nvd.nist.gov/vuln/detail/CVE-2020-35493
[ 12 ] CVE-2020-35494 https://nvd.nist.gov/vuln/detail/CVE-2020-35494
[ 13 ] CVE-2020-35495 https://nvd.nist.gov/vuln/detail/CVE-2020-35495
[ 14 ] CVE-2020-35496 https://nvd.nist.gov/vuln/detail/CVE-2020-35496
[ 15 ] CVE-2020-35507 https://nvd.nist.gov/vuln/detail/CVE-2020-35507
Availability
This GLSA and any updates to it are available for viewing at the Gentoo Security Website:
https://security.gentoo.org/glsa/202107-24
Concerns?
Security is a primary focus of Gentoo Linux and ensuring the confidentiality and security of our users' machines is of utmost importance to us. Any security concerns should be addressed to security@gentoo.org or alternatively, you may file a bug at https://bugs.gentoo.org.
License
Copyright 2021 Gentoo Foundation, Inc; referenced text belongs to its owner(s).
The contents of this document are licensed under the Creative Commons - Attribution / Share Alike license.
https://creativecommons.org/licenses/by-sa/2.5
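The CVE-2020-35496 description above reduces to a common parser bug class: a scan over the file's section table can complete without finding a match, leaving a cursor NULL that is then dereferenced. The following is a simplified, hypothetical model of that pattern (it is not the actual bfd/pef.c code); the hardened variant adds the NULL check before the dereference:

```c
#include <stddef.h>
#include <string.h>

/* Toy stand-in for a PEF section table entry (names are illustrative). */
struct pef_section { const char *name; unsigned long vma; };

/* Modeled on the flaw class in bfd_pef_scan_start_address(): search the
 * section table for the loader section, then read its address. A crafted
 * file can simply omit that section, so the hardened version checks the
 * cursor for NULL and fails gracefully instead of crashing the tool. */
unsigned long scan_start_address(const struct pef_section *sects, size_t n)
{
    const struct pef_section *cursect = NULL;
    for (size_t i = 0; i < n; i++)
        if (strcmp(sects[i].name, "loader") == 0)
            cursect = &sects[i];

    if (cursect == NULL)   /* the missing check: without it, a crafted  */
        return 0;          /* file dereferences NULL and the tool dies, */
    return cursect->vma;   /* i.e. the denial of service the CVE notes  */
}
```

The impact matches the record's CVSS assessment: availability only, since the crash aborts objdump without disclosing or altering data.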
Show details on source website{ "@context": { "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#", "affected_products": { "@id": "https://www.variotdbs.pl/ref/affected_products" }, "configurations": { "@id": "https://www.variotdbs.pl/ref/configurations" }, "credits": { "@id": "https://www.variotdbs.pl/ref/credits" }, "cvss": { "@id": "https://www.variotdbs.pl/ref/cvss/" }, "description": { "@id": "https://www.variotdbs.pl/ref/description/" }, "exploit_availability": { "@id": "https://www.variotdbs.pl/ref/exploit_availability/" }, "external_ids": { "@id": "https://www.variotdbs.pl/ref/external_ids/" }, "iot": { "@id": "https://www.variotdbs.pl/ref/iot/" }, "iot_taxonomy": { "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/" }, "patch": { "@id": "https://www.variotdbs.pl/ref/patch/" }, "problemtype_data": { "@id": "https://www.variotdbs.pl/ref/problemtype_data/" }, "references": { "@id": "https://www.variotdbs.pl/ref/references/" }, "sources": { "@id": "https://www.variotdbs.pl/ref/sources/" }, "sources_release_date": { "@id": "https://www.variotdbs.pl/ref/sources_release_date/" }, "sources_update_date": { "@id": "https://www.variotdbs.pl/ref/sources_update_date/" }, "threat_type": { "@id": "https://www.variotdbs.pl/ref/threat_type/" }, "title": { "@id": "https://www.variotdbs.pl/ref/title/" }, "type": { "@id": "https://www.variotdbs.pl/ref/type/" } }, "@id": "https://www.variotdbs.pl/vuln/VAR-202101-0567", "affected_products": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/affected_products#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "model": "fedora", "scope": "eq", "trust": 1.0, "vendor": "fedoraproject", "version": "32" }, { "model": "ontap select deploy administration utility", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "solidfire \\\u0026 hci management 
node", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "brocade fabric operating system", "scope": "eq", "trust": 1.0, "vendor": "broadcom", "version": null }, { "model": "solidfire\\, enterprise sds \\\u0026 hci storage node", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "binutils", "scope": "lt", "trust": 1.0, "vendor": "gnu", "version": "2.34" }, { "model": "hci compute node", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "cloud backup", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "ontap select deploy utility", "scope": null, "trust": 0.8, "vendor": "netapp", "version": null }, { "model": "solidfire", "scope": null, "trust": 0.8, "vendor": "netapp", "version": null }, { "model": "hci management node", "scope": null, "trust": 0.8, "vendor": "netapp", "version": null }, { "model": "binutils", "scope": null, "trust": 0.8, "vendor": "gnu", "version": null }, { "model": "fedora", "scope": null, "trust": 0.8, "vendor": "fedora", "version": null }, { "model": "hci compute node", "scope": null, "trust": 0.8, "vendor": "netapp", "version": null } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2020-015126" }, { "db": "NVD", "id": "CVE-2020-35496" } ] }, "configurations": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/configurations#", "children": { "@container": "@list" }, "cpe_match": { "@container": "@list" }, "data": { "@container": "@list" }, "nodes": { "@container": "@list" } }, "data": [ { "CVE_data_version": "4.0", "nodes": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:gnu:binutils:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "2.34", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:32:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": 
"cpe:2.3:a:netapp:cloud_backup:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:ontap_select_deploy_administration_utility:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:solidfire_\\\u0026_hci_management_node:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:solidfire\\,_enterprise_sds_\\\u0026_hci_storage_node:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:broadcom:brocade_fabric_operating_system_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:hci_compute_node_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:hci_compute_node:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" } ] } ], "sources": [ { "db": "NVD", "id": "CVE-2020-35496" } ] }, "credits": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/credits#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Gentoo", "sources": [ { "db": "PACKETSTORM", "id": "163455" } ], "trust": 0.1 }, "cve": "CVE-2020-35496", "cvss": { "@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2" }, "cvssV3": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, "severity": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": "https://www.variotdbs.pl/ref/cvss/severity" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, 
"@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "cvssV2": [ { "acInsufInfo": false, "accessComplexity": "MEDIUM", "accessVector": "NETWORK", "authentication": "NONE", "author": "NVD", "availabilityImpact": "PARTIAL", "baseScore": 4.3, "confidentialityImpact": "NONE", "exploitabilityScore": 8.6, "impactScore": 2.9, "integrityImpact": "NONE", "obtainAllPrivilege": false, "obtainOtherPrivilege": false, "obtainUserPrivilege": false, "severity": "MEDIUM", "trust": 1.0, "userInteractionRequired": true, "vectorString": "AV:N/AC:M/Au:N/C:N/I:N/A:P", "version": "2.0" }, { "acInsufInfo": null, "accessComplexity": "Medium", "accessVector": "Network", "authentication": "None", "author": "NVD", "availabilityImpact": "Partial", "baseScore": 4.3, "confidentialityImpact": "None", "exploitabilityScore": null, "id": "CVE-2020-35496", "impactScore": null, "integrityImpact": "None", "obtainAllPrivilege": null, "obtainOtherPrivilege": null, "obtainUserPrivilege": null, "severity": "Medium", "trust": 0.9, "userInteractionRequired": null, "vectorString": "AV:N/AC:M/Au:N/C:N/I:N/A:P", "version": "2.0" }, { "accessComplexity": "MEDIUM", "accessVector": "NETWORK", "authentication": "NONE", "author": "VULHUB", "availabilityImpact": "PARTIAL", "baseScore": 4.3, "confidentialityImpact": "NONE", "exploitabilityScore": 8.6, "id": "VHN-377692", "impactScore": 2.9, "integrityImpact": "NONE", "severity": "MEDIUM", "trust": 0.1, "vectorString": "AV:N/AC:M/AU:N/C:N/I:N/A:P", "version": "2.0" } ], "cvssV3": [ { "attackComplexity": "LOW", "attackVector": "LOCAL", "author": "NVD", "availabilityImpact": "HIGH", "baseScore": 5.5, "baseSeverity": "MEDIUM", "confidentialityImpact": "NONE", "exploitabilityScore": 1.8, "impactScore": 3.6, "integrityImpact": "NONE", "privilegesRequired": "NONE", "scope": "UNCHANGED", "trust": 1.0, "userInteraction": "REQUIRED", "vectorString": "CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:U/C:N/I:N/A:H", "version": "3.1" }, { "attackComplexity": "Low", "attackVector": 
"Local", "author": "NVD", "availabilityImpact": "High", "baseScore": 5.5, "baseSeverity": "Medium", "confidentialityImpact": "None", "exploitabilityScore": null, "id": "CVE-2020-35496", "impactScore": null, "integrityImpact": "None", "privilegesRequired": "None", "scope": "Unchanged", "trust": 0.8, "userInteraction": "Required", "vectorString": "CVSS:3.0/AV:L/AC:L/PR:N/UI:R/S:U/C:N/I:N/A:H", "version": "3.0" } ], "severity": [ { "author": "NVD", "id": "CVE-2020-35496", "trust": 1.8, "value": "MEDIUM" }, { "author": "CNNVD", "id": "CNNVD-202101-054", "trust": 0.6, "value": "MEDIUM" }, { "author": "VULHUB", "id": "VHN-377692", "trust": 0.1, "value": "MEDIUM" }, { "author": "VULMON", "id": "CVE-2020-35496", "trust": 0.1, "value": "MEDIUM" } ] } ], "sources": [ { "db": "VULHUB", "id": "VHN-377692" }, { "db": "VULMON", "id": "CVE-2020-35496" }, { "db": "JVNDB", "id": "JVNDB-2020-015126" }, { "db": "NVD", "id": "CVE-2020-35496" }, { "db": "CNNVD", "id": "CNNVD-202101-054" } ] }, "description": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "There\u0027s a flaw in bfd_pef_scan_start_address() of bfd/pef.c in binutils which could allow an attacker who is able to submit a crafted file to be processed by objdump to cause a NULL pointer dereference. The greatest threat of this flaw is to application availability. This flaw affects binutils versions prior to 2.34. binutils Has NULL A pointer dereference vulnerability exists.Denial of service (DoS) It may be put into a state. GNU Binutils (GNU Binary Utilities or binutils) is a set of programming language tool programs developed by the GNU community. The program is primarily designed to handle object files in various formats and provides linkers, assemblers, and other tools for object files and archives. 
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\nGentoo Linux Security Advisory GLSA 202107-24\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n https://security.gentoo.org/\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\n Severity: Normal\n Title: Binutils: Multiple vulnerabilities\n Date: July 10, 2021\n Bugs: #678806, #761957, #764170\n ID: 202107-24\n\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\nSynopsis\n========\n\nMultiple vulnerabilities have been found in Binutils, the worst of\nwhich could result in a Denial of Service condition. \n\nBackground\n==========\n\nThe GNU Binutils are a collection of tools to create, modify and\nanalyse binary files. Many of the files use BFD, the Binary File\nDescriptor library, to do low-level manipulation. \n\nAffected packages\n=================\n\n -------------------------------------------------------------------\n Package / Vulnerable / Unaffected\n -------------------------------------------------------------------\n 1 sys-devel/binutils \u003c 2.35.2 \u003e= 2.35.2 \n\nDescription\n===========\n\nMultiple vulnerabilities have been discovered in Binutils. Please\nreview the CVE identifiers referenced below for details. \n\nImpact\n======\n\nPlease review the referenced CVE identifiers for details. \n\nWorkaround\n==========\n\nThere is no known workaround at this time. 
\n\nResolution\n==========\n\nAll Binutils users should upgrade to the latest version:\n\n # emerge --sync\n # emerge --ask --oneshot --verbose \"\u003e=sys-devel/binutils-2.35.2\"\n\nReferences\n==========\n\n[ 1 ] CVE-2019-9070\n https://nvd.nist.gov/vuln/detail/CVE-2019-9070\n[ 2 ] CVE-2019-9071\n https://nvd.nist.gov/vuln/detail/CVE-2019-9071\n[ 3 ] CVE-2019-9072\n https://nvd.nist.gov/vuln/detail/CVE-2019-9072\n[ 4 ] CVE-2019-9073\n https://nvd.nist.gov/vuln/detail/CVE-2019-9073\n[ 5 ] CVE-2019-9074\n https://nvd.nist.gov/vuln/detail/CVE-2019-9074\n[ 6 ] CVE-2019-9075\n https://nvd.nist.gov/vuln/detail/CVE-2019-9075\n[ 7 ] CVE-2019-9076\n https://nvd.nist.gov/vuln/detail/CVE-2019-9076\n[ 8 ] CVE-2019-9077\n https://nvd.nist.gov/vuln/detail/CVE-2019-9077\n[ 9 ] CVE-2020-19599\n https://nvd.nist.gov/vuln/detail/CVE-2020-19599\n[ 10 ] CVE-2020-35448\n https://nvd.nist.gov/vuln/detail/CVE-2020-35448\n[ 11 ] CVE-2020-35493\n https://nvd.nist.gov/vuln/detail/CVE-2020-35493\n[ 12 ] CVE-2020-35494\n https://nvd.nist.gov/vuln/detail/CVE-2020-35494\n[ 13 ] CVE-2020-35495\n https://nvd.nist.gov/vuln/detail/CVE-2020-35495\n[ 14 ] CVE-2020-35496\n https://nvd.nist.gov/vuln/detail/CVE-2020-35496\n[ 15 ] CVE-2020-35507\n https://nvd.nist.gov/vuln/detail/CVE-2020-35507\n\nAvailability\n============\n\nThis GLSA and any updates to it are available for viewing at\nthe Gentoo Security Website:\n\n https://security.gentoo.org/glsa/202107-24\n\nConcerns?\n=========\n\nSecurity is a primary focus of Gentoo Linux and ensuring the\nconfidentiality and security of our users\u0027 machines is of utmost\nimportance to us. Any security concerns should be addressed to\nsecurity@gentoo.org or alternatively, you may file a bug at\nhttps://bugs.gentoo.org. \n\nLicense\n=======\n\nCopyright 2021 Gentoo Foundation, Inc; referenced text\nbelongs to its owner(s). \n\nThe contents of this document are licensed under the\nCreative Commons - Attribution / Share Alike license. 
\n\nhttps://creativecommons.org/licenses/by-sa/2.5\n\n", "sources": [ { "db": "NVD", "id": "CVE-2020-35496" }, { "db": "JVNDB", "id": "JVNDB-2020-015126" }, { "db": "VULHUB", "id": "VHN-377692" }, { "db": "VULMON", "id": "CVE-2020-35496" }, { "db": "PACKETSTORM", "id": "163455" } ], "trust": 1.89 }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "NVD", "id": "CVE-2020-35496", "trust": 2.7 }, { "db": "PACKETSTORM", "id": "163455", "trust": 0.8 }, { "db": "JVNDB", "id": "JVNDB-2020-015126", "trust": 0.8 }, { "db": "AUSCERT", "id": "ESB-2021.3660", "trust": 0.6 }, { "db": "CNNVD", "id": "CNNVD-202101-054", "trust": 0.6 }, { "db": "VULHUB", "id": "VHN-377692", "trust": 0.1 }, { "db": "VULMON", "id": "CVE-2020-35496", "trust": 0.1 } ], "sources": [ { "db": "VULHUB", "id": "VHN-377692" }, { "db": "VULMON", "id": "CVE-2020-35496" }, { "db": "JVNDB", "id": "JVNDB-2020-015126" }, { "db": "PACKETSTORM", "id": "163455" }, { "db": "NVD", "id": "CVE-2020-35496" }, { "db": "CNNVD", "id": "CNNVD-202101-054" } ] }, "id": "VAR-202101-0567", "iot": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": true, "sources": [ { "db": "VULHUB", "id": "VHN-377692" } ], "trust": 0.01 }, "last_update_date": "2023-12-18T11:31:53.886000Z", "patch": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/patch#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "title": "Bug\u00a025308 NetAppNetApp\u00a0Advisory", "trust": 0.8, "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/4kok3qwsvoujwj54hvgifwnlwq5zy4s6/" }, { 
"title": "GNU Binutils Fixes for code issue vulnerabilities", "trust": 0.6, "url": "http://www.cnnvd.org.cn/web/xxk/bdxqbyid.tag?id=138318" }, { "title": "", "trust": 0.1, "url": "https://github.com/vincent-deng/veracode-container-security-finding-parser " } ], "sources": [ { "db": "VULMON", "id": "CVE-2020-35496" }, { "db": "JVNDB", "id": "JVNDB-2020-015126" }, { "db": "CNNVD", "id": "CNNVD-202101-054" } ] }, "problemtype_data": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "problemtype": "CWE-476", "trust": 1.1 }, { "problemtype": "NULL Pointer dereference (CWE-476) [ Other ]", "trust": 0.8 } ], "sources": [ { "db": "VULHUB", "id": "VHN-377692" }, { "db": "JVNDB", "id": "JVNDB-2020-015126" }, { "db": "NVD", "id": "CVE-2020-35496" } ] }, "references": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/references#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 2.6, "url": "https://bugzilla.redhat.com/show_bug.cgi?id=1911444" }, { "trust": 1.9, "url": "https://security.gentoo.org/glsa/202107-24" }, { "trust": 1.8, "url": "https://security.netapp.com/advisory/ntap-20210212-0007/" }, { "trust": 1.5, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35496" }, { "trust": 1.0, "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/4kok3qwsvoujwj54hvgifwnlwq5zy4s6/" }, { "trust": 0.8, "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/4kok3qwsvoujwj54hvgifwnlwq5zy4s6/" }, { "trust": 0.6, "url": "https://www.ibm.com/blogs/psirt/security-bulletin-multiple-vulnerabilities-in-gnu-binutils-affect-ibm-netezza-analytics/" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.3660" }, { 
"trust": 0.6, "url": "https://vigilance.fr/vulnerability/binutils-null-pointer-dereference-via-bfd-pef-scan-start-address-34255" }, { "trust": 0.6, "url": "https://www.ibm.com/blogs/psirt/security-bulletin-multiple-vulnerabilities-in-gnu-binutils-affect-ibm-netezza-analytics-for-nps/" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/163455/gentoo-linux-security-advisory-202107-24.html" }, { "trust": 0.6, "url": "https://www.ibm.com/blogs/psirt/security-bulletin-multiple-vulnerabilities-in-gnu-binutils-affect-ibm-netezza-performance-server/" }, { "trust": 0.1, "url": "https://cwe.mitre.org/data/definitions/476.html" }, { "trust": 0.1, "url": "https://github.com/live-hack-cve/cve-2020-35496" }, { "trust": 0.1, "url": "https://nvd.nist.gov" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35495" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-19599" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-9071" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-9077" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35493" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-9073" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-9072" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35448" }, { "trust": 0.1, "url": "https://security.gentoo.org/" }, { "trust": 0.1, "url": "https://creativecommons.org/licenses/by-sa/2.5" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-9074" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35507" }, { "trust": 0.1, "url": "https://bugs.gentoo.org." 
}, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-9070" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-9076" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-9075" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35494" } ], "sources": [ { "db": "VULHUB", "id": "VHN-377692" }, { "db": "VULMON", "id": "CVE-2020-35496" }, { "db": "JVNDB", "id": "JVNDB-2020-015126" }, { "db": "PACKETSTORM", "id": "163455" }, { "db": "NVD", "id": "CVE-2020-35496" }, { "db": "CNNVD", "id": "CNNVD-202101-054" } ] }, "sources": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "VULHUB", "id": "VHN-377692" }, { "db": "VULMON", "id": "CVE-2020-35496" }, { "db": "JVNDB", "id": "JVNDB-2020-015126" }, { "db": "PACKETSTORM", "id": "163455" }, { "db": "NVD", "id": "CVE-2020-35496" }, { "db": "CNNVD", "id": "CNNVD-202101-054" } ] }, "sources_release_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2021-01-04T00:00:00", "db": "VULHUB", "id": "VHN-377692" }, { "date": "2021-01-04T00:00:00", "db": "VULMON", "id": "CVE-2020-35496" }, { "date": "2021-09-10T00:00:00", "db": "JVNDB", "id": "JVNDB-2020-015126" }, { "date": "2021-07-11T12:01:11", "db": "PACKETSTORM", "id": "163455" }, { "date": "2021-01-04T15:15:14.323000", "db": "NVD", "id": "CVE-2020-35496" }, { "date": "2021-01-04T00:00:00", "db": "CNNVD", "id": "CNNVD-202101-054" } ] }, "sources_update_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2022-09-02T00:00:00", "db": "VULHUB", "id": "VHN-377692" }, { "date": "2022-09-02T00:00:00", "db": "VULMON", "id": "CVE-2020-35496" }, { "date": "2021-09-10T07:10:00", "db": "JVNDB", "id": "JVNDB-2020-015126" }, { "date": "2023-11-07T03:21:55.700000", "db": "NVD", 
"id": "CVE-2020-35496" }, { "date": "2022-09-05T00:00:00", "db": "CNNVD", "id": "CNNVD-202101-054" } ] }, "threat_type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/threat_type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "local", "sources": [ { "db": "CNNVD", "id": "CNNVD-202101-054" } ], "trust": 0.6 }, "title": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/title#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "binutils\u00a0 In \u00a0NULL\u00a0 Pointer dereference vulnerability", "sources": [ { "db": "JVNDB", "id": "JVNDB-2020-015126" } ], "trust": 0.8 }, "type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "code problem", "sources": [ { "db": "CNNVD", "id": "CNNVD-202101-054" } ], "trust": 0.6 } }
var-202106-1921
Vulnerability from variot
A security issue was identified in the nginx resolver which might allow an attacker who is able to forge UDP packets from the DNS server to cause a 1-byte memory overwrite, resulting in a worker process crash or potentially other impact. The nginx resolver contains a boundary condition (off-by-one) vulnerability. Information may be obtained, information may be tampered with, and a denial-of-service (DoS) condition may result. Nginx is a lightweight web server, reverse proxy server, and email (IMAP/POP3) proxy server from Nginx, Inc. in the United States. Affected products and versions are as follows: nginx: 0.6.18, 0.6.19, 0.6.20, 0.6.21, 0.6.22, 0.6.23, 0.6.24, 0.6.25, 0.6.26, 0.6.27, 0.6. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - Gentoo Linux Security Advisory GLSA 202105-38
https://security.gentoo.org/
Severity: High Title: nginx: Remote code execution Date: May 26, 2021 Bugs: #792087 ID: 202105-38
Synopsis
A vulnerability in nginx could lead to remote code execution.
Affected packages
-------------------------------------------------------------------
Package / Vulnerable / Unaffected
-------------------------------------------------------------------
1 www-servers/nginx < 1.21.0 >= 1.20.1:0 >= 1.21.0:mainline
Description
It was discovered that nginx did not properly handle DNS responses when the "resolver" directive is used.
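The "resolver" precondition matters: nginx parses upstream DNS responses itself only when a resolver is configured, for example so that variable proxy_pass targets are re-resolved at run time. A minimal sketch of such a configuration (the address and hostname below are placeholders, not values from the advisories):

```nginx
http {
    # nginx performs its own DNS lookups only when "resolver" is set
    resolver 192.0.2.53 valid=30s;

    server {
        listen 80;
        location / {
            # using a variable forces run-time resolution via "resolver"
            set $backend "app.internal.example";
            proxy_pass http://$backend;
        }
    }
}
```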
Workaround
There is no known workaround at this time.
Resolution
All nginx users should upgrade to the latest version:
# emerge --sync # emerge --ask --oneshot --verbose ">=www-servers/nginx-1.20.1"
All nginx mainline users should upgrade to the latest version:
# emerge --sync # emerge --ask --oneshot -v ">=www-servers/nginx-1.21.0:mainline"
References
[ 1 ] CVE-2021-23017 https://nvd.nist.gov/vuln/detail/CVE-2021-23017
Availability
This GLSA and any updates to it are available for viewing at the Gentoo Security Website:
https://security.gentoo.org/glsa/202105-38
Concerns?
Security is a primary focus of Gentoo Linux and ensuring the confidentiality and security of our users' machines is of utmost importance to us. Any security concerns should be addressed to security@gentoo.org or alternatively, you may file a bug at https://bugs.gentoo.org.
License
Copyright 2021 Gentoo Foundation, Inc; referenced text belongs to its owner(s).
The contents of this document are licensed under the Creative Commons - Attribution / Share Alike license.
https://creativecommons.org/licenses/by-sa/2.5 . -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
===================================================================== Red Hat Security Advisory
Synopsis: Important: rh-nginx118-nginx security update Advisory ID: RHSA-2021:2258-01 Product: Red Hat Software Collections Advisory URL: https://access.redhat.com/errata/RHSA-2021:2258 Issue date: 2021-06-07 CVE Names: CVE-2021-23017 =====================================================================
- Summary:
An update for rh-nginx118-nginx is now available for Red Hat Software Collections.
Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Relevant releases/architectures:
Red Hat Software Collections for Red Hat Enterprise Linux Server (v. 7) - ppc64le, s390x, x86_64 Red Hat Software Collections for Red Hat Enterprise Linux Server EUS (v. 7.7) - ppc64le, s390x, x86_64 Red Hat Software Collections for Red Hat Enterprise Linux Workstation (v. 7) - x86_64
- Description:
nginx is a web and proxy server supporting HTTP and other protocols, with a focus on high concurrency, performance, and low memory usage.
Security Fix(es):
- nginx: Off-by-one in ngx_resolver_copy() when labels are followed by a pointer to a root domain name (CVE-2021-23017)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
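The off-by-one described above is in nginx's copying of compressed DNS names. As a rough illustration under stated assumptions (a simplified Python sketch, not nginx's actual C implementation in ngx_resolver_copy()), the decoder below shows how DNS name compression pointers work and the specific shape, a label sequence terminated by a pointer to the root label, associated with the bug.

```python
# Hypothetical sketch of DNS name decompression (RFC 1035 section 4.1.4);
# this is NOT nginx's code, only an illustration of the data shape
# involved in CVE-2021-23017.
def decode_name(msg: bytes, off: int):
    """Decode a possibly-compressed DNS name starting at msg[off].

    Returns (dotted_name, offset_just_past_the_name_in_the_original_stream).
    """
    labels = []
    jumped = False
    end = off
    seen = set()                       # guard against pointer loops
    while True:
        if off in seen:
            raise ValueError("compression pointer loop")
        seen.add(off)
        length = msg[off]
        if length & 0xC0 == 0xC0:      # top two bits set: 2-byte pointer
            ptr = ((length & 0x3F) << 8) | msg[off + 1]
            if not jumped:
                end = off + 2          # name consumes only up to the pointer
            jumped = True
            off = ptr                  # continue decoding at the target
            continue
        if length == 0:                # root label terminates the name
            if not jumped:
                end = off + 1
            break
        off += 1
        labels.append(msg[off:off + length].decode("ascii"))
        off += length
    return ".".join(labels), end

# The CVE-triggering shape: labels followed by a pointer that lands directly
# on the root label (b"\x00"). nginx's buffer-size calculation for copying
# exactly this shape reportedly came out one byte short, allowing a 1-byte
# memory overwrite.
print(decode_name(b"\x03foo\xc0\x06\x00", 0))   # ('foo', 6)
```

A fully compressed name such as "www" plus a pointer to "example.com" decodes the same way; the decoder follows the pointer but reports the name's length in the original stream as ending at the pointer.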
- Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/11258
The rh-nginx118-nginx service must be restarted for this update to take effect.
- Bugs fixed (https://bugzilla.redhat.com/):
1963121 - CVE-2021-23017 nginx: Off-by-one in ngx_resolver_copy() when labels are followed by a pointer to a root domain name
- Package List:
Red Hat Software Collections for Red Hat Enterprise Linux Server (v. 7):
Source: rh-nginx118-nginx-1.18.0-3.el7.src.rpm
ppc64le: rh-nginx118-nginx-1.18.0-3.el7.ppc64le.rpm rh-nginx118-nginx-debuginfo-1.18.0-3.el7.ppc64le.rpm rh-nginx118-nginx-mod-http-image-filter-1.18.0-3.el7.ppc64le.rpm rh-nginx118-nginx-mod-http-perl-1.18.0-3.el7.ppc64le.rpm rh-nginx118-nginx-mod-http-xslt-filter-1.18.0-3.el7.ppc64le.rpm rh-nginx118-nginx-mod-mail-1.18.0-3.el7.ppc64le.rpm rh-nginx118-nginx-mod-stream-1.18.0-3.el7.ppc64le.rpm
s390x: rh-nginx118-nginx-1.18.0-3.el7.s390x.rpm rh-nginx118-nginx-debuginfo-1.18.0-3.el7.s390x.rpm rh-nginx118-nginx-mod-http-image-filter-1.18.0-3.el7.s390x.rpm rh-nginx118-nginx-mod-http-perl-1.18.0-3.el7.s390x.rpm rh-nginx118-nginx-mod-http-xslt-filter-1.18.0-3.el7.s390x.rpm rh-nginx118-nginx-mod-mail-1.18.0-3.el7.s390x.rpm rh-nginx118-nginx-mod-stream-1.18.0-3.el7.s390x.rpm
x86_64: rh-nginx118-nginx-1.18.0-3.el7.x86_64.rpm rh-nginx118-nginx-debuginfo-1.18.0-3.el7.x86_64.rpm rh-nginx118-nginx-mod-http-image-filter-1.18.0-3.el7.x86_64.rpm rh-nginx118-nginx-mod-http-perl-1.18.0-3.el7.x86_64.rpm rh-nginx118-nginx-mod-http-xslt-filter-1.18.0-3.el7.x86_64.rpm rh-nginx118-nginx-mod-mail-1.18.0-3.el7.x86_64.rpm rh-nginx118-nginx-mod-stream-1.18.0-3.el7.x86_64.rpm
Red Hat Software Collections for Red Hat Enterprise Linux Server EUS (v. 7.7):
Source: rh-nginx118-nginx-1.18.0-3.el7.src.rpm
ppc64le: rh-nginx118-nginx-1.18.0-3.el7.ppc64le.rpm rh-nginx118-nginx-debuginfo-1.18.0-3.el7.ppc64le.rpm rh-nginx118-nginx-mod-http-image-filter-1.18.0-3.el7.ppc64le.rpm rh-nginx118-nginx-mod-http-perl-1.18.0-3.el7.ppc64le.rpm rh-nginx118-nginx-mod-http-xslt-filter-1.18.0-3.el7.ppc64le.rpm rh-nginx118-nginx-mod-mail-1.18.0-3.el7.ppc64le.rpm rh-nginx118-nginx-mod-stream-1.18.0-3.el7.ppc64le.rpm
s390x: rh-nginx118-nginx-1.18.0-3.el7.s390x.rpm rh-nginx118-nginx-debuginfo-1.18.0-3.el7.s390x.rpm rh-nginx118-nginx-mod-http-image-filter-1.18.0-3.el7.s390x.rpm rh-nginx118-nginx-mod-http-perl-1.18.0-3.el7.s390x.rpm rh-nginx118-nginx-mod-http-xslt-filter-1.18.0-3.el7.s390x.rpm rh-nginx118-nginx-mod-mail-1.18.0-3.el7.s390x.rpm rh-nginx118-nginx-mod-stream-1.18.0-3.el7.s390x.rpm
x86_64: rh-nginx118-nginx-1.18.0-3.el7.x86_64.rpm rh-nginx118-nginx-debuginfo-1.18.0-3.el7.x86_64.rpm rh-nginx118-nginx-mod-http-image-filter-1.18.0-3.el7.x86_64.rpm rh-nginx118-nginx-mod-http-perl-1.18.0-3.el7.x86_64.rpm rh-nginx118-nginx-mod-http-xslt-filter-1.18.0-3.el7.x86_64.rpm rh-nginx118-nginx-mod-mail-1.18.0-3.el7.x86_64.rpm rh-nginx118-nginx-mod-stream-1.18.0-3.el7.x86_64.rpm
Red Hat Software Collections for Red Hat Enterprise Linux Workstation (v. 7):
Source: rh-nginx118-nginx-1.18.0-3.el7.src.rpm
x86_64: rh-nginx118-nginx-1.18.0-3.el7.x86_64.rpm rh-nginx118-nginx-debuginfo-1.18.0-3.el7.x86_64.rpm rh-nginx118-nginx-mod-http-image-filter-1.18.0-3.el7.x86_64.rpm rh-nginx118-nginx-mod-http-perl-1.18.0-3.el7.x86_64.rpm rh-nginx118-nginx-mod-http-xslt-filter-1.18.0-3.el7.x86_64.rpm rh-nginx118-nginx-mod-mail-1.18.0-3.el7.x86_64.rpm rh-nginx118-nginx-mod-stream-1.18.0-3.el7.x86_64.rpm
These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
- References:
https://access.redhat.com/security/cve/CVE-2021-23017 https://access.redhat.com/security/updates/classification/#important
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2021 Red Hat, Inc. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1
iQIVAwUBYL3MN9zjgjWX9erEAQjMKA//YaSwGZ/DmvwILuYqYNbIGKvcatycisD6 RrS+A7J9QqTEKqC8mZQ/OvfS5TukanQ/jzTNfRuGuO7booPRlhqVxZVLrSgQNaVD 1FV/cQqXhS/FwmrM8wnWdLpsFUXRXsTqiOoUnymzZbSh1VDjB8VZZLjWc7Wnueqy clLQnYtwMT5axzXRJl/JiXs+yJBmzv5igSFMoGXEKDx6DTrWGtZENE1rpumPAjb6 Y3aDzDZYu4Bl9V1FCUOtksWnmP0Xl/kvSL31aUkyYbyi9i0DpQswmdBH4Bl5ulw2 skkKH69ixA1wu+2D128toUy2ZR/MjX88sH3bCahhY1G4ajp0Vl3/p/kM7VVR5uRi KTVNK8FueNIvp8fMp8oYKhZW9It5DzlMa0Q1QcFfsutgf+932up8qJ9o0mQ9AbVK fBYb8F0hYMDI8udy+npgUM0WwwiBQAqzcHmbnYIRt6IK5f/dUOqucugiJFsbyTl2 pIcJty1208RbrDE/ctTcKuyVbHH9pPOHql5rFlJLAh7yYdHWh6J1QhmdA1RNm51h MEgO5OOVUjrV2mye1c8o7EkTzvuhu2RWQ7WyQc6C81ZlcUcjfNnq73vJ9HBNtNT5 hsiDG/UdvY/thIQmqzSFI3z8ALFKPRUcJ91v/fZNRpBTxcsluN91X7XrHIQDNOs9 jVrMgzAG88I= =av6T -----END PGP SIGNATURE-----
-- RHSA-announce mailing list RHSA-announce@redhat.com https://listman.redhat.com/mailman/listinfo/rhsa-announce . 8.2) - aarch64, noarch, ppc64le, s390x, x86_64
- Summary:
Red Hat Advanced Cluster Management for Kubernetes 2.3.3 General Availability release images, which fix bugs, provide security fixes, and update container images.

- Description:
Red Hat Advanced Cluster Management for Kubernetes 2.3.3 images
Red Hat Advanced Cluster Management for Kubernetes provides the capabilities to address common challenges that administrators and site reliability engineers face as they work across a range of public and private cloud environments. Clusters and applications are all visible and managed from a single console—with security policy built in. See the following Release Notes documentation, which will be updated shortly for this release, for additional details about this release:
https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/
Note: Because Red Hat OpenShift Container Platform version 4.9 was just released, the functional testing of the compatibility between Red Hat Advanced Cluster Management 2.3.3 and Red Hat OpenShift Container Platform version 4.9 is still in progress.
Security fixes:
-
nginx: Off-by-one in ngx_resolver_copy() when labels are followed by a pointer to a root domain name (CVE-2021-23017)
-
redis: Lua scripts can overflow the heap-based Lua stack (CVE-2021-32626)
-
redis: Integer overflow issue with Streams (CVE-2021-32627)
-
redis: Integer overflow bug in the ziplist data structure (CVE-2021-32628)
-
redis: Integer overflow issue with intsets (CVE-2021-32687)
-
redis: Integer overflow issue with strings (CVE-2021-41099)
-
redis: Out of bounds read in lua debugger protocol parser (CVE-2021-32672)
-
redis: Denial of service via Redis Standard Protocol (RESP) request (CVE-2021-32675)
-
helm: information disclosure vulnerability (CVE-2021-32690)
Bug fixes:
-
KUBE-API: Support move agent to different cluster in the same namespace (BZ# 1977358)
-
Add columns to the Agent CRD list (BZ# 1977398)
-
ClusterDeployment controller watches all Secrets from all namespaces (BZ# 1986081)
-
RHACM 2.3.3 images (BZ# 1999365)
-
Workaround for Network Manager not supporting nmconnections priority (BZ# 2001294)
-
create cluster page empty in Safary Browser (BZ# 2002280)
-
Compliance state doesn't get updated after fixing the issue causing initially the policy not being able to update the managed object (BZ# 2002667)
-
Overview page displays VMware based managed cluster as other (BZ# 2004188)
-
Solution:
Before applying this update, make sure all previously released errata relevant to your system have been applied. Bugs fixed (https://bugzilla.redhat.com/):
1963121 - CVE-2021-23017 nginx: Off-by-one in ngx_resolver_copy() when labels are followed by a pointer to a root domain name 1977358 - [4.8.0] KUBE-API: Support move agent to different cluster in the same namespace 1977398 - [4.8.0] [master] Add columns to the Agent CRD list 1978144 - CVE-2021-32690 helm: information disclosure vulnerability 1986081 - [4.8.0] ClusterDeployment controller watches all Secrets from all namespaces 1999365 - RHACM 2.3.3 images 2001294 - [4.8.0] Workaround for Network Manager not supporting nmconnections priority 2002280 - create cluster page empty in Safary Browser 2002667 - Compliance state doesn't get updated after fixing the issue causing initially the policy not being able to update the managed object 2004188 - Overview page displays VMware based managed cluster as other 2010991 - CVE-2021-32687 redis: Integer overflow issue with intsets 2011000 - CVE-2021-32675 redis: Denial of service via Redis Standard Protocol (RESP) request 2011001 - CVE-2021-32672 redis: Out of bounds read in lua debugger protocol parser 2011004 - CVE-2021-32628 redis: Integer overflow bug in the ziplist data structure 2011010 - CVE-2021-32627 redis: Integer overflow issue with Streams 2011017 - CVE-2021-32626 redis: Lua scripts can overflow the heap-based Lua stack 2011020 - CVE-2021-41099 redis: Integer overflow issue with strings
- -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512
Debian Security Advisory DSA-4921-1 security@debian.org https://www.debian.org/security/ Moritz Muehlenhoff May 28, 2021 https://www.debian.org/security/faq
Package : nginx CVE ID : CVE-2021-23017 Debian Bug : 989095
Luis Merino, Markus Vervier and Eric Sesterhenn discovered an off-by-one in Nginx, a high-performance web and reverse proxy server, which could result in denial of service and potentially the execution of arbitrary code.
For the stable distribution (buster), this problem has been fixed in version 1.14.2-2+deb10u4.
We recommend that you upgrade your nginx packages.
For the detailed security status of nginx please refer to its security tracker page at: https://security-tracker.debian.org/tracker/nginx
Further information about Debian Security Advisories, how to apply these updates to your system and frequently asked questions can be found at: https://www.debian.org/security/
Mailing list: debian-security-announce@lists.debian.org -----BEGIN PGP SIGNATURE-----
iQIzBAEBCgAdFiEEtuYvPRKsOElcDakFEMKTtsN8TjYFAmCw3CMACgkQEMKTtsN8 TjYgGA/9FlgRs/kkpLxlnM5ymYDA+WAmc44BiKLajlItjdw54nifSb7WJQifSjND wWz6/1Qc2R84mgovtdReIcgEQDDmm8iCpslsWt4r/iWT5m/tlZhkLhBN1AyhW8VS u1Goqt+hFkz0fZMzv1vf9MwRkUma8SjxNcQdjs4fHzyZAfo+QoV4Ir0I7DIMKkZk N5teHqHIMaDasRZFQSpL8NuZC+JN5EEpB764mV+O/YqVrWeE9QUAnL0FgjcQUnmh iQ5AmMJRtAnQXXu9Qkpx9WtDemHLFHC9JsWEKE3TJAegA4ZhfOo5MZcjesn6EoqV 8rXAAupWzO5/wTxMeulqz4HTLeYPs+jTSONHwT1oG9kgY59jVcNVjg2DcGbG3/17 ueZdGTy70pgLSL6IKILNBgqHh0AqSyyuZmocy07DNGay+HzwuFSBq4RCCved+EPW 4CMtIPSujjPzQqvg15gFNKt/7T2ZfKFR7zVfm0itI6KTjyAhmFhaNYNwWEifX68u 8akhscDlUxmDQG1kbQ2u/IZqWeKG/TpbqaaTrTl6U+Gl1hmRO06Y4AckW1Xwm2r4 CFSO9uHeNte5Vsw+4NlDntzRZOOfJ6qW8x0XF5Vgn7R9mfYPlvIWJgptsgrrijnf lhCPw5JMpzQ4afWlRUvQiaf0lOIySKIfv05wHPtIablmgjIGny4= =qxQw -----END PGP SIGNATURE-----
Show details on source website{ "@context": { "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#", "affected_products": { "@id": "https://www.variotdbs.pl/ref/affected_products" }, "configurations": { "@id": "https://www.variotdbs.pl/ref/configurations" }, "credits": { "@id": "https://www.variotdbs.pl/ref/credits" }, "cvss": { "@id": "https://www.variotdbs.pl/ref/cvss/" }, "description": { "@id": "https://www.variotdbs.pl/ref/description/" }, "exploit_availability": { "@id": "https://www.variotdbs.pl/ref/exploit_availability/" }, "external_ids": { "@id": "https://www.variotdbs.pl/ref/external_ids/" }, "iot": { "@id": "https://www.variotdbs.pl/ref/iot/" }, "iot_taxonomy": { "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/" }, "patch": { "@id": "https://www.variotdbs.pl/ref/patch/" }, "problemtype_data": { "@id": "https://www.variotdbs.pl/ref/problemtype_data/" }, "references": { "@id": "https://www.variotdbs.pl/ref/references/" }, "sources": { "@id": "https://www.variotdbs.pl/ref/sources/" }, "sources_release_date": { "@id": "https://www.variotdbs.pl/ref/sources_release_date/" }, "sources_update_date": { "@id": "https://www.variotdbs.pl/ref/sources_update_date/" }, "threat_type": { "@id": "https://www.variotdbs.pl/ref/threat_type/" }, "title": { "@id": "https://www.variotdbs.pl/ref/title/" }, "type": { "@id": "https://www.variotdbs.pl/ref/type/" } }, "@id": "https://www.variotdbs.pl/vuln/VAR-202106-1921", "affected_products": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/affected_products#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "model": "enterprise telephony fraud monitor", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "4.4" }, { "model": "openresty", "scope": "lt", "trust": 1.0, "vendor": "openresty", "version": "1.19.3.2" }, { "model": "goldengate", "scope": "lt", "trust": 
1.0, "vendor": "oracle", "version": "21.4.0.0.0" }, { "model": "communications control plane monitor", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "4.3" }, { "model": "communications control plane monitor", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "4.4" }, { "model": "fedora", "scope": "eq", "trust": 1.0, "vendor": "fedoraproject", "version": "34" }, { "model": "communications fraud monitor", "scope": "lte", "trust": 1.0, "vendor": "oracle", "version": "4.4" }, { "model": "communications fraud monitor", "scope": "gte", "trust": 1.0, "vendor": "oracle", "version": "3.4" }, { "model": "communications operations monitor", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "4.3" }, { "model": "communications operations monitor", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "4.4" }, { "model": "enterprise telephony fraud monitor", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "3.4" }, { "model": "nginx", "scope": "gte", "trust": 1.0, "vendor": "f5", "version": "0.6.18" }, { "model": "enterprise telephony fraud monitor", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "4.2" }, { "model": "ontap select deploy administration utility", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "communications session border controller", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "9.0" }, { "model": "communications control plane monitor", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "3.4" }, { "model": "nginx", "scope": "lt", "trust": 1.0, "vendor": "f5", "version": "1.20.1" }, { "model": "communications control plane monitor", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "4.2" }, { "model": "communications operations monitor", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "3.4" }, { "model": "communications operations monitor", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "4.2" }, { "model": 
"communications session border controller", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "8.4" }, { "model": "enterprise communications broker", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "3.3.0" }, { "model": "enterprise session border controller", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "9.0" }, { "model": "blockchain platform", "scope": "lt", "trust": 1.0, "vendor": "oracle", "version": "21.1.2" }, { "model": "enterprise session border controller", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "8.4" }, { "model": "fedora", "scope": "eq", "trust": 1.0, "vendor": "fedoraproject", "version": "33" }, { "model": "enterprise telephony fraud monitor", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "4.3" }, { "model": "oracle communications operations monitor", "scope": null, "trust": 0.8, "vendor": "\u30aa\u30e9\u30af\u30eb", "version": null }, { "model": "fedora", "scope": null, "trust": 0.8, "vendor": "fedora", "version": null }, { "model": "oracle enterprise telephony fraud monitor", "scope": null, "trust": 0.8, "vendor": "\u30aa\u30e9\u30af\u30eb", "version": null }, { "model": "oracle communications control plane monitor", "scope": null, "trust": 0.8, "vendor": "\u30aa\u30e9\u30af\u30eb", "version": null }, { "model": "nginx", "scope": null, "trust": 0.8, "vendor": "f5", "version": null }, { "model": "ontap select deploy administration utility", "scope": null, "trust": 0.8, "vendor": "netapp", "version": null }, { "model": "openresty", "scope": null, "trust": 0.8, "vendor": "openresty", "version": null }, { "model": "oracle communications fraud monitor", "scope": null, "trust": 0.8, "vendor": "\u30aa\u30e9\u30af\u30eb", "version": null } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2021-007625" }, { "db": "NVD", "id": "CVE-2021-23017" } ] }, "configurations": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/configurations#", "children": { "@container": "@list" }, "cpe_match": { 
"@container": "@list" }, "data": { "@container": "@list" }, "nodes": { "@container": "@list" } }, "data": [ { "CVE_data_version": "4.0", "nodes": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:f5:nginx:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "1.20.1", "versionStartIncluding": "0.6.18", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:openresty:openresty:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "1.19.3.2", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:33:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:34:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:netapp:ontap_select_deploy_administration_utility:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:oracle:communications_operations_monitor:3.4:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:enterprise_session_border_controller:8.4:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:communications_operations_monitor:4.2:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:communications_operations_monitor:4.3:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:communications_session_border_controller:8.4:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:enterprise_session_border_controller:9.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:communications_session_border_controller:9.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": 
"cpe:2.3:a:oracle:enterprise_communications_broker:3.3.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:enterprise_telephony_fraud_monitor:4.2:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:enterprise_telephony_fraud_monitor:4.3:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:enterprise_telephony_fraud_monitor:4.4:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:enterprise_telephony_fraud_monitor:3.4:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:communications_operations_monitor:4.4:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:communications_fraud_monitor:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "4.4", "versionStartIncluding": "3.4", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:communications_control_plane_monitor:4.2:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:communications_control_plane_monitor:4.3:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:communications_control_plane_monitor:4.4:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:communications_control_plane_monitor:3.4:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:goldengate:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "21.4.0.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:blockchain_platform:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "21.1.2", "vulnerable": true } ], "operator": "OR" } ] } ], "sources": [ { "db": "NVD", "id": "CVE-2021-23017" } ] }, "credits": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/credits#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Red Hat", "sources": 
[ { "db": "PACKETSTORM", "id": "162986" }, { "db": "PACKETSTORM", "id": "162992" }, { "db": "PACKETSTORM", "id": "163013" }, { "db": "PACKETSTORM", "id": "164523" }, { "db": "PACKETSTORM", "id": "164562" }, { "db": "PACKETSTORM", "id": "164948" } ], "trust": 0.6 }, "cve": "CVE-2021-23017", "cvss": { "@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2" }, "cvssV3": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, "severity": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": "https://www.variotdbs.pl/ref/cvss/severity" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "cvssV2": [ { "acInsufInfo": false, "accessComplexity": "MEDIUM", "accessVector": "NETWORK", "authentication": "NONE", "author": "NVD", "availabilityImpact": "PARTIAL", "baseScore": 6.8, "confidentialityImpact": "PARTIAL", "exploitabilityScore": 8.6, "impactScore": 6.4, "integrityImpact": "PARTIAL", "obtainAllPrivilege": false, "obtainOtherPrivilege": false, "obtainUserPrivilege": false, "severity": "MEDIUM", "trust": 1.0, "userInteractionRequired": false, "vectorString": "AV:N/AC:M/Au:N/C:P/I:P/A:P", "version": "2.0" }, { "acInsufInfo": null, "accessComplexity": "Medium", "accessVector": "Network", "authentication": "None", "author": "NVD", "availabilityImpact": "Partial", "baseScore": 6.8, "confidentialityImpact": "Partial", "exploitabilityScore": null, "id": "CVE-2021-23017", "impactScore": null, "integrityImpact": "Partial", "obtainAllPrivilege": null, "obtainOtherPrivilege": null, "obtainUserPrivilege": null, "severity": "Medium", "trust": 0.8, "userInteractionRequired": null, "vectorString": 
"AV:N/AC:M/Au:N/C:P/I:P/A:P", "version": "2.0" }, { "accessComplexity": "MEDIUM", "accessVector": "NETWORK", "authentication": "NONE", "author": "VULHUB", "availabilityImpact": "PARTIAL", "baseScore": 6.8, "confidentialityImpact": "PARTIAL", "exploitabilityScore": 8.6, "id": "VHN-381503", "impactScore": 6.4, "integrityImpact": "PARTIAL", "severity": "MEDIUM", "trust": 0.1, "vectorString": "AV:N/AC:M/AU:N/C:P/I:P/A:P", "version": "2.0" } ], "cvssV3": [ { "attackComplexity": "HIGH", "attackVector": "NETWORK", "author": "NVD", "availabilityImpact": "LOW", "baseScore": 7.7, "baseSeverity": "HIGH", "confidentialityImpact": "HIGH", "exploitabilityScore": 2.2, "impactScore": 5.5, "integrityImpact": "HIGH", "privilegesRequired": "NONE", "scope": "UNCHANGED", "trust": 1.0, "userInteraction": "NONE", "vectorString": "CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:H/I:H/A:L", "version": "3.1" }, { "attackComplexity": "Low", "attackVector": "Network", "author": "NVD", "availabilityImpact": "Low", "baseScore": 9.4, "baseSeverity": "Critical", "confidentialityImpact": "High", "exploitabilityScore": null, "id": "CVE-2021-23017", "impactScore": null, "integrityImpact": "High", "privilegesRequired": "None", "scope": "Unchanged", "trust": 0.8, "userInteraction": "None", "vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:L", "version": "3.0" } ], "severity": [ { "author": "NVD", "id": "CVE-2021-23017", "trust": 1.0, "value": "HIGH" }, { "author": "NVD", "id": "CVE-2021-23017", "trust": 0.8, "value": "Critical" }, { "author": "CNNVD", "id": "CNNVD-202105-1581", "trust": 0.6, "value": "HIGH" }, { "author": "VULHUB", "id": "VHN-381503", "trust": 0.1, "value": "MEDIUM" } ] } ], "sources": [ { "db": "VULHUB", "id": "VHN-381503" }, { "db": "JVNDB", "id": "JVNDB-2021-007625" }, { "db": "CNNVD", "id": "CNNVD-202105-1581" }, { "db": "NVD", "id": "CVE-2021-23017" } ] }, "description": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", 
"@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "A security issue in nginx resolver was identified, which might allow an attacker who is able to forge UDP packets from the DNS server to cause 1-byte memory overwrite, resulting in worker process crash or potential other impact. nginx The resolver contains a vulnerability in determining boundary conditions.Information is obtained, information is tampered with, and service is disrupted (DoS) It may be put into a state. Nginx is a lightweight web server/reverse proxy server and email (IMAP/POP3) proxy server of Nginx Company in the United States. Affected products and versions are as follows: nginx: 0.6.18, 0.6.19 0.6.20, 0.6.21, 0.6.22 0.6.23, 0.6.24, 0.6.25, 0.6.26, 0.6.27, 0.6. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\nGentoo Linux Security Advisory GLSA 202105-38\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n https://security.gentoo.org/\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\n Severity: High\n Title: nginx: Remote code execution\n Date: May 26, 2021\n Bugs: #792087\n ID: 202105-38\n\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\nSynopsis\n========\n\nA vulnerability in nginx could lead to remote code execution. \n\nAffected packages\n=================\n\n -------------------------------------------------------------------\n Package / Vulnerable / Unaffected\n -------------------------------------------------------------------\n 1 www-servers/nginx \u003c 1.21.0 \u003e= 1.20.1:0\n \u003e= 1.21.0:mainline\n\nDescription\n===========\n\nIt was discovered that nginx did not properly handle DNS responses when\n\"resolver\" directive is used. \n\nWorkaround\n==========\n\nThere is no known workaround at this time. 
\n\nResolution\n==========\n\nAll nginx users should upgrade to the latest version:\n\n # emerge --sync\n # emerge --ask --oneshot --verbose \"\u003e=www-servers/nginx-1.20.1\"\n\nAll nginx mainline users should upgrade to the latest version:\n\n # emerge --sync\n # emerge --ask --oneshot -v \"\u003e=www-servers/nginx-1.21.0:mainline\"\n\nReferences\n==========\n\n[ 1 ] CVE-2021-23017\n https://nvd.nist.gov/vuln/detail/CVE-2021-23017\n\nAvailability\n============\n\nThis GLSA and any updates to it are available for viewing at\nthe Gentoo Security Website:\n\n https://security.gentoo.org/glsa/202105-38\n\nConcerns?\n=========\n\nSecurity is a primary focus of Gentoo Linux and ensuring the\nconfidentiality and security of our users\u0027 machines is of utmost\nimportance to us. Any security concerns should be addressed to\nsecurity@gentoo.org or alternatively, you may file a bug at\nhttps://bugs.gentoo.org. \n\nLicense\n=======\n\nCopyright 2021 Gentoo Foundation, Inc; referenced text\nbelongs to its owner(s). \n\nThe contents of this document are licensed under the\nCreative Commons - Attribution / Share Alike license. \n\nhttps://creativecommons.org/licenses/by-sa/2.5\n. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n=====================================================================\n Red Hat Security Advisory\n\nSynopsis: Important: rh-nginx118-nginx security update\nAdvisory ID: RHSA-2021:2258-01\nProduct: Red Hat Software Collections\nAdvisory URL: https://access.redhat.com/errata/RHSA-2021:2258\nIssue date: 2021-06-07\nCVE Names: CVE-2021-23017 \n=====================================================================\n\n1. Summary:\n\nAn update for rh-nginx118-nginx is now available for Red Hat Software\nCollections. \n\nRed Hat Product Security has rated this update as having a security impact\nof Important. 
A Common Vulnerability Scoring System (CVSS) base score,\nwhich gives a detailed severity rating, is available for each vulnerability\nfrom the CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRed Hat Software Collections for Red Hat Enterprise Linux Server (v. 7) - ppc64le, s390x, x86_64\nRed Hat Software Collections for Red Hat Enterprise Linux Server EUS (v. 7.7) - ppc64le, s390x, x86_64\nRed Hat Software Collections for Red Hat Enterprise Linux Workstation (v. 7) - x86_64\n\n3. Description:\n\nnginx is a web and proxy server supporting HTTP and other protocols, with a\nfocus on high concurrency, performance, and low memory usage. \n\nSecurity Fix(es):\n\n* nginx: Off-by-one in ngx_resolver_copy() when labels are followed by a\npointer to a root domain name (CVE-2021-23017)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\n4. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\nThe rh-nginx118-nginx service must be restarted for this update to take\neffect. \n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n1963121 - CVE-2021-23017 nginx: Off-by-one in ngx_resolver_copy() when labels are followed by a pointer to a root domain name\n\n6. Package List:\n\nRed Hat Software Collections for Red Hat Enterprise Linux Server (v. 
7):\n\nSource:\nrh-nginx118-nginx-1.18.0-3.el7.src.rpm\n\nppc64le:\nrh-nginx118-nginx-1.18.0-3.el7.ppc64le.rpm\nrh-nginx118-nginx-debuginfo-1.18.0-3.el7.ppc64le.rpm\nrh-nginx118-nginx-mod-http-image-filter-1.18.0-3.el7.ppc64le.rpm\nrh-nginx118-nginx-mod-http-perl-1.18.0-3.el7.ppc64le.rpm\nrh-nginx118-nginx-mod-http-xslt-filter-1.18.0-3.el7.ppc64le.rpm\nrh-nginx118-nginx-mod-mail-1.18.0-3.el7.ppc64le.rpm\nrh-nginx118-nginx-mod-stream-1.18.0-3.el7.ppc64le.rpm\n\ns390x:\nrh-nginx118-nginx-1.18.0-3.el7.s390x.rpm\nrh-nginx118-nginx-debuginfo-1.18.0-3.el7.s390x.rpm\nrh-nginx118-nginx-mod-http-image-filter-1.18.0-3.el7.s390x.rpm\nrh-nginx118-nginx-mod-http-perl-1.18.0-3.el7.s390x.rpm\nrh-nginx118-nginx-mod-http-xslt-filter-1.18.0-3.el7.s390x.rpm\nrh-nginx118-nginx-mod-mail-1.18.0-3.el7.s390x.rpm\nrh-nginx118-nginx-mod-stream-1.18.0-3.el7.s390x.rpm\n\nx86_64:\nrh-nginx118-nginx-1.18.0-3.el7.x86_64.rpm\nrh-nginx118-nginx-debuginfo-1.18.0-3.el7.x86_64.rpm\nrh-nginx118-nginx-mod-http-image-filter-1.18.0-3.el7.x86_64.rpm\nrh-nginx118-nginx-mod-http-perl-1.18.0-3.el7.x86_64.rpm\nrh-nginx118-nginx-mod-http-xslt-filter-1.18.0-3.el7.x86_64.rpm\nrh-nginx118-nginx-mod-mail-1.18.0-3.el7.x86_64.rpm\nrh-nginx118-nginx-mod-stream-1.18.0-3.el7.x86_64.rpm\n\nRed Hat Software Collections for Red Hat Enterprise Linux Server EUS (v. 
7.7):\n\nSource:\nrh-nginx118-nginx-1.18.0-3.el7.src.rpm\n\nppc64le:\nrh-nginx118-nginx-1.18.0-3.el7.ppc64le.rpm\nrh-nginx118-nginx-debuginfo-1.18.0-3.el7.ppc64le.rpm\nrh-nginx118-nginx-mod-http-image-filter-1.18.0-3.el7.ppc64le.rpm\nrh-nginx118-nginx-mod-http-perl-1.18.0-3.el7.ppc64le.rpm\nrh-nginx118-nginx-mod-http-xslt-filter-1.18.0-3.el7.ppc64le.rpm\nrh-nginx118-nginx-mod-mail-1.18.0-3.el7.ppc64le.rpm\nrh-nginx118-nginx-mod-stream-1.18.0-3.el7.ppc64le.rpm\n\ns390x:\nrh-nginx118-nginx-1.18.0-3.el7.s390x.rpm\nrh-nginx118-nginx-debuginfo-1.18.0-3.el7.s390x.rpm\nrh-nginx118-nginx-mod-http-image-filter-1.18.0-3.el7.s390x.rpm\nrh-nginx118-nginx-mod-http-perl-1.18.0-3.el7.s390x.rpm\nrh-nginx118-nginx-mod-http-xslt-filter-1.18.0-3.el7.s390x.rpm\nrh-nginx118-nginx-mod-mail-1.18.0-3.el7.s390x.rpm\nrh-nginx118-nginx-mod-stream-1.18.0-3.el7.s390x.rpm\n\nx86_64:\nrh-nginx118-nginx-1.18.0-3.el7.x86_64.rpm\nrh-nginx118-nginx-debuginfo-1.18.0-3.el7.x86_64.rpm\nrh-nginx118-nginx-mod-http-image-filter-1.18.0-3.el7.x86_64.rpm\nrh-nginx118-nginx-mod-http-perl-1.18.0-3.el7.x86_64.rpm\nrh-nginx118-nginx-mod-http-xslt-filter-1.18.0-3.el7.x86_64.rpm\nrh-nginx118-nginx-mod-mail-1.18.0-3.el7.x86_64.rpm\nrh-nginx118-nginx-mod-stream-1.18.0-3.el7.x86_64.rpm\n\nRed Hat Software Collections for Red Hat Enterprise Linux Workstation (v. 7):\n\nSource:\nrh-nginx118-nginx-1.18.0-3.el7.src.rpm\n\nx86_64:\nrh-nginx118-nginx-1.18.0-3.el7.x86_64.rpm\nrh-nginx118-nginx-debuginfo-1.18.0-3.el7.x86_64.rpm\nrh-nginx118-nginx-mod-http-image-filter-1.18.0-3.el7.x86_64.rpm\nrh-nginx118-nginx-mod-http-perl-1.18.0-3.el7.x86_64.rpm\nrh-nginx118-nginx-mod-http-xslt-filter-1.18.0-3.el7.x86_64.rpm\nrh-nginx118-nginx-mod-mail-1.18.0-3.el7.x86_64.rpm\nrh-nginx118-nginx-mod-stream-1.18.0-3.el7.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. 
References:\n\nhttps://access.redhat.com/security/cve/CVE-2021-23017\nhttps://access.redhat.com/security/updates/classification/#important\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2021 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYL3MN9zjgjWX9erEAQjMKA//YaSwGZ/DmvwILuYqYNbIGKvcatycisD6\nRrS+A7J9QqTEKqC8mZQ/OvfS5TukanQ/jzTNfRuGuO7booPRlhqVxZVLrSgQNaVD\n1FV/cQqXhS/FwmrM8wnWdLpsFUXRXsTqiOoUnymzZbSh1VDjB8VZZLjWc7Wnueqy\nclLQnYtwMT5axzXRJl/JiXs+yJBmzv5igSFMoGXEKDx6DTrWGtZENE1rpumPAjb6\nY3aDzDZYu4Bl9V1FCUOtksWnmP0Xl/kvSL31aUkyYbyi9i0DpQswmdBH4Bl5ulw2\nskkKH69ixA1wu+2D128toUy2ZR/MjX88sH3bCahhY1G4ajp0Vl3/p/kM7VVR5uRi\nKTVNK8FueNIvp8fMp8oYKhZW9It5DzlMa0Q1QcFfsutgf+932up8qJ9o0mQ9AbVK\nfBYb8F0hYMDI8udy+npgUM0WwwiBQAqzcHmbnYIRt6IK5f/dUOqucugiJFsbyTl2\npIcJty1208RbrDE/ctTcKuyVbHH9pPOHql5rFlJLAh7yYdHWh6J1QhmdA1RNm51h\nMEgO5OOVUjrV2mye1c8o7EkTzvuhu2RWQ7WyQc6C81ZlcUcjfNnq73vJ9HBNtNT5\nhsiDG/UdvY/thIQmqzSFI3z8ALFKPRUcJ91v/fZNRpBTxcsluN91X7XrHIQDNOs9\njVrMgzAG88I=\n=av6T\n-----END PGP SIGNATURE-----\n\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. 8.2) - aarch64, noarch, ppc64le, s390x, x86_64\n\n3. Summary:\n\nRed Hat Advanced Cluster Management for Kubernetes 2.3.3 General\nAvailability release images, which fix bugs, provide security fixes, and\nupdate container images. Description:\n\nRed Hat Advanced Cluster Management for Kubernetes 2.3.3 images\n\nRed Hat Advanced Cluster Management for Kubernetes provides the\ncapabilities to address common challenges that administrators and site\nreliability engineers face as they work across a range of public and\nprivate cloud environments. Clusters and applications are all visible and\nmanaged from a single console\u2014with\nsecurity policy built in. 
See the following Release Notes documentation, which will be\nupdated shortly for this release, for additional details about this\nrelease:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_mana\ngement_for_kubernetes/2.3/html/release_notes/\n\nNote: Because Red Hat OpenShift Container Platform version 4.9 was just\nreleased, the functional testing of the compatibility between Red Hat\nAdvanced Cluster Management 2.3.3 and Red Hat OpenShift Container Platform\nversion 4.9 is still in progress. \n\nSecurity fixes: \n\n* nginx: Off-by-one in ngx_resolver_copy() when labels are followed by a\npointer to a root domain name (CVE-2021-23017)\n\n* redis: Lua scripts can overflow the heap-based Lua stack (CVE-2021-32626)\n\n* redis: Integer overflow issue with Streams (CVE-2021-32627)\n\n* redis: Integer overflow bug in the ziplist data structure\n(CVE-2021-32628)\n\n* redis: Integer overflow issue with intsets (CVE-2021-32687)\n\n* redis: Integer overflow issue with strings (CVE-2021-41099)\n\n* redis: Out of bounds read in lua debugger protocol parser\n(CVE-2021-32672)\n\n* redis: Denial of service via Redis Standard Protocol (RESP) request\n(CVE-2021-32675)\n\n* helm: information disclosure vulnerability (CVE-2021-32690)\n\nBug fixes:\n\n* KUBE-API: Support move agent to different cluster in the same namespace\n(BZ# 1977358)\n\n* Add columns to the Agent CRD list (BZ# 1977398)\n\n* ClusterDeployment controller watches all Secrets from all namespaces (BZ#\n1986081)\n\n* RHACM 2.3.3 images (BZ# 1999365)\n\n* Workaround for Network Manager not supporting nmconnections priority (BZ#\n2001294)\n\n* create cluster page empty in Safary Browser (BZ# 2002280)\n\n* Compliance state doesn\u0027t get updated after fixing the issue causing\ninitially the policy not being able to update the managed object (BZ#\n2002667)\n\n* Overview page displays VMware based managed cluster as other (BZ#\n2004188)\n\n3. 
Solution:\n\nBefore applying this update, make sure all previously released errata\nrelevant to your system have been applied. Bugs fixed (https://bugzilla.redhat.com/):\n\n1963121 - CVE-2021-23017 nginx: Off-by-one in ngx_resolver_copy() when labels are followed by a pointer to a root domain name\n1977358 - [4.8.0] KUBE-API: Support move agent to different cluster in the same namespace\n1977398 - [4.8.0] [master] Add columns to the Agent CRD list\n1978144 - CVE-2021-32690 helm: information disclosure vulnerability\n1986081 - [4.8.0] ClusterDeployment controller watches all Secrets from all namespaces\n1999365 - RHACM 2.3.3 images\n2001294 - [4.8.0] Workaround for Network Manager not supporting nmconnections priority\n2002280 - create cluster page empty in Safary Browser\n2002667 - Compliance state doesn\u0027t get updated after fixing the issue causing initially the policy not being able to update the managed object\n2004188 - Overview page displays VMware based managed cluster as other\n2010991 - CVE-2021-32687 redis: Integer overflow issue with intsets\n2011000 - CVE-2021-32675 redis: Denial of service via Redis Standard Protocol (RESP) request\n2011001 - CVE-2021-32672 redis: Out of bounds read in lua debugger protocol parser\n2011004 - CVE-2021-32628 redis: Integer overflow bug in the ziplist data structure\n2011010 - CVE-2021-32627 redis: Integer overflow issue with Streams\n2011017 - CVE-2021-32626 redis: Lua scripts can overflow the heap-based Lua stack\n2011020 - CVE-2021-41099 redis: Integer overflow issue with strings\n\n5. 
-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA512\n\n- -------------------------------------------------------------------------\nDebian Security Advisory DSA-4921-1 security@debian.org\nhttps://www.debian.org/security/ Moritz Muehlenhoff\nMay 28, 2021 https://www.debian.org/security/faq\n- -------------------------------------------------------------------------\n\nPackage : nginx\nCVE ID : CVE-2021-23017\nDebian Bug : 989095\n\nLuis Merino, Markus Vervier and Eric Sesterhenn discovered an off-by-one\nin Nginx, a high-performance web and reverse proxy server, which could\nresult in denial of service and potentially the execution of arbitrary\ncode. \n\nFor the stable distribution (buster), this problem has been fixed in\nversion 1.14.2-2+deb10u4. \n\nWe recommend that you upgrade your nginx packages. \n\nFor the detailed security status of nginx please refer to\nits security tracker page at:\nhttps://security-tracker.debian.org/tracker/nginx\n\nFurther information about Debian Security Advisories, how to apply\nthese updates to your system and frequently asked questions can be\nfound at: https://www.debian.org/security/\n\nMailing list: debian-security-announce@lists.debian.org\n-----BEGIN PGP 
SIGNATURE-----\n\niQIzBAEBCgAdFiEEtuYvPRKsOElcDakFEMKTtsN8TjYFAmCw3CMACgkQEMKTtsN8\nTjYgGA/9FlgRs/kkpLxlnM5ymYDA+WAmc44BiKLajlItjdw54nifSb7WJQifSjND\nwWz6/1Qc2R84mgovtdReIcgEQDDmm8iCpslsWt4r/iWT5m/tlZhkLhBN1AyhW8VS\nu1Goqt+hFkz0fZMzv1vf9MwRkUma8SjxNcQdjs4fHzyZAfo+QoV4Ir0I7DIMKkZk\nN5teHqHIMaDasRZFQSpL8NuZC+JN5EEpB764mV+O/YqVrWeE9QUAnL0FgjcQUnmh\niQ5AmMJRtAnQXXu9Qkpx9WtDemHLFHC9JsWEKE3TJAegA4ZhfOo5MZcjesn6EoqV\n8rXAAupWzO5/wTxMeulqz4HTLeYPs+jTSONHwT1oG9kgY59jVcNVjg2DcGbG3/17\nueZdGTy70pgLSL6IKILNBgqHh0AqSyyuZmocy07DNGay+HzwuFSBq4RCCved+EPW\n4CMtIPSujjPzQqvg15gFNKt/7T2ZfKFR7zVfm0itI6KTjyAhmFhaNYNwWEifX68u\n8akhscDlUxmDQG1kbQ2u/IZqWeKG/TpbqaaTrTl6U+Gl1hmRO06Y4AckW1Xwm2r4\nCFSO9uHeNte5Vsw+4NlDntzRZOOfJ6qW8x0XF5Vgn7R9mfYPlvIWJgptsgrrijnf\nlhCPw5JMpzQ4afWlRUvQiaf0lOIySKIfv05wHPtIablmgjIGny4=\n=qxQw\n-----END PGP SIGNATURE-----\n", "sources": [ { "db": "NVD", "id": "CVE-2021-23017" }, { "db": "JVNDB", "id": "JVNDB-2021-007625" }, { "db": "VULHUB", "id": "VHN-381503" }, { "db": "PACKETSTORM", "id": "162835" }, { "db": "PACKETSTORM", "id": "162986" }, { "db": "PACKETSTORM", "id": "162992" }, { "db": "PACKETSTORM", "id": "163013" }, { "db": "PACKETSTORM", "id": "164523" }, { "db": "PACKETSTORM", "id": "164562" }, { "db": "PACKETSTORM", "id": "164948" }, { "db": "PACKETSTORM", "id": "169062" } ], "trust": 2.43 }, "exploit_availability": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/exploit_availability#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "reference": "https://www.scap.org.cn/vuln/vhn-381503", "trust": 0.1, "type": "unknown" } ], "sources": [ { "db": "VULHUB", "id": "VHN-381503" } ] }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "NVD", "id": 
"CVE-2021-23017", "trust": 4.1 }, { "db": "PACKETSTORM", "id": "167720", "trust": 1.7 }, { "db": "PACKETSTORM", "id": "163013", "trust": 0.8 }, { "db": "PACKETSTORM", "id": "162835", "trust": 0.8 }, { "db": "PACKETSTORM", "id": "164948", "trust": 0.8 }, { "db": "JVNDB", "id": "JVNDB-2021-007625", "trust": 0.8 }, { "db": "PACKETSTORM", "id": "162830", "trust": 0.7 }, { "db": "PACKETSTORM", "id": "165782", "trust": 0.7 }, { "db": "PACKETSTORM", "id": "162851", "trust": 0.7 }, { "db": "PACKETSTORM", "id": "163003", "trust": 0.7 }, { "db": "EXPLOIT-DB", "id": "50973", "trust": 0.7 }, { "db": "PACKETSTORM", "id": "164523", "trust": 0.7 }, { "db": "PACKETSTORM", "id": "164562", "trust": 0.7 }, { "db": "PACKETSTORM", "id": "164282", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2021052543", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2022041931", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2021092811", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2022071833", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2021052901", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2021060212", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2021100722", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2022012302", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2021052713", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2021060719", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2021060948", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2021061520", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2022012747", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2021062209", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.3878", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.1850", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.3485", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.1936", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.1802", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.3211", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.3430", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.1861", 
"trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.1817", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.2027", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.1973", "trust": 0.6 }, { "db": "CXSECURITY", "id": "WLB-2022070032", "trust": 0.6 }, { "db": "CNNVD", "id": "CNNVD-202105-1581", "trust": 0.6 }, { "db": "PACKETSTORM", "id": "162992", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "162986", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "162819", "trust": 0.1 }, { "db": "VULHUB", "id": "VHN-381503", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "169062", "trust": 0.1 } ], "sources": [ { "db": "VULHUB", "id": "VHN-381503" }, { "db": "JVNDB", "id": "JVNDB-2021-007625" }, { "db": "PACKETSTORM", "id": "162835" }, { "db": "PACKETSTORM", "id": "162986" }, { "db": "PACKETSTORM", "id": "162992" }, { "db": "PACKETSTORM", "id": "163013" }, { "db": "PACKETSTORM", "id": "164523" }, { "db": "PACKETSTORM", "id": "164562" }, { "db": "PACKETSTORM", "id": "164948" }, { "db": "PACKETSTORM", "id": "169062" }, { "db": "CNNVD", "id": "CNNVD-202105-1581" }, { "db": "NVD", "id": "CVE-2021-23017" } ] }, "id": "VAR-202106-1921", "iot": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": true, "sources": [ { "db": "VULHUB", "id": "VHN-381503" } ], "trust": 0.01 }, "last_update_date": "2024-07-23T20:25:59.461000Z", "patch": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/patch#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "title": "Oracle\u00a0Critical\u00a0Patch\u00a0Update\u00a0Advisory\u00a0-\u00a0October\u00a02021 Oracle\u00a0Critical\u00a0Patch\u00a0Update", "trust": 0.8, "url": "https://support.f5.com/csp/article/k12331123" }, { "title": "Nginx Security vulnerabilities", "trust": 0.6, "url": 
"http://www.cnnvd.org.cn/web/xxk/bdxqbyid.tag?id=154683" } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2021-007625" }, { "db": "CNNVD", "id": "CNNVD-202105-1581" } ] }, "problemtype_data": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "problemtype": "CWE-193", "trust": 1.1 }, { "problemtype": "Boundary condition judgment (CWE-193) [NVD Evaluation ]", "trust": 0.8 } ], "sources": [ { "db": "VULHUB", "id": "VHN-381503" }, { "db": "JVNDB", "id": "JVNDB-2021-007625" }, { "db": "NVD", "id": "CVE-2021-23017" } ] }, "references": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/references#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 2.3, "url": "http://packetstormsecurity.com/files/167720/nginx-1.20.0-denial-of-service.html" }, { "trust": 2.3, "url": "https://www.oracle.com/security-alerts/cpuapr2022.html" }, { "trust": 2.3, "url": "https://www.oracle.com/security-alerts/cpuoct2021.html" }, { "trust": 1.7, "url": "https://security.netapp.com/advisory/ntap-20210708-0006/" }, { "trust": 1.7, "url": "http://mailman.nginx.org/pipermail/nginx-announce/2021/000300.html" }, { "trust": 1.7, "url": "https://www.oracle.com/security-alerts/cpujan2022.html" }, { "trust": 1.6, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23017" }, { "trust": 1.0, "url": "https://lists.apache.org/thread.html/r37e6b2165f7c910d8e15fd54f4697857619ad2625f56583802004009%40%3cnotifications.apisix.apache.org%3e" }, { "trust": 1.0, "url": "https://lists.apache.org/thread.html/r4d4966221ca399ce948ef34884652265729d7d9ef8179c78d7f17e7f%40%3cnotifications.apisix.apache.org%3e" }, { "trust": 1.0, "url": 
"https://lists.apache.org/thread.html/r6fc5c57b38e93e36213e9a18c8a4e5dbd5ced1c7e57f08a1735975ba%40%3cnotifications.apisix.apache.org%3e" }, { "trust": 1.0, "url": "https://lists.apache.org/thread.html/rf232eecd47fdc44520192810560303073cefd684b321f85e311bad31%40%3cnotifications.apisix.apache.org%3e" }, { "trust": 1.0, "url": "https://lists.apache.org/thread.html/rf318aeeb4d7a3a312734780b47de83cefb7e6995da0b2cae5c28675c%40%3cnotifications.apisix.apache.org%3e" }, { "trust": 1.0, "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/7sfvyhc7oxteo4smbwxdvk6e5imeymee/" }, { "trust": 1.0, "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/gnkop2jr5l7kciztjrzdcupjtuonmc5i/" }, { "trust": 1.0, "url": "https://support.f5.com/csp/article/k12331123%2c" }, { "trust": 0.7, "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/7sfvyhc7oxteo4smbwxdvk6e5imeymee/" }, { "trust": 0.7, "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/gnkop2jr5l7kciztjrzdcupjtuonmc5i/" }, { "trust": 0.7, "url": "https://lists.apache.org/thread.html/r6fc5c57b38e93e36213e9a18c8a4e5dbd5ced1c7e57f08a1735975ba@%3cnotifications.apisix.apache.org%3e" }, { "trust": 0.7, "url": "https://lists.apache.org/thread.html/r37e6b2165f7c910d8e15fd54f4697857619ad2625f56583802004009@%3cnotifications.apisix.apache.org%3e" }, { "trust": 0.7, "url": "https://lists.apache.org/thread.html/r4d4966221ca399ce948ef34884652265729d7d9ef8179c78d7f17e7f@%3cnotifications.apisix.apache.org%3e" }, { "trust": 0.7, "url": "https://lists.apache.org/thread.html/rf318aeeb4d7a3a312734780b47de83cefb7e6995da0b2cae5c28675c@%3cnotifications.apisix.apache.org%3e" }, { "trust": 0.7, "url": "https://lists.apache.org/thread.html/rf232eecd47fdc44520192810560303073cefd684b321f85e311bad31@%3cnotifications.apisix.apache.org%3e" }, { "trust": 0.6, 
"url": "https://access.redhat.com/security/cve/cve-2021-23017" }, { "trust": 0.6, "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce" }, { "trust": 0.6, "url": "https://access.redhat.com/security/updates/classification/#important" }, { "trust": 0.6, "url": "https://bugzilla.redhat.com/):" }, { "trust": 0.6, "url": "https://access.redhat.com/security/team/contact/" }, { "trust": 0.6, "url": "https://support.f5.com/csp/article/k12331123" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2021052713" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/163003/red-hat-security-advisory-2021-2278-01.html" }, { "trust": 0.6, "url": "https://vigilance.fr/vulnerability/nginx-buffer-overflow-via-dns-server-response-35526" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/164282/red-hat-security-advisory-2021-3653-01.html" }, { "trust": 0.6, "url": "https://www.ibm.com/support/pages/node/6492205" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022041931" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.1802" }, { "trust": 0.6, "url": "https://www.ibm.com/blogs/psirt/security-bulletin-multiple-vulnerabilities-in-f5-nginx-controller-affect-ibm-cloud-pak-for-automation/" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/162851/ubuntu-security-notice-usn-4967-2.html" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2021060719" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.3211" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/164523/red-hat-security-advisory-2021-3873-01.html" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2021100722" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.3430" }, { "trust": 0.6, "url": "https://cxsecurity.com/issue/wlb-2022070032" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.2027" }, { "trust": 0.6, 
"url": "https://www.auscert.org.au/bulletins/esb-2021.1850" }, { "trust": 0.6, "url": "https://www.ibm.com/support/pages/node/6483657" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/162835/gentoo-linux-security-advisory-202105-38.html" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2021052901" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022071833" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2021052543" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2021060948" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.1817" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.3878" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2021062209" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.1973" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.1936" }, { "trust": 0.6, "url": "https://www.exploit-db.com/exploits/50973" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/164948/red-hat-security-advisory-2021-4618-01.html" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022012302" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/163013/red-hat-security-advisory-2021-2290-01.html" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2021092811" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.3485" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2021061520" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.1861" }, { "trust": 0.6, "url": "https://www.ibm.com/support/pages/node/6525030" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022012747" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/162830/nginx-1.20.0-dns-resolver-off-by-one-heap-write.html" }, { "trust": 0.6, "url": 
"https://packetstormsecurity.com/files/164562/red-hat-security-advisory-2021-3925-01.html" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/165782/red-hat-security-advisory-2022-0323-02.html" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2021060212" }, { "trust": 0.3, "url": "https://access.redhat.com/articles/11258" }, { "trust": 0.3, "url": "https://access.redhat.com/security/team/key/" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22922" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-36222" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-32626" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-32687" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-37750" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-32626" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-32675" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22923" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22924" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-22922" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-22924" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-32675" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-41099" }, { "trust": 0.3, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_mana" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-32627" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-32687" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-32628" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-32672" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-36222" }, { "trust": 0.3, "url": 
"https://access.redhat.com/security/cve/cve-2021-32627" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-32672" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-22923" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-32628" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-41099" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3653" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-37750" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3653" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3656" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3656" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-32690" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-32690" }, { "trust": 0.1, "url": "https://support.f5.com/csp/article/k12331123," }, { "trust": 0.1, "url": "https://bugs.gentoo.org." 
}, { "trust": 0.1, "url": "https://security.gentoo.org/" }, { "trust": 0.1, "url": "https://creativecommons.org/licenses/by-sa/2.5" }, { "trust": 0.1, "url": "https://security.gentoo.org/glsa/202105-38" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:2258" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:2259" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:2290" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23434" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:3873" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23434" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21670" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-25648" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22543" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-21670" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-25741" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23840" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-22543" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-25648" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-21671" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2016-4658" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2016-4658" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:3925" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-37576" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21671" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23841" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-25741" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2021-23841" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23840" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-37576" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22947" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-33929" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-0512" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-32803" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3733" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-33930" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3711" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:4618" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3733" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-36385" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3712" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-32804" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-33623" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33938" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33929" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36385" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-32804" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-22947" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-0512" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22946" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3711" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3749" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2021-33930" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33623" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-22946" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-33928" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3712" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-33938" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-32803" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33928" }, { "trust": 0.1, "url": "https://www.debian.org/security/" }, { "trust": 0.1, "url": "https://www.debian.org/security/faq" }, { "trust": 0.1, "url": "https://security-tracker.debian.org/tracker/nginx" } ], "sources": [ { "db": "VULHUB", "id": "VHN-381503" }, { "db": "JVNDB", "id": "JVNDB-2021-007625" }, { "db": "PACKETSTORM", "id": "162835" }, { "db": "PACKETSTORM", "id": "162986" }, { "db": "PACKETSTORM", "id": "162992" }, { "db": "PACKETSTORM", "id": "163013" }, { "db": "PACKETSTORM", "id": "164523" }, { "db": "PACKETSTORM", "id": "164562" }, { "db": "PACKETSTORM", "id": "164948" }, { "db": "PACKETSTORM", "id": "169062" }, { "db": "CNNVD", "id": "CNNVD-202105-1581" }, { "db": "NVD", "id": "CVE-2021-23017" } ] }, "sources": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "VULHUB", "id": "VHN-381503" }, { "db": "JVNDB", "id": "JVNDB-2021-007625" }, { "db": "PACKETSTORM", "id": "162835" }, { "db": "PACKETSTORM", "id": "162986" }, { "db": "PACKETSTORM", "id": "162992" }, { "db": "PACKETSTORM", "id": "163013" }, { "db": "PACKETSTORM", "id": "164523" }, { "db": "PACKETSTORM", "id": "164562" }, { "db": "PACKETSTORM", "id": "164948" }, { "db": "PACKETSTORM", "id": "169062" }, { "db": "CNNVD", "id": "CNNVD-202105-1581" }, { "db": "NVD", "id": "CVE-2021-23017" } ] }, "sources_release_date": { "@context": 
{ "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2021-06-01T00:00:00", "db": "VULHUB", "id": "VHN-381503" }, { "date": "2022-02-18T00:00:00", "db": "JVNDB", "id": "JVNDB-2021-007625" }, { "date": "2021-05-27T13:28:42", "db": "PACKETSTORM", "id": "162835" }, { "date": "2021-06-07T13:45:14", "db": "PACKETSTORM", "id": "162986" }, { "date": "2021-06-07T13:50:43", "db": "PACKETSTORM", "id": "162992" }, { "date": "2021-06-08T14:13:55", "db": "PACKETSTORM", "id": "163013" }, { "date": "2021-10-15T15:06:44", "db": "PACKETSTORM", "id": "164523" }, { "date": "2021-10-20T15:45:47", "db": "PACKETSTORM", "id": "164562" }, { "date": "2021-11-12T17:01:04", "db": "PACKETSTORM", "id": "164948" }, { "date": "2021-05-28T19:12:00", "db": "PACKETSTORM", "id": "169062" }, { "date": "2021-05-25T00:00:00", "db": "CNNVD", "id": "CNNVD-202105-1581" }, { "date": "2021-06-01T13:15:07.853000", "db": "NVD", "id": "CVE-2021-23017" } ] }, "sources_update_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2022-09-14T00:00:00", "db": "VULHUB", "id": "VHN-381503" }, { "date": "2022-02-18T01:21:00", "db": "JVNDB", "id": "JVNDB-2021-007625" }, { "date": "2022-09-15T00:00:00", "db": "CNNVD", "id": "CNNVD-202105-1581" }, { "date": "2023-11-07T03:30:29.880000", "db": "NVD", "id": "CVE-2021-23017" } ] }, "threat_type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/threat_type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "remote", "sources": [ { "db": "PACKETSTORM", "id": "162835" }, { "db": "CNNVD", "id": "CNNVD-202105-1581" } ], "trust": 0.7 }, "title": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/title#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "nginx\u00a0 Vulnerability 
in determining boundary conditions in resolver", "sources": [ { "db": "JVNDB", "id": "JVNDB-2021-007625" } ], "trust": 0.8 }, "type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "other", "sources": [ { "db": "CNNVD", "id": "CNNVD-202105-1581" } ], "trust": 0.6 } }
var-202103-1464
Vulnerability from variot
An OpenSSL TLS server may crash if sent a maliciously crafted renegotiation ClientHello message from a client. If a TLSv1.2 renegotiation ClientHello omits the signature_algorithms extension (where it was present in the initial ClientHello), but includes a signature_algorithms_cert extension then a NULL pointer dereference will result, leading to a crash and a denial of service attack. A server is only vulnerable if it has TLSv1.2 and renegotiation enabled (which is the default configuration). OpenSSL TLS clients are not impacted by this issue. All OpenSSL 1.1.1 versions are affected by this issue. Users of these versions should upgrade to OpenSSL 1.1.1k. OpenSSL 1.0.2 is not impacted by this issue. Fixed in OpenSSL 1.1.1k (Affected 1.1.1-1.1.1j). OpenSSL is an open source general encryption library of the OpenSSL team that can implement the Secure Sockets Layer (SSLv2/v3) and Transport Layer Security (TLSv1) protocols. The product supports a variety of encryption algorithms, including symmetric ciphers, hash algorithms, secure hash algorithms, etc. On March 25, 2021, the OpenSSL Project released a security advisory, OpenSSL Security Advisory [25 March 2021], that disclosed two vulnerabilities. Exploitation of these vulnerabilities could allow a malicious user to use a valid non-certificate authority (CA) certificate to act as a CA and sign a certificate for an arbitrary organization, user or device, or to cause a denial of service (DoS) condition. This advisory is available at the following link: tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-openssl-2021-GHY28dJd. In addition to persistent storage, Red Hat OpenShift Container Storage provisions a multicloud data management service with an S3 compatible API.
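The affected range stated above (1.1.1 through 1.1.1j, fixed in 1.1.1k) can be checked against a locally linked OpenSSL. The following is an illustrative sketch, not part of the advisory; it assumes the layout of Python's ssl.OPENSSL_VERSION_INFO tuple, where the fourth field counts patch letters (0 for plain 1.1.1, 10 for 1.1.1j, 11 for 1.1.1k).

```python
import ssl

def affected_by_cve_2021_3449(version_info=ssl.OPENSSL_VERSION_INFO):
    """Return True if the linked OpenSSL falls in the affected
    1.1.1 through 1.1.1j range (CVE-2021-3449, fixed in 1.1.1k)."""
    major, minor, fix, patch, _status = version_info
    if (major, minor, fix) != (1, 1, 1):
        return False  # only the 1.1.1 branch is affected; 1.0.2 and 3.x are not
    # patch 0 == plain 1.1.1, 1 == 'a', ..., 10 == 'j'; 11 == 'k' carries the fix
    return patch <= 10

print(ssl.OPENSSL_VERSION, "affected:", affected_by_cve_2021_3449())
```

Mitigation short of upgrading would be disabling TLSv1.2 renegotiation on the server, since the crash requires both to be enabled.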
Security Fix(es):
- NooBaa: noobaa-operator leaking RPC AuthToken into log files (CVE-2021-3528)
For more details about the security issue(s), including the impact, a CVSS score, and other related information, refer to the CVE page(s) listed in the References section.
Bug Fix(es):
-
Currently, a newly restored PVC cannot be mounted if some of the OpenShift Container Platform nodes are running on a version of Red Hat Enterprise Linux which is less than 8.2, and the snapshot from which the PVC was restored is deleted. Workaround: Do not delete the snapshot from which the PVC was restored until the restored PVC is deleted. (BZ#1962483)
-
Previously, the default backingstore was not created on AWS S3 when OpenShift Container Storage was deployed, due to incorrect identification of AWS S3. With this update, the default backingstore gets created when OpenShift Container Storage is deployed on AWS S3. (BZ#1927307)
-
Previously, log messages were printed to the endpoint pod log even if the debug option was not set. With this update, the log messages are printed to the endpoint pod log only when the debug option is set. (BZ#1938106)
-
Previously, the PVCs could not be provisioned as the rook-ceph-mds did not register the pod IP on the monitor servers, and hence every mount on the filesystem timed out, resulting in CephFS volume provisioning failure. With this update, an argument --public-addr=podIP is added to the MDS pod when the host network is not enabled, and hence the CephFS volume provisioning does not fail. (BZ#1949558) -
Previously, OpenShift Container Storage 4.2 clusters were not updated with the correct cache value, and hence MDSs in standby-replay might report an oversized cache, as rook did not apply the mds_cache_memory_limit argument during upgrades. With this update, the mds_cache_memory_limit argument is applied during upgrades and the mds daemon operates normally. (BZ#1951348) -
Previously, the coredumps were not generated in the correct location as rook was setting the config option log_file to an empty string since logging happened on stdout and not on the files, and hence Ceph read the value of the log_file to build the dump path. With this update, rook does not set the log_file and keeps Ceph's internal default, and hence the coredumps are generated in the correct location and are accessible under /var/log/ceph/. (BZ#1938049) -
Previously, Ceph became inaccessible, as the mons lose quorum if a mon pod was drained while another mon was failing over. With this update, voluntary mon drains are prevented while a mon is failing over, and hence Ceph does not become inaccessible. (BZ#1946573)
-
Previously, the mon quorum was at risk, as the operator could erroneously remove the new mon if the operator was restarted during a mon failover. With this update, the operator completes the same mon failover after the operator is restarted, and hence the mon quorum is more reliable in the node drains and mon failover scenarios. Solution:
Before applying this update, make sure all previously released errata relevant to your system have been applied.
For details on how to apply this update, refer to:
https://access.redhat.com/articles/11258
- Bugs fixed (https://bugzilla.redhat.com/):
1938106 - [GSS][RFE]Reduce debug level for logs of Nooba Endpoint pod 1950915 - XSS Vulnerability with Noobaa version 5.5.0-3bacc6b 1951348 - [GSS][CephFS] health warning "MDS cache is too large (3GB/1GB); 0 inodes in use by clients, 0 stray files" for the standby-replay 1951600 - [4.6.z][Clone of BZ #1936545] setuid and setgid file bits are not retained after a OCS CephFS CSI restore 1955601 - CVE-2021-3528 NooBaa: noobaa-operator leaking RPC AuthToken into log files 1957189 - [Rebase] Use RHCS4.2z1 container image with OCS 4.6.5 [may require doc update for external mode min supported RHCS version] 1959980 - When a node is being drained, increase the mon failover timeout to prevent unnecessary mon failover 1959983 - [GSS][mon] rook-operator scales mons to 4 after healthCheck timeout 1962483 - [RHEL7][RBD][4.6.z clone] FailedMount error when using restored PVC on app pod
Bug Fix(es):
-
WMCO patch pub-key-hash annotation to Linux node (BZ#1945248)
-
LoadBalancer Service type with invalid external loadbalancer IP breaks the datapath (BZ#1952917)
-
Telemetry info not completely available to identify windows nodes (BZ#1955319)
-
WMCO incorrectly shows node as ready after a failed configuration (BZ#1956412)
-
kube-proxy service terminated unexpectedly after recreated LB service (BZ#1963263)
-
Solution:
For Windows Machine Config Operator upgrades, see the following documentation:
https://docs.openshift.com/container-platform/4.7/windows_containers/windows-node-upgrades.html
- Bugs fixed (https://bugzilla.redhat.com/):
1945248 - WMCO patch pub-key-hash annotation to Linux node 1946538 - CVE-2021-25736 kubernetes: LoadBalancer Service type don't create a HNS policy for empty or invalid external loadbalancer IP, what could lead to MITM 1952917 - LoadBalancer Service type with invalid external loadbalancer IP breaks the datapath 1955319 - Telemetry info not completely available to identify windows nodes 1956412 - WMCO incorrectly shows node as ready after a failed configuration 1963263 - kube-proxy service terminated unexpectedly after recreated LB service
- Description:
Red Hat Advanced Cluster Management for Kubernetes 2.3.0 images
Red Hat Advanced Cluster Management for Kubernetes provides the capabilities to address common challenges that administrators and site reliability engineers face as they work across a range of public and private cloud environments. Clusters and applications are all visible and managed from a single console—with security policy built in. See the following Release Notes documentation, which will be updated shortly for this release, for additional details about this release:
https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/
Security:
-
fastify-reply-from: crafted URL allows prefix scape of the proxied backend service (CVE-2021-21321)
-
fastify-http-proxy: crafted URL allows prefix scape of the proxied backend service (CVE-2021-21322)
-
nodejs-netmask: improper input validation of octal input data (CVE-2021-28918)
-
redis: Integer overflow via STRALGO LCS command (CVE-2021-29477)
-
redis: Integer overflow via COPY command for large intsets (CVE-2021-29478)
-
nodejs-glob-parent: Regular expression denial of service (CVE-2020-28469)
-
nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions (CVE-2020-28500)
-
golang.org/x/text: Panic in language.ParseAcceptLanguage while parsing -u- extension (CVE-2020-28851)
-
golang.org/x/text: Panic in language.ParseAcceptLanguage while processing bcp47 tag (CVE-2020-28852)
-
nodejs-ansi_up: XSS due to insufficient URL sanitization (CVE-2021-3377)
-
oras: zip-slip vulnerability via oras-pull (CVE-2021-21272)
-
redis: integer overflow when configurable limit for maximum supported bulk input size is too big on 32-bit platforms (CVE-2021-21309)
-
nodejs-lodash: command injection via template (CVE-2021-23337)
-
nodejs-hosted-git-info: Regular Expression denial of service via shortcutMatch in fromUrl() (CVE-2021-23362)
-
browserslist: parsing of invalid queries could result in Regular Expression Denial of Service (ReDoS) (CVE-2021-23364)
-
nodejs-postcss: Regular expression denial of service during source map parsing (CVE-2021-23368)
-
nodejs-handlebars: Remote code execution when compiling untrusted compile templates with strict:true option (CVE-2021-23369)
-
nodejs-postcss: ReDoS via getAnnotationURL() and loadAnnotation() in lib/previous-map.js (CVE-2021-23382)
-
nodejs-handlebars: Remote code execution when compiling untrusted compile templates with compat:true option (CVE-2021-23383)
-
openssl: integer overflow in CipherUpdate (CVE-2021-23840)
-
openssl: NULL pointer dereference in X509_issuer_and_serial_hash() (CVE-2021-23841)
-
nodejs-ua-parser-js: ReDoS via malicious User-Agent header (CVE-2021-27292)
-
grafana: snapshot feature allow an unauthenticated remote attacker to trigger a DoS via a remote API call (CVE-2021-27358)
-
nodejs-is-svg: ReDoS via malicious string (CVE-2021-28092)
-
nodejs-netmask: incorrectly parses an IP address that has octal integer with invalid character (CVE-2021-29418)
-
ulikunitz/xz: Infinite loop in readUvarint allows for denial of service (CVE-2021-29482)
-
normalize-url: ReDoS for data URLs (CVE-2021-33502)
-
nodejs-trim-newlines: ReDoS in .end() method (CVE-2021-33623)
-
nodejs-path-parse: ReDoS via splitDeviceRe, splitTailRe and splitPathRe (CVE-2021-23343)
-
html-parse-stringify: Regular Expression DoS (CVE-2021-23346)
-
openssl: incorrect SSLv2 rollback protection (CVE-2021-23839)
For more details about the security issues, including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE pages listed in the References section.
Bugs:
-
RFE Make the source code for the endpoint-metrics-operator public (BZ# 1913444)
-
cluster became offline after apiserver health check (BZ# 1942589)
-
Bugs fixed (https://bugzilla.redhat.com/):
1913333 - CVE-2020-28851 golang.org/x/text: Panic in language.ParseAcceptLanguage while parsing -u- extension 1913338 - CVE-2020-28852 golang.org/x/text: Panic in language.ParseAcceptLanguage while processing bcp47 tag 1913444 - RFE Make the source code for the endpoint-metrics-operator public 1921286 - CVE-2021-21272 oras: zip-slip vulnerability via oras-pull 1927520 - RHACM 2.3.0 images 1928937 - CVE-2021-23337 nodejs-lodash: command injection via template 1928954 - CVE-2020-28500 nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions 1930294 - CVE-2021-23839 openssl: incorrect SSLv2 rollback protection 1930310 - CVE-2021-23841 openssl: NULL pointer dereference in X509_issuer_and_serial_hash() 1930324 - CVE-2021-23840 openssl: integer overflow in CipherUpdate 1932634 - CVE-2021-21309 redis: integer overflow when configurable limit for maximum supported bulk input size is too big on 32-bit platforms 1936427 - CVE-2021-3377 nodejs-ansi_up: XSS due to insufficient URL sanitization 1939103 - CVE-2021-28092 nodejs-is-svg: ReDoS via malicious string 1940196 - View Resource YAML option shows 404 error when reviewing a Subscription for an application 1940613 - CVE-2021-27292 nodejs-ua-parser-js: ReDoS via malicious User-Agent header 1941024 - CVE-2021-27358 grafana: snapshot feature allow an unauthenticated remote attacker to trigger a DoS via a remote API call 1941675 - CVE-2021-23346 html-parse-stringify: Regular Expression DoS 1942178 - CVE-2021-21321 fastify-reply-from: crafted URL allows prefix scape of the proxied backend service 1942182 - CVE-2021-21322 fastify-http-proxy: crafted URL allows prefix scape of the proxied backend service 1942589 - cluster became offline after apiserver health check 1943208 - CVE-2021-23362 nodejs-hosted-git-info: Regular Expression denial of service via shortcutMatch in fromUrl() 1944822 - CVE-2021-29418 nodejs-netmask: incorrectly parses an IP address that has octal integer with invalid character 1944827 - CVE-2021-28918 
nodejs-netmask: improper input validation of octal input data 1945459 - CVE-2020-28469 nodejs-glob-parent: Regular expression denial of service 1948761 - CVE-2021-23369 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with strict:true option 1948763 - CVE-2021-23368 nodejs-postcss: Regular expression denial of service during source map parsing 1954150 - CVE-2021-23382 nodejs-postcss: ReDoS via getAnnotationURL() and loadAnnotation() in lib/previous-map.js 1954368 - CVE-2021-29482 ulikunitz/xz: Infinite loop in readUvarint allows for denial of service 1955619 - CVE-2021-23364 browserslist: parsing of invalid queries could result in Regular Expression Denial of Service (ReDoS) 1956688 - CVE-2021-23383 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with compat:true option 1956818 - CVE-2021-23343 nodejs-path-parse: ReDoS via splitDeviceRe, splitTailRe and splitPathRe 1957410 - CVE-2021-29477 redis: Integer overflow via STRALGO LCS command 1957414 - CVE-2021-29478 redis: Integer overflow via COPY command for large intsets 1964461 - CVE-2021-33502 normalize-url: ReDoS for data URLs 1966615 - CVE-2021-33623 nodejs-trim-newlines: ReDoS in .end() method 1968122 - clusterdeployment fails because hiveadmission sc does not have correct permissions 1972703 - Subctl fails to join cluster, since it cannot auto-generate a valid cluster id 1983131 - Defragmenting an etcd member doesn't reduce the DB size (7.5GB) on a setup with ~1000 spoke clusters
- It is comprised of the Apache Tomcat Servlet container, JBoss HTTP Connector (mod_cluster), the PicketLink Vault extension for Apache Tomcat, and the Tomcat Native library. -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
===================================================================== Red Hat Security Advisory
Synopsis: Moderate: OpenShift Container Platform 4.10.3 security update Advisory ID: RHSA-2022:0056-01 Product: Red Hat OpenShift Enterprise Advisory URL: https://access.redhat.com/errata/RHSA-2022:0056 Issue date: 2022-03-10 CVE Names: CVE-2014-3577 CVE-2016-10228 CVE-2017-14502 CVE-2018-20843 CVE-2018-1000858 CVE-2019-8625 CVE-2019-8710 CVE-2019-8720 CVE-2019-8743 CVE-2019-8764 CVE-2019-8766 CVE-2019-8769 CVE-2019-8771 CVE-2019-8782 CVE-2019-8783 CVE-2019-8808 CVE-2019-8811 CVE-2019-8812 CVE-2019-8813 CVE-2019-8814 CVE-2019-8815 CVE-2019-8816 CVE-2019-8819 CVE-2019-8820 CVE-2019-8823 CVE-2019-8835 CVE-2019-8844 CVE-2019-8846 CVE-2019-9169 CVE-2019-13050 CVE-2019-13627 CVE-2019-14889 CVE-2019-15903 CVE-2019-19906 CVE-2019-20454 CVE-2019-20807 CVE-2019-25013 CVE-2020-1730 CVE-2020-3862 CVE-2020-3864 CVE-2020-3865 CVE-2020-3867 CVE-2020-3868 CVE-2020-3885 CVE-2020-3894 CVE-2020-3895 CVE-2020-3897 CVE-2020-3899 CVE-2020-3900 CVE-2020-3901 CVE-2020-3902 CVE-2020-8927 CVE-2020-9802 CVE-2020-9803 CVE-2020-9805 CVE-2020-9806 CVE-2020-9807 CVE-2020-9843 CVE-2020-9850 CVE-2020-9862 CVE-2020-9893 CVE-2020-9894 CVE-2020-9895 CVE-2020-9915 CVE-2020-9925 CVE-2020-9952 CVE-2020-10018 CVE-2020-11793 CVE-2020-13434 CVE-2020-14391 CVE-2020-15358 CVE-2020-15503 CVE-2020-25660 CVE-2020-25677 CVE-2020-27618 CVE-2020-27781 CVE-2020-29361 CVE-2020-29362 CVE-2020-29363 CVE-2021-3121 CVE-2021-3326 CVE-2021-3449 CVE-2021-3450 CVE-2021-3516 CVE-2021-3517 CVE-2021-3518 CVE-2021-3520 CVE-2021-3521 CVE-2021-3537 CVE-2021-3541 CVE-2021-3733 CVE-2021-3749 CVE-2021-20305 CVE-2021-21684 CVE-2021-22946 CVE-2021-22947 CVE-2021-25215 CVE-2021-27218 CVE-2021-30666 CVE-2021-30761 CVE-2021-30762 CVE-2021-33928 CVE-2021-33929 CVE-2021-33930 CVE-2021-33938 CVE-2021-36222 CVE-2021-37750 CVE-2021-39226 CVE-2021-41190 CVE-2021-43813 CVE-2021-44716 CVE-2021-44717 CVE-2022-0532 CVE-2022-21673 CVE-2022-24407 =====================================================================
- Summary:
Red Hat OpenShift Container Platform release 4.10.3 is now available with updates to packages and images that fix several bugs and add enhancements.
Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Description:
Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.
This advisory contains the container images for Red Hat OpenShift Container Platform 4.10.3. See the following advisory for the RPM packages for this release:
https://access.redhat.com/errata/RHSA-2022:0055
Space precludes documenting all of the container images in this advisory. See the following Release Notes documentation, which will be updated shortly for this release, for details about these changes:
https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html
Security Fix(es):
- gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation (CVE-2021-3121)
- grafana: Snapshot authentication bypass (CVE-2021-39226)
- golang: net/http: limit growth of header canonicalization cache (CVE-2021-44716)
- nodejs-axios: Regular expression denial of service in trim function (CVE-2021-3749)
- golang: syscall: don't close fd 0 on ForkExec error (CVE-2021-44717)
- grafana: Forward OAuth Identity Token can allow users to access some data sources (CVE-2022-21673)
- grafana: directory traversal vulnerability (CVE-2021-43813)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
You may download the oc tool and use it to inspect release image metadata as follows:
(For x86_64 architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.3-x86_64
The image digest is sha256:7ffe4cd612be27e355a640e5eec5cd8f923c1400d969fd590f806cffdaabcc56
(For s390x architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.3-s390x
The image digest is sha256:4cf21a9399da1ce8427246f251ae5dedacfc8c746d2345f9cfe039ed9eda3e69
(For ppc64le architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.3-ppc64le
The image digest is sha256:4ee571da1edf59dfee4473aa4604aba63c224bf8e6bcf57d048305babbbde93c
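The per-architecture digests quoted above can be pinned and validated before use, so a pull always targets the exact release image rather than a floating tag. This is an illustrative sketch, not part of the advisory; the pullspec helper and its validation are assumptions, while the repository path and digests come from the advisory text.

```python
import re

# Digests quoted from the advisory above (x86_64, s390x, ppc64le).
EXPECTED = {
    "x86_64": "sha256:7ffe4cd612be27e355a640e5eec5cd8f923c1400d969fd590f806cffdaabcc56",
    "s390x": "sha256:4cf21a9399da1ce8427246f251ae5dedacfc8c746d2345f9cfe039ed9eda3e69",
    "ppc64le": "sha256:4ee571da1edf59dfee4473aa4604aba63c224bf8e6bcf57d048305babbbde93c",
}

# A well-formed digest reference is "sha256:" plus 64 lowercase hex digits.
DIGEST_RE = re.compile(r"^sha256:[0-9a-f]{64}$")

def pullspec(arch, repo="quay.io/openshift-release-dev/ocp-release"):
    """Return an immutable by-digest pullspec for the given architecture,
    validating the digest shape first."""
    digest = EXPECTED[arch]
    if not DIGEST_RE.match(digest):
        raise ValueError(f"malformed digest for {arch}: {digest}")
    return f"{repo}@{digest}"

for arch in EXPECTED:
    print(arch, pullspec(arch))
```

The resulting by-digest reference can then be passed to oc adm release info in place of the tagged form shown above.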
All OpenShift Container Platform 4.10 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift Console or the CLI oc command. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html
- Solution:
For OpenShift Container Platform 4.10 see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this asynchronous errata update:
https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html
Details on how to access this content are available at https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html
- Bugs fixed (https://bugzilla.redhat.com/):
1808240 - Always return metrics value for pods under the user's namespace
1815189 - feature flagged UI does not always become available after operator installation
1825034 - e2e: Mock CSI tests fail on IBM ROKS clusters
1826225 - edge terminated h2 (gRPC) connections need a haproxy template change to work correctly
1860774 - csr for vSphere egress nodes were not approved automatically during cert renewal
1878106 - token inactivity timeout is not shortened after oauthclient/oauth config values are lowered
1878925 - 'oc adm upgrade --to ...' rejects versions which occur only in history, while the cluster-version operator supports history fallback
1880738 - origin e2e test deletes original worker
1882983 - oVirt csi driver should refuse to provision RWX and ROX PV
1886450 - Keepalived router id check not documented for RHV/VMware IPI
1889488 - The metrics endpoint for the Scheduler is not protected by RBAC
1894431 - Router pods fail to boot if the SSL certificate applied is missing an empty line at the bottom
1896474 - Path based routing is broken for some combinations
1897431 - CIDR support for additional network attachment with the bridge CNI plug-in
1903408 - NodePort externalTrafficPolicy does not work for ovn-kubernetes
1907433 - Excessive logging in image operator
1909906 - The router fails with PANIC error when stats port already in use
1911173 - [MSTR-998] Many charts' legend names show {{}} instead of words
1914053 - pods assigned with Multus whereabouts IP get stuck in ContainerCreating state after node rebooting.
1916169 - a reboot while MCO is applying changes leaves the node in undesirable state and MCP looks fine (UPDATED=true)
1917893 - [ovirt] install fails: due to terraform error "Cannot attach Virtual Disk: Disk is locked" on vm resource
1921627 - GCP UPI installation failed due to exceeding gcp limitation of instance group name
1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation
1926522 - oc adm catalog does not clean temporary files
1927478 - Default CatalogSources deployed by marketplace do not have toleration for tainted nodes.
1928141 - kube-storage-version-migrator constantly reporting type "Upgradeable" status Unknown
1928285 - [LSO][OCS][arbiter] OCP Console shows no results while in fact underlying setup of LSO localvolumeset and it's storageclass is not yet finished, confusing users
1931594 - [sig-cli] oc --request-timeout works as expected fails frequently on s390x
1933847 - Prometheus goes unavailable (both instances down) during 4.8 upgrade
1937085 - RHV UPI inventory playbook missing guarantee_memory
1937196 - [aws ebs csi driver] events for block volume expansion may cause confusion
1938236 - vsphere-problem-detector does not support overriding log levels via storage CR
1939401 - missed labels for CMO/openshift-state-metric/telemeter-client/thanos-querier pods
1939435 - Setting an IPv6 address in noProxy field causes error in openshift installer
1939552 - [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
1942913 - ThanosSidecarUnhealthy isn't resilient to WAL replays.
1943363 - [ovn] CNO should gracefully terminate ovn-northd
1945274 - ostree-finalize-staged.service failed while upgrading a rhcos node to 4.6.17
1948080 - authentication should not set Available=False APIServices_Error with 503s
1949262 - Prometheus Statefulsets should have 2 replicas and hard affinity set
1949672 - [GCP] Update 4.8 UPI template to match ignition version: 3.2.0
1950827 - [LSO] localvolumediscoveryresult name is not friendly to customer
1952576 - csv_succeeded metric not present in olm-operator for all successful CSVs
1953264 - "remote error: tls: bad certificate" logs in prometheus-operator container
1955300 - Machine config operator reports unavailable for 23m during upgrade
1955489 - Alertmanager Statefulsets should have 2 replicas and hard affinity set
1955490 - Thanos ruler Statefulsets should have 2 replicas and hard affinity set
1955544 - [IPI][OSP] densed master-only installation with 0 workers fails due to missing worker security group on masters
1956496 - Needs SR-IOV Docs Upstream
1956739 - Permission for authorized_keys for core user changes from core user to root when changed the pull secret
1956776 - [vSphere] Installer should do pre-check to ensure user-provided network name is valid
1956964 - upload a boot-source to OpenShift virtualization using the console
1957547 - [RFE]VM name is not auto filled in dev console
1958349 - ovn-controller doesn't release the memory after cluster-density run
1959352 - [scale] failed to get pod annotation: timed out waiting for annotations
1960378 - icsp allows mirroring of registry root - install-config imageContentSources does not
1960674 - Broken test: [sig-imageregistry][Serial][Suite:openshift/registry/serial] Image signature workflow can push a signed image to openshift registry and verify it [Suite:openshift/conformance/serial]
1961317 - storage ClusterOperator does not declare ClusterRoleBindings in relatedObjects
1961391 - String updates
1961509 - DHCP daemon pod should have CPU and memory requests set but not limits
1962066 - Edit machine/machineset specs not working
1962206 - openshift-multus/dhcp-daemon set should meet platform requirements for update strategy that have maxUnavailable update of 10 or 33 percent
1963053 - oc whoami --show-console should show the web console URL, not the server api URL
1964112 - route SimpleAllocationPlugin: host name validation errors: spec.host: Invalid value: ... must be no more than 63 characters
1964327 - Support containers with name:tag@digest
1964789 - Send keys and disconnect does not work for VNC console
1965368 - ClusterQuotaAdmission received non-meta object - message constantly reported in OpenShift Container Platform 4.7
1966445 - Unmasking a service doesn't work if it masked using MCO
1966477 - Use GA version in KAS/OAS/OauthAS to avoid: "audit.k8s.io/v1beta1" is deprecated and will be removed in a future release, use "audit.k8s.io/v1" instead
1966521 - kube-proxy's userspace implementation consumes excessive CPU
1968364 - [Azure] when using ssh type ed25519 bootstrap fails to come up
1970021 - nmstate does not persist its configuration due to overlay systemd-connections-merged mount
1970218 - MCO writes incorrect file contents if compression field is specified
1970331 - [sig-auth][Feature:SCC][Early] should not have pod creation failures during install [Suite:openshift/conformance/parallel]
1970805 - Cannot create build when docker image url contains dir structure
1972033 - [azure] PV region node affinity is failure-domain.beta.kubernetes.io instead of topology.kubernetes.io
1972827 - image registry does not remain available during upgrade
1972962 - Should set the minimum value for the --max-icsp-size flag of oc adm catalog mirror
1973447 - ovn-dbchecker peak memory spikes to ~500MiB during cluster-density run
1975826 - ovn-kubernetes host directed traffic cannot be offloaded as CT zone 64000 is not established
1976301 - [ci] e2e-azure-upi is permafailing
1976399 - During the upgrade from OpenShift 4.5 to OpenShift 4.6 the election timers for the OVN north and south databases did not change.
1976674 - CCO didn't set Upgradeable to False when cco mode is configured to Manual on azure platform
1976894 - Unidling a StatefulSet does not work as expected
1977319 - [Hive] Remove stale cruft installed by CVO in earlier releases
1977414 - Build Config timed out waiting for condition 400: Bad Request
1977929 - [RFE] Display Network Attachment Definitions from openshift-multus namespace during OCS deployment via UI using Multus
1978528 - systemd-coredump started and failed intermittently for unknown reasons
1978581 - machine-config-operator: remove runlevel from mco namespace
1979562 - Cluster operators: don't show messages when neither progressing, degraded or unavailable
1979962 - AWS SDN Network Stress tests have not passed in 4.9 release-openshift-origin-installer-e2e-aws-sdn-network-stress-4.9
1979966 - OCP builds always fail when run on RHEL7 nodes
1981396 - Deleting pool inside pool page the pool stays in Ready phase in the heading
1981549 - Machine-config daemon does not recover from broken Proxy configuration
1981867 - [sig-cli] oc explain should contain proper fields description for special types [Suite:openshift/conformance/parallel]
1981941 - Terraform upgrade required in openshift-installer to resolve multiple issues
1982063 - 'Control Plane' is not translated in Simplified Chinese language in Home->Overview page
1982498 - Default registry credential path should be adjusted to use containers/auth.json for oc commands
1982662 - Workloads - DaemonSets - Add storage: i18n misses
1982726 - kube-apiserver audit logs show a lot of 404 errors for DELETE "/secrets/encryption-config" on single node clusters
1983758 - upgrades are failing on disruptive tests
1983964 - Need Device plugin configuration for the NIC "needVhostNet" & "isRdma"
1984592 - global pull secret not working in OCP4.7.4+ for additional private registries
1985073 - new-in-4.8 ExtremelyHighIndividualControlPlaneCPU fires on some GCP update jobs
1985486 - Cluster Proxy not used during installation on OSP with Kuryr
1985724 - VM Details Page missing translations
1985838 - [OVN] CNO exportNetworkFlows does not clear collectors when deleted
1985933 - Downstream image registry recommendation
1985965 - oVirt CSI driver does not report volume stats
1986216 - [scale] SNO: Slow Pod recovery due to "timed out waiting for OVS port binding"
1986237 - "MachineNotYetDeleted" in Pending state , alert not fired
1986239 - crictl create fails with "PID namespace requested, but sandbox infra container invalid"
1986302 - console continues to fetch prometheus alert and silences for normal user
1986314 - Current MTV installation for KubeVirt import flow creates unusable Forklift UI
1986338 - error creating list of resources in Import YAML
1986502 - yaml multi file dnd duplicates previous dragged files
1986819 - fix string typos for hot-plug disks
1987044 - [OCPV48] Shutoff VM is being shown as "Starting" in WebUI when using spec.runStrategy Manual/RerunOnFailure
1987136 - Declare operatorframework.io/arch. labels for all operators
1987257 - Go-http-client user-agent being used for oc adm mirror requests
1987263 - fsSpaceFillingUpWarningThreshold not aligned to Kubernetes Garbage Collection Threshold
1987445 - MetalLB integration: All gateway routers in the cluster answer ARP requests for LoadBalancer services IP
1988406 - SSH key dropped when selecting "Customize virtual machine" in UI
1988440 - Network operator changes ovnkube-config too early causing ovnkube-master pods to crashloop during cluster upgrade
1988483 - Azure drop ICMP need to frag FRAG when using OVN: openshift-apiserver becomes False after env runs some time due to communication between one master to pods on another master fails with "Unable to connect to the server"
1988879 - Virtual media based deployment fails on Dell servers due to pending Lifecycle Controller jobs
1989438 - expected replicas is wrong
1989502 - Developer Catalog is disappearing after short time
1989843 - 'More' and 'Show Less' functions are not translated on several page
1990014 - oc debug does not work for Windows pods
1990190 - e2e testing failed with basic manifest: reason/ExternalProvisioning waiting for a volume to be created
1990193 - 'more' and 'Show Less' is not being translated on Home -> Search page
1990255 - Partial or all of the Nodes/StorageClasses don't appear back on UI after text is removed from search bar
1990489 - etcdHighNumberOfFailedGRPCRequests fires only on metal env in CI
1990506 - Missing udev rules in initramfs for /dev/disk/by-id/scsi- symlinks
1990556 - get-resources.sh doesn't honor the no_proxy settings even with no_proxy var
1990625 - Ironic agent registers with SLAAC address with privacy-stable
1990635 - CVO does not recognize the channel change if desired version and channel changed at the same time
1991067 - github.com can not be resolved inside pods where cluster is running on openstack.
1991573 - Enable typescript strictNullCheck on network-policies files
1991641 - Baremetal Cluster Operator still Available After Delete Provisioning
1991770 - The logLevel and operatorLogLevel values do not work with Cloud Credential Operator
1991819 - Misspelled word "ocurred" in oc inspect cmd
1991942 - Alignment and spacing fixes
1992414 - Two rootdisks show on storage step if 'This is a CD-ROM boot source' is checked
1992453 - The configMap failed to save on VM environment tab
1992466 - The button 'Save' and 'Reload' are not translated on vm environment tab
1992475 - The button 'Open console in New Window' and 'Disconnect' are not translated on vm console tab
1992509 - Could not customize boot source due to source PVC not found
1992541 - all the alert rules' annotations "summary" and "description" should comply with the OpenShift alerting guidelines
1992580 - storageProfile should stay with the same value by check/uncheck the apply button
1992592 - list-type missing in oauth.config.openshift.io for identityProviders breaking Server Side Apply
1992777 - [IBMCLOUD] Default "ibm_iam_authorization_policy" is not working as expected in all scenarios
1993364 - cluster destruction fails to remove router in BYON with Kuryr as primary network (even after BZ 1940159 got fixed)
1993376 - periodic-ci-openshift-release-master-ci-4.6-upgrade-from-stable-4.5-e2e-azure-upgrade is permfailing
1994094 - Some hardcodes are detected at the code level in OpenShift console components
1994142 - Missing required cloud config fields for IBM Cloud
1994733 - MetalLB: IP address is not assigned to service if there is duplicate IP address in two address pools
1995021 - resolv.conf and corefile sync slows down/stops after keepalived container restart
1995335 - [SCALE] ovnkube CNI: remove ovs flows check
1995493 - Add Secret to workload button and Actions button are not aligned on secret details page
1995531 - Create RDO-based Ironic image to be promoted to OKD
1995545 - Project drop-down amalgamates inside main screen while creating storage system for odf-operator
1995887 - [OVN]After reboot egress node, lr-policy-list was not correct, some duplicate records or missed internal IPs
1995924 - CMO should report Upgradeable: false when HA workload is incorrectly spread
1996023 - kubernetes.io/hostname values are larger than filter when create localvolumeset from webconsole
1996108 - Allow backwards compatibility of shared gateway mode to inject host-based routes into OVN
1996624 - 100% of the cco-metrics/cco-metrics targets in openshift-cloud-credential-operator namespace are down
1996630 - Fail to delete the first Authorized SSH Key input box on Advanced page
1996647 - Provide more useful degraded message in auth operator on DNS errors
1996736 - Large number of 501 lr-policies in INCI2 env
1996886 - timedout waiting for flows during pod creation and ovn-controller pegged on worker nodes
1996916 - Special Resource Operator(SRO) - Fail to deploy simple-kmod on GCP
1996928 - Enable default operator indexes on ARM
1997028 - prometheus-operator update removes env var support for thanos-sidecar
1997059 - Failed to create cluster in AWS us-east-1 region due to a local zone is used
1997226 - Ingresscontroller reconcilations failing but not shown in operator logs or status of ingresscontroller.
1997245 - "Subscription already exists in openshift-storage namespace" error message is seen while installing odf-operator via UI
1997269 - Have to refresh console to install kube-descheduler
1997478 - Storage operator is not available after reboot cluster instances
1997509 - flake: [sig-cli] oc builds new-build [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
1997967 - storageClass is not reserved from default wizard to customize wizard
1998035 - openstack IPI CI: custom var-lib-etcd.mount (ramdisk) unit is racing due to incomplete After/Before order
1998038 - [e2e][automation] add tests for UI for VM disk hot-plug
1998087 - Fix CephHealthCheck wrapping contents and add data-tests for HealthItem and SecondaryStatus
1998174 - Create storageclass gp3-csi after install ocp cluster on aws
1998183 - "r: Bad Gateway" info is improper
1998235 - Firefox warning: Cookie “csrf-token” will be soon rejected
1998377 - Filesystem table head is not full displayed in disk tab
1998378 - Virtual Machine is 'Not available' in Home -> Overview -> Cluster inventory
1998519 - Add fstype when create localvolumeset instance on web console
1998951 - Keepalived conf ingress peer on in Dual stack cluster contains both IPv6 and IPv4 addresses
1999076 - [UI] Page Not Found error when clicking on Storage link provided in Overview page
1999079 - creating pods before sriovnetworknodepolicy sync up succeed will cause node unschedulable
1999091 - Console update toast notification can appear multiple times
1999133 - removing and recreating static pod manifest leaves pod in error state
1999246 - .indexignore is not ingore when oc command load dc configuration
1999250 - ArgoCD in GitOps operator can't manage namespaces
1999255 - ovnkube-node always crashes out the first time it starts
1999261 - ovnkube-node log spam (and security token leak?)
1999309 - While installing odf-operator via UI, web console update pop-up navigates to OperatorHub -> Operator Installation page
1999314 - console-operator is slow to mark Degraded as False once console starts working
1999425 - kube-apiserver with "[SHOULD NOT HAPPEN] failed to update managedFields" err="failed to convert new object (machine.openshift.io/v1beta1, Kind=MachineHealthCheck)
1999556 - "master" pool should be updated before the CVO reports available at the new version occurred
1999578 - AWS EFS CSI tests are constantly failing
1999603 - Memory Manager allows Guaranteed QoS Pod with hugepages requested is exactly equal to the left over Hugepages
1999619 - cloudinit is malformatted if a user sets a password during VM creation flow
1999621 - Empty ssh_authorized_keys entry is added to VM's cloudinit if created from a customize flow
1999649 - MetalLB: Only one type of IP address can be assigned to service on dual stack cluster from a address pool that have both IPv4 and IPv6 addresses defined
1999668 - openshift-install destroy cluster panic's when given invalid credentials to cloud provider (Azure Stack Hub)
1999734 - IBM Cloud CIS Instance CRN missing in infrastructure manifest/resource
1999771 - revert "force cert rotation every couple days for development" in 4.10
1999784 - CVE-2021-3749 nodejs-axios: Regular expression denial of service in trim function
1999796 - Openshift Console Helm tab is not showing helm releases in a namespace when there is high number of deployments in the same namespace.
1999836 - Admin web-console inconsistent status summary of sparse ClusterOperator conditions
1999903 - Click "This is a CD-ROM boot source" ticking "Use template size PVC" on pvc upload form
1999983 - No way to clear upload error from template boot source
2000081 - [IPI baremetal] The metal3 pod failed to restart when switching from Disabled to Managed provisioning without specifying provisioningInterface parameter
2000096 - Git URL is not re-validated on edit build-config form reload
2000216 - Successfully imported ImageStreams are not resolved in DeploymentConfig
2000236 - Confusing usage message from dynkeepalived CLI
2000268 - Mark cluster unupgradable if vcenter, esxi versions or HW versions are unsupported
2000430 - bump cluster-api-provider-ovirt version in installer
2000450 - 4.10: Enable static PV multi-az test
2000490 - All critical alerts shipped by CMO should have links to a runbook
2000521 - Kube-apiserver CO degraded due to failed conditional check (ConfigObservationDegraded)
2000573 - Incorrect StorageCluster CR created and ODF cluster getting installed with 2 Zone OCP cluster
2000628 - ibm-flashsystem-storage-storagesystem got created without any warning even when the attempt was cancelled
2000651 - ImageStreamTag alias results in wrong tag and invalid link in Web Console
2000754 - IPerf2 tests should be lower
2000846 - Structure logs in the entire codebase of Local Storage Operator
2000872 - [tracker] container is not able to list on some directories within the nfs after upgrade to 4.7.24
2000877 - OCP ignores STOPSIGNAL in Dockerfile and sends SIGTERM
2000938 - CVO does not respect changes to a Deployment strategy
2000963 - 'Inline-volume (default fs)] volumes should store data' tests are failing on OKD with updated selinux-policy
2001008 - [MachineSets] CloneMode defaults to linkedClone, but I don't have snapshot and should be fullClone
2001240 - Remove response headers for downloads of binaries from OpenShift WebConsole
2001295 - Remove openshift:kubevirt-machine-controllers decleration from machine-api
2001317 - OCP Platform Quota Check - Inaccurate MissingQuota error
2001337 - Details Card in ODF Dashboard mentions OCS
2001339 - fix text content hotplug
2001413 - [e2e][automation] add/delete nic and disk to template
2001441 - Test: oc adm must-gather runs successfully for audit logs - fail due to startup log
2001442 - Empty termination.log file for the kube-apiserver has too permissive mode
2001479 - IBM Cloud DNS unable to create/update records
2001566 - Enable alerts for prometheus operator in UWM
2001575 - Clicking on the perspective switcher shows a white page with loader
2001577 - Quick search placeholder is not displayed properly when the search string is removed
2001578 - [e2e][automation] add tests for vm dashboard tab
2001605 - PVs remain in Released state for a long time after the claim is deleted
2001617 - BucketClass Creation is restricted on 1st page but enabled using side navigation options
2001620 - Cluster becomes degraded if it can't talk to Manila
2001760 - While creating 'Backing Store', 'Bucket Class', 'Namespace Store' user is navigated to 'Installed Operators' page after clicking on ODF
2001761 - Unable to apply cluster operator storage for SNO on GCP platform.
2001765 - Some error message in the log of diskmaker-manager caused confusion
2001784 - show loading page before final results instead of showing a transient message No log files exist
2001804 - Reload feature on Environment section in Build Config form does not work properly
2001810 - cluster admin unable to view BuildConfigs in all namespaces
2001817 - Failed to load RoleBindings list that will lead to ‘Role name’ is not able to be selected on Create RoleBinding page as well
2001823 - OCM controller must update operator status
2001825 - [SNO]ingress/authentication clusteroperator degraded when enable ccm from start
2001835 - Could not select image tag version when create app from dev console
2001855 - Add capacity is disabled for ocs-storagecluster
2001856 - Repeating event: MissingVersion no image found for operand pod
2001959 - Side nav list borders don't extend to edges of container
2002007 - Layout issue on "Something went wrong" page
2002010 - ovn-kube may never attempt to retry a pod creation
2002012 - Cannot change volume mode when cloning a VM from a template
2002027 - Two instances of Dotnet helm chart show as one in topology
2002075 - opm render does not automatically pulling in the image(s) used in the deployments
2002121 - [OVN] upgrades failed for IPI OSP16 OVN IPSec cluster
2002125 - Network policy details page heading should be updated to Network Policy details
2002133 - [e2e][automation] add support/virtualization and improve deleteResource
2002134 - [e2e][automation] add test to verify vm details tab
2002215 - Multipath day1 not working on s390x
2002238 - Image stream tag is not persisted when switching from yaml to form editor
2002262 - [vSphere] Incorrect user agent in vCenter sessions list
2002266 - SinkBinding create form doesn't allow to use subject name, instead of label selector
2002276 - OLM fails to upgrade operators immediately
2002300 - Altering the Schedule Profile configurations doesn't affect the placement of the pods
2002354 - Missing DU configuration "Done" status reporting during ZTP flow
2002362 - Dynamic Plugin - ConsoleRemotePlugin for webpack doesn't use commonjs
2002368 - samples should not go degraded when image allowedRegistries blocks imagestream creation
2002372 - Pod creation failed due to mismatched pod IP address in CNI and OVN
2002397 - Resources search is inconsistent
2002434 - CRI-O leaks some children PIDs
2002443 - Getting undefined error on create local volume set page
2002461 - DNS operator performs spurious updates in response to API's defaulting of service's internalTrafficPolicy
2002504 - When the openshift-cluster-storage-operator is degraded because of "VSphereProblemDetectorController_SyncError", the insights operator is not sending the logs from all pods.
2002559 - User preference for topology list view does not follow when a new namespace is created
2002567 - Upstream SR-IOV worker doc has broken links
2002588 - Change text to be sentence case to align with PF
2002657 - ovn-kube egress IP monitoring is using a random port over the node network
2002713 - CNO: OVN logs should have millisecond resolution
2002748 - [ICNI2] 'ErrorAddingLogicalPort' failed to handle external GW check: timeout waiting for namespace event
2002759 - Custom profile should not allow not including at least one required HTTP2 ciphersuite
2002763 - Two storage systems getting created with external mode RHCS
2002808 - KCM does not use web identity credentials
2002834 - Cluster-version operator does not remove unrecognized volume mounts
2002896 - Incorrect result return when user filter data by name on search page
2002950 - Why spec.containers.command is not created with "oc create deploymentconfig --image= -- "
2003096 - [e2e][automation] check bootsource URL is displaying on review step
2003113 - OpenShift Baremetal IPI installer uses first three defined nodes under hosts in install-config for master nodes instead of filtering the hosts with the master role
2003120 - CI: Uncaught error with ResizeObserver on operand details page
2003145 - Duplicate operand tab titles causes "two children with the same key" warning
2003164 - OLM, fatal error: concurrent map writes
2003178 - [FLAKE][knative] The UI doesn't show updated traffic distribution after accepting the form
2003193 - Kubelet/crio leaks netns and veth ports in the host
2003195 - OVN CNI should ensure host veths are removed
2003204 - Jenkins all new container images (openshift4/ose-jenkins) not supporting '-e JENKINS_PASSWORD=password' ENV which was working for old container images
2003206 - Namespace stuck terminating: Failed to delete all resource types, 1 remaining: unexpected items still remain in namespace
2003239 - "[sig-builds][Feature:Builds][Slow] can use private repositories as build input" tests fail outside of CI
2003244 - Revert libovsdb client code
2003251 - Patternfly components with list element has list item bullet when they should not.
2003252 - "[sig-builds][Feature:Builds][Slow] starting a build using CLI start-build test context override environment BUILD_LOGLEVEL in buildconfig" tests do not work as expected outside of CI
2003269 - Rejected pods should be filtered from admission regression
2003357 - QE- Removing the epic tags for gherkin tags related to 4.9 Release
2003426 - [e2e][automation] add test for vm details bootorder
2003496 - [e2e][automation] add test for vm resources requirment settings
2003641 - All metal ipi jobs are failing in 4.10
2003651 - ODF4.9+LSO4.8 installation via UI, StorageCluster move to error state
2003655 - [IPI ON-PREM] Keepalived chk_default_ingress track script failed even though default router pod runs on node
2003683 - Samples operator is panicking in CI
2003711 - [UI] Empty file ceph-external-cluster-details-exporter.py downloaded from external cluster "Connection Details" page
2003715 - Error on creating local volume set after selection of the volume mode
2003743 - Remove workaround keeping /boot RW for kdump support
2003775 - etcd pod on CrashLoopBackOff after master replacement procedure
2003788 - CSR reconciler report error constantly when BYOH CSR approved by other Approver
2003792 - Monitoring metrics query graph flyover panel is useless
2003808 - Add Sprint 207 translations
2003845 - Project admin cannot access image vulnerabilities view
2003859 - sdn emits events with garbage messages
2003896 - (release-4.10) ApiRequestCounts conditional gatherer
2004009 - 4.10: Fix multi-az zone scheduling e2e for 5 control plane replicas
2004051 - CMO can report as being Degraded while node-exporter is deployed on all nodes
2004059 - [e2e][automation] fix current tests for downstream
2004060 - Trying to use basic spring boot sample causes crash on Firefox
2004101 - [UI] When creating storageSystem deployment type dropdown under advanced setting doesn't close after selection
2004127 - [flake] openshift-controller-manager event reason/SuccessfulDelete occurs too frequently
2004203 - build config's created prior to 4.8 with image change triggers can result in trigger storm in OCM/openshift-apiserver
2004313 - [RHOCP 4.9.0-rc.0] Failing to deploy Azure cluster from the macOS installer - ignition_bootstrap.ign: no such file or directory
2004449 - Boot option recovery menu prevents image boot
2004451 - The backup filename displayed in the RecentBackup message is incorrect
2004459 - QE - Modified the AddFlow gherkin scripts and automation scripts
2004508 - TuneD issues with the recent ConfigParser changes.
2004510 - openshift-gitops operator hooks gets unauthorized (401) errors during jobs executions
2004542 - [osp][octavia lb] cannot create LoadBalancer type svcs
2004578 - Monitoring and node labels missing for an external storage platform
2004585 - prometheus-k8s-0 cpu usage keeps increasing for the first 3 days
2004596 - [4.10] Bootimage bump tracker
2004597 - Duplicate ramdisk log containers running
2004600 - Duplicate ramdisk log containers running
2004609 - output of "crictl inspectp" is not complete
2004625 - BMC credentials could be logged if they change
2004632 - When LE takes a large amount of time, multiple whereabouts are seen
2004721 - ptp/worker custom threshold doesn't change ptp events threshold
2004736 - [knative] Create button on new Broker form is inactive despite form being filled
2004796 - [e2e][automation] add test for vm scheduling policy
2004814 - (release-4.10) OCM controller - change type of the etc-pki-entitlement secret to opaque
2004870 - [External Mode] Insufficient spacing along y-axis in RGW Latency Performance Card
2004901 - [e2e][automation] improve kubevirt devconsole tests
2004962 - Console frontend job consuming too much CPU in CI
2005014 - state of ODF StorageSystem is misreported during installation or uninstallation
2005052 - Adding a MachineSet selector matchLabel causes orphaned Machines
2005179 - pods status filter is not taking effect
2005182 - sync list of deprecated apis about to be removed
2005282 - Storage cluster name is given as title in StorageSystem details page
2005355 - setuptools 58 makes Kuryr CI fail
2005407 - ClusterNotUpgradeable Alert should be set to Severity Info
2005415 - PTP operator with sidecar api configured throws bind: address already in use
2005507 - SNO spoke cluster failing to reach coreos.live.rootfs_url is missing url in console
2005554 - The switch status of the button "Show default project" is not revealed correctly in code
2005581 - 4.8.12 to 4.9 upgrade hung due to cluster-version-operator pod CrashLoopBackOff: error creating clients: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
2005761 - QE - Implementing crw-basic feature file
2005783 - Fix accessibility issues in the "Internal" and "Internal - Attached Mode" Installation Flow
2005811 - vSphere Problem Detector operator - ServerFaultCode: InvalidProperty
2005854 - SSH NodePort service is created for each VM
2005901 - KS, KCM and KA going Degraded during master nodes upgrade
2005902 - Current UI flow for MCG only deployment is confusing and doesn't reciprocate any message to the end-user
2005926 - PTP operator NodeOutOfPTPSync rule is using max offset from the master instead of openshift_ptp_clock_state metrics
2005971 - Change telemeter to report the Application Services product usage metrics
2005997 - SELinux domain container_logreader_t does not have a policy to follow sym links for log files
2006025 - Description to use an existing StorageClass while creating StorageSystem needs to be re-phrased
2006060 - ocs-storagecluster-storagesystem details are missing on UI for MCG Only and MCG only in LSO mode deployment types
2006101 - Power off fails for drivers that don't support Soft power off
2006243 - Metal IPI upgrade jobs are running out of disk space
2006291 - bootstrapProvisioningIP set incorrectly when provisioningNetworkCIDR doesn't use the 0th address
2006308 - Backing Store YAML tab on click displays a blank screen on UI
2006325 - Multicast is broken across nodes
2006329 - Console only allows Web Terminal Operator to be installed in OpenShift Operators
2006364 - IBM Cloud: Set resourceGroupId for resourceGroups, not simply resource
2006561 - [sig-instrumentation] Prometheus when installed on the cluster shouldn't have failing rules evaluation [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2006690 - OS boot failure "x64 Exception Type 06 - Invalid Opcode Exception"
2006714 - add retry for etcd errors in kube-apiserver
2006767 - KubePodCrashLooping may not fire
2006803 - Set CoreDNS cache entries for forwarded zones
2006861 - Add Sprint 207 part 2 translations
2006945 - race condition can cause crashlooping bootstrap kube-apiserver in cluster-bootstrap
2006947 - e2e-aws-proxy for 4.10 is permafailing with samples operator errors
2006975 - clusteroperator/etcd status condition should not change reasons frequently due to EtcdEndpointsDegraded
2007085 - Intermittent failure mounting /run/media/iso when booting live ISO from USB stick
2007136 - Creation of BackingStore, BucketClass, NamespaceStore fails
2007271 - CI Integration for Knative test cases
2007289 - kubevirt tests are failing in CI
2007322 - Devfile/Dockerfile import does not work for unsupported git host
2007328 - Updated patternfly to v4.125.3 and pf.quickstarts to v1.2.3.
2007379 - Events are not generated for master offset for ordinary clock
2007443 - [ICNI 2.0] Loadbalancer pods do not establish BFD sessions with all workers that host pods for the routed namespace
2007455 - cluster-etcd-operator: render command should fail if machineCidr contains reserved address
2007495 - Large label value for the metric kubelet_started_pods_errors_total with label message when there is a error
2007522 - No new local-storage-operator-metadata-container is built for 4.10
2007551 - No new ose-aws-efs-csi-driver-operator-bundle-container is built for 4.10
2007580 - Azure cilium installs are failing e2e tests
2007581 - Too many haproxy processes in default-router pod causing high load average after upgrade from v4.8.3 to v4.8.10
2007677 - Regression: core container io performance metrics are missing for pod, qos, and system slices on nodes
2007692 - 4.9 "old-rhcos" jobs are permafailing with storage test failures
2007710 - ci/prow/e2e-agnostic-cmd job is failing on prow
2007757 - must-gather extracts imagestreams in the "openshift" namespace, but not Templates
2007802 - AWS machine actuator get stuck if machine is completely missing
2008096 - TestAWSFinalizerDeleteS3Bucket sometimes fails to teardown operator
2008119 - The serviceAccountIssuer field on Authentication CR is reset to “” during the installation process
2008151 - Topology breaks on clicking in empty state
2008185 - Console operator go.mod should use go 1.16.version
2008201 - openstack-az job is failing on haproxy idle test
2008207 - vsphere CSI driver doesn't set resource limits
2008223 - gather_audit_logs: fix oc command line to get the current audit profile
2008235 - The Save button in the Edit DC form remains disabled
2008256 - Update Internationalization README with scope info
2008321 - Add correct documentation link for MON_DISK_LOW
2008462 - Disable PodSecurity feature gate for 4.10
2008490 - Backing store details page does not contain all the kebab actions.
2008521 - gcp-hostname service should correct invalid search entries in resolv.conf
2008532 - CreateContainerConfigError:: failed to prepare subPath for volumeMount
2008539 - Registry doesn't fall back to secondary ImageContentSourcePolicy Mirror
2008540 - HighlyAvailableWorkloadIncorrectlySpread always fires on upgrade on cluster with two workers
2008599 - Azure Stack UPI does not have Internal Load Balancer
2008612 - Plugin asset proxy does not pass through browser cache headers
2008712 - VPA webhook timeout prevents all pods from starting
2008733 - kube-scheduler: exposed /debug/pprof port
2008911 - Prometheus repeatedly scaling prometheus-operator replica set
2008926 - [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources [Serial] [Suite:openshift/conformance/serial]
2008987 - OpenShift SDN Hosted Egress IP's are not being scheduled to nodes after upgrade to 4.8.12
2009055 - Instances of OCS to be replaced with ODF on UI
2009078 - NetworkPodsCrashLooping alerts in upgrade CI jobs
2009083 - opm blocks pruning of existing bundles during add
2009111 - [IPI-on-GCP] 'Install a cluster with nested virtualization enabled' failed due to unable to launch compute instances
2009131 - [e2e][automation] add more test about vmi
2009148 - [e2e][automation] test vm nic presets and options
2009233 - ACM policy object generated by PolicyGen conflicting with OLM Operator
2009253 - [BM] [IPI] [DualStack] apiVIP and ingressVIP should be of the same primary IP family
2009298 - Service created for VM SSH access is not owned by the VM and thus is not deleted if the VM is deleted
2009384 - UI changes to support BindableKinds CRD changes
2009404 - ovnkube-node pod enters CrashLoopBackOff after OVN_IMAGE is swapped
2009424 - Deployment upgrade is failing availability check
2009454 - Change web terminal subscription permissions from get to list
2009465 - container-selinux should come from rhel8-appstream
2009514 - Bump OVS to 2.16-15
2009555 - Supermicro X11 system not booting from vMedia with AI
2009623 - Console: Observe > Metrics page: Table pagination menu shows bullet points
2009664 - Git Import: Edit of knative service doesn't work as expected for git import flow
2009699 - Failure to validate flavor RAM
2009754 - Footer is not sticky anymore in import forms
2009785 - CRI-O's version file should be pinned by MCO
2009791 - Installer: ibmcloud ignores install-config values
2009823 - [sig-arch] events should not repeat pathologically - reason/VSphereOlderVersionDetected Marking cluster un-upgradeable because one or more VMs are on hardware version vmx-13
2009840 - cannot build extensions on aarch64 because of unavailability of rhel-8-advanced-virt repo
2009859 - Large number of sessions created by vmware-vsphere-csi-driver-operator during e2e tests
2009873 - Stale Logical Router Policies and Annotations for a given node
2009879 - There should be test-suite coverage to ensure admin-acks work as expected
2009888 - SRO package name collision between official and community version
2010073 - uninstalling and then reinstalling sriov-network-operator is not working
2010174 - 2 PVs get created unexpectedly with different paths that actually refer to the same device on the node.
2010181 - Environment variables not getting reset on reload on deployment edit form
2010310 - [sig-instrumentation][Late] OpenShift alerting rules should have description and summary annotations [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2010341 - OpenShift Alerting Rules Style-Guide Compliance
2010342 - Local console builds can have out of memory errors
2010345 - OpenShift Alerting Rules Style-Guide Compliance
2010348 - Reverts PIE build mode for K8S components
2010352 - OpenShift Alerting Rules Style-Guide Compliance
2010354 - OpenShift Alerting Rules Style-Guide Compliance
2010359 - OpenShift Alerting Rules Style-Guide Compliance
2010368 - OpenShift Alerting Rules Style-Guide Compliance
2010376 - OpenShift Alerting Rules Style-Guide Compliance
2010662 - Cluster is unhealthy after image-registry-operator tests
2010663 - OpenShift Alerting Rules Style-Guide Compliance (ovn-kubernetes subcomponent)
2010665 - Bootkube tries to use oc after cluster bootstrap is done and there is no API
2010698 - [BM] [IPI] [Dual Stack] Installer must ensure ipv6 short forms too if clusterprovisioning IP is specified as ipv6 address
2010719 - etcdHighNumberOfFailedGRPCRequests runbook is missing
2010864 - Failure building EFS operator
2010910 - ptp worker events unable to identify interface for multiple interfaces
2010911 - RenderOperatingSystem() returns wrong OS version on OCP 4.7.24
2010921 - Azure Stack Hub does not handle additionalTrustBundle
2010931 - SRO CSV uses non default category "Drivers and plugins"
2010946 - concurrent CRD from ovirt-csi-driver-operator gets reconciled by CVO after deployment, changing CR as well.
2011038 - optional operator conditions are confusing
2011063 - CVE-2021-39226 grafana: Snapshot authentication bypass
2011171 - diskmaker-manager constantly redeployed by LSO when creating LV's
2011293 - Build pod are not pulling images if we are not explicitly giving the registry name with the image
2011368 - Tooltip in pipeline visualization shows misleading data
2011386 - [sig-arch] Check if alerts are firing during or after upgrade success --- alert KubePodNotReady fired for 60 seconds with labels
2011411 - Managed Service's Cluster overview page contains link to missing Storage dashboards
2011443 - Cypress tests assuming Admin Perspective could fail on shared/reference cluster
2011513 - Kubelet rejects pods that use resources that should be freed by completed pods
2011668 - Machine stuck in deleting phase in VMware "reconciler failed to Delete machine"
2011693 - (release-4.10) "insightsclient_request_recvreport_total" metric is always incremented
2011698 - After upgrading cluster to 4.8 the kube-state-metrics service doesn't export namespace labels anymore
2011733 - Repository README points to broken documentation link
2011753 - Ironic resumes clean before raid configuration job is actually completed
2011809 - The nodes page in the openshift console doesn't work. You just get a blank page
2011822 - Obfuscation doesn't work on clusters with OVN
2011882 - SRO helm charts not synced with templates
2011893 - Validation: BMC driver ipmi is not supported for secure UEFI boot
2011896 - [4.10] ClusterVersion Upgradeable=False MultipleReasons should include all messages
2011903 - vsphere-problem-detector: session leak
2011927 - OLM should allow users to specify a proxy for GRPC connections
2011956 - [tracker] Kubelet rejects pods that use resources that should be freed by completed pods
2011960 - [tracker] Storage operator is not available after reboot cluster instances
2011971 - ICNI2 pods are stuck in ContainerCreating state
2011972 - Ingress operator not creating wildcard route for hypershift clusters
2011977 - SRO bundle references non-existent image
2012069 - Refactoring Status controller
2012177 - [OCP 4.9 + OCS 4.8.3] Overview tab is missing under Storage after successful deployment on UI
2012228 - ibmcloud: credentialsrequests invalid for machine-api-operator: resource-group
2012233 - [IBMCLOUD] IPI: "Exceeded limit of remote rules per security group (the limit is 5 remote rules per security group)"
2012235 - [IBMCLOUD] IPI: IBM cloud provider requires ResourceGroupName in cloudproviderconfig
2012317 - Dynamic Plugins: ListPageCreateDropdown items cut off
2012407 - [e2e][automation] improve vm tab console tests
2012426 - ThanosSidecarBucketOperationsFailed/ThanosSidecarUnhealthy alerts don't have namespace label
2012562 - migration condition is not detected in list view
2012770 - when using expression metric openshift_apps_deploymentconfigs_last_failed_rollout_time namespace label is re-written
2012780 - The port 50936 used by haproxy is occupied by kube-apiserver
2012838 - Setting the default maximum container root partition size for Overlay with CRI-O stops working
2012902 - Neutron Ports assigned to Completed Pods are not reused
2012915 - kube_persistentvolumeclaim_labels and kube_persistentvolume_labels are missing in OCP 4.8 monitoring stack
2012971 - Disable operands deletes
2013034 - Cannot install to openshift-nmstate namespace
2013127 - OperatorHub links could not be opened in a new tabs (sharing and open a deep link works fine)
2013199 - post reboot of node SRIOV policy taking huge time
2013203 - UI breaks when trying to create block pool before storage cluster/system creation
2013222 - Full breakage for nightly payload promotion
2013273 - Nil pointer exception when phc2sys options are missing
2013321 - TuneD: high CPU utilization of the TuneD daemon.
2013416 - Multiple assets emit different content to the same filename
2013431 - Application selector dropdown has incorrect font-size and positioning
2013528 - mapi_current_pending_csr is always set to 1 on OpenShift Container Platform 4.8
2013545 - Service binding created outside topology is not visible
2013599 - Scorecard support storage is not included in ocp4.9
2013632 - Correction/Changes in Quick Start Guides for ODF 4.9 (Install ODF guide)
2013646 - fsync controller will show false positive if gaps in metrics are observed.
2013710 - ZTP Operator subscriptions for 4.9 release branch should point to 4.9 by default
2013751 - Service details page is showing wrong in-cluster hostname
2013787 - There are two titles 'Network Attachment Definition Details' on NAD details page
2013871 - Resource table headings are not aligned with their column data
2013895 - Cannot enable accelerated network via MachineSets on Azure
2013920 - "--collector.filesystem.ignored-mount-points is DEPRECATED and will be removed in 2.0.0, use --collector.filesystem.mount-points-exclude"
2013930 - Create Buttons enabled for Bucket Class, Backingstore and Namespace Store in the absence of Storagesystem(or MCG)
2013969 - oVIrt CSI driver fails on creating PVCs on hosted engine storage domain
2013990 - Observe dashboard crashes on reload when perspective has changed (in another tab)
2013996 - Project detail page: Action "Delete Project" does nothing for the default project
2014071 - Payload imagestream new tags not properly updated during cluster upgrade
2014153 - SRIOV exclusive pooling
2014202 - [OCP-4.8.10] OVN-Kubernetes: service IP is not responding when egressIP set to the namespace
2014238 - AWS console test is failing on importing duplicate YAML definitions
2014245 - Several aria-labels, external links, and labels aren't internationalized
2014248 - Several files aren't internationalized
2014352 - Could not filter out machine by using node name on machines page
2014464 - Unexpected spacing/padding below navigation groups in developer perspective
2014471 - Helm Release notes tab is not automatically open after installing a chart for other languages
2014486 - Integration Tests: OLM single namespace operator tests failing
2014488 - Custom operator cannot change orders of condition tables
2014497 - Regex slows down different forms and creates too much recursion errors in the log
2014538 - Kuryr controller crash looping on self._get_vip_port(loadbalancer).id 'NoneType' object has no attribute 'id'
2014614 - Metrics scraping requests should be assigned to exempt priority level
2014710 - TestIngressStatus test is broken on Azure
2014954 - The prometheus-k8s-{0,1} pods are CrashLoopBackoff repeatedly
2014995 - oc adm must-gather cannot gather audit logs with 'None' audit profile
2015115 - [RFE] PCI passthrough
2015133 - [IBMCLOUD] ServiceID API key credentials seems to be insufficient for ccoctl '--resource-group-name' parameter
2015154 - Support ports defined networks and primarySubnet
2015274 - Yarn dev fails after updates to dynamic plugin JSON schema logic
2015337 - 4.9.0 GA MetalLB operator image references need to be adjusted to match production
2015386 - Possibility to add labels to the built-in OCP alerts
2015395 - Table head on Affinity Rules modal is not fully expanded
2015416 - CI implementation for Topology plugin
2015418 - Project Filesystem query returns No datapoints found
2015420 - No vm resource in project view's inventory
2015422 - No conflict checking on snapshot name
2015472 - Form and YAML view switch button should have distinguishable status
2015481 - [4.10] sriov-network-operator daemon pods are failing to start
2015493 - Cloud Controller Manager Operator does not respect 'additionalTrustBundle' setting
2015496 - Storage - PersistentVolumes : Claim colum value 'No Claim' in English
2015498 - [UI] Add capacity when not applicable (for MCG only deployment and External mode cluster) fails to pass any info. to user and tries to just load a blank screen on 'Add Capacity' button click
2015506 - Home - Search - Resources - APIRequestCount : hard to select an item from ellipsis menu
2015515 - Kubelet checks all providers even if one is configured: NoCredentialProviders: no valid providers in chain.
2015535 - Administration - ResourceQuotas - ResourceQuota details: Inside Pie chart 'x% used' is in English
2015549 - Observe - Metrics: Column heading and pagination text is in English
2015557 - Workloads - DeploymentConfigs : Error message is in English
2015568 - Compute - Nodes : CPU column's values are in English
2015635 - Storage operator fails causing installation to fail on ASH
2015660 - "Finishing boot source customization" screen should not use term "patched"
2015793 - [hypershift] The collect-profiles job's pods should run on the control-plane node
2015806 - Metrics view in Deployment reports "Forbidden" when not cluster-admin
2015819 - Conmon sandbox processes run on non-reserved CPUs with workload partitioning
2015837 - OS_CLOUD overwrites install-config's platform.openstack.cloud
2015950 - update from 4.7.22 to 4.8.11 is failing due to large amount of secrets to watch
2015952 - RH CodeReady Workspaces Operator in e2e testing will soon fail
2016004 - [RFE] RHCOS: help determining whether a user-provided image was already booted (Ignition provisioning already performed)
2016008 - [4.10] Bootimage bump tracker
2016052 - No e2e CI presubmit configured for release component azure-file-csi-driver
2016053 - No e2e CI presubmit configured for release component azure-file-csi-driver-operator
2016054 - No e2e CI presubmit configured for release component cluster-autoscaler
2016055 - No e2e CI presubmit configured for release component console
2016058 - openshift-sync does not synchronise in "ose-jenkins:v4.8"
2016064 - No e2e CI presubmit configured for release component ibm-cloud-controller-manager
2016065 - No e2e CI presubmit configured for release component ibmcloud-machine-controllers
2016175 - Pods get stuck in ContainerCreating state when attaching volumes fails on SNO clusters.
2016179 - Add Sprint 208 translations
2016228 - Collect Profiles pprof secret is hardcoded to openshift-operator-lifecycle-manager
2016235 - should update to 7.5.11 for grafana resources version label
2016296 - Openshift virtualization : Create Windows Server 2019 VM using template : Fails
2016334 - shiftstack: SRIOV nic reported as not supported
2016352 - Some pods start before CA resources are present
2016367 - Empty task box is getting created for a pipeline without finally task
2016435 - Duplicate AlertmanagerClusterFailedToSendAlerts alerts
2016438 - Feature flag gating is missing in few extensions contributed via knative plugin
2016442 - OCPonRHV: pvc should be in Bound state and without error when choosing default sc
2016446 - [OVN-Kubernetes] Egress Networkpolicy is failing Intermittently for statefulsets
2016453 - Complete i18n for GaugeChart defaults
2016479 - iface-id-ver is not getting updated for existing lsp
2016925 - Dashboards with All filter, change to a specific value and change back to All, data will disappear
2016951 - dynamic actions list is not disabling "open console" for stopped vms
2016955 - m5.large instance type for bootstrap node is hardcoded causing deployments to fail if instance type is not available
2016988 - NTO does not set io_timeout and max_retries for AWS Nitro instances
2017016 - [REF] Virtualization menu
2017036 - [sig-network-edge][Feature:Idling] Unidling should handle many TCP connections fails in periodic-ci-openshift-release-master-ci-4.9-e2e-openstack-ovn
2017050 - Dynamic Plugins: Shared modules loaded multiple times, breaking use of PatternFly
2017130 - t is not a function error navigating to details page
2017141 - Project dropdown has a dynamic inline width added which can cause min-width issue
2017244 - ovirt csi operator static files creation is in the wrong order
2017276 - [4.10] Volume mounts not created with the correct security context
2017327 - Running opm index prune fails with error: removing operator package cic-operator FOREIGN KEY constraint failed.
2017427 - NTO does not restart TuneD daemon when profile application is taking too long
2017535 - Broken Argo CD link image on GitOps Details Page
2017547 - Siteconfig application sync fails with The AgentClusterInstall is invalid: spec.provisionRequirements.controlPlaneAgents: Required value when updating images references
2017564 - On-prem prepender dispatcher script overwrites DNS search settings
2017565 - CCMO does not handle additionalTrustBundle on Azure Stack
2017566 - MetalLB: Web Console -Create Address pool form shows address pool name twice
2017606 - [e2e][automation] add test to verify send key for VNC console
2017650 - [OVN]EgressFirewall cannot be applied correctly if cluster has windows nodes
2017656 - VM IP address is "undefined" under VM details -> ssh field
2017663 - SSH password authentication is disabled when public key is not supplied
2017680 - [gcp] Couldn’t enable support for instances with GPUs on GCP
2017732 - [KMS] Prevent creation of encryption enabled storageclass without KMS connection set
2017752 - (release-4.10) obfuscate identity provider attributes in collected authentication.operator.openshift.io resource
2017756 - overlaySize setting on containerruntimeconfig is ignored due to cri-o defaults
2017761 - [e2e][automation] dummy bug for 4.9 test dependency
2017872 - Add Sprint 209 translations
2017874 - The installer is incorrectly checking the quota for X instances instead of G and VT instances
2017879 - Add Chinese translation for "alternate"
2017882 - multus: add handling of pod UIDs passed from runtime
2017909 - [ICNI 2.0] ovnkube-masters stop processing add/del events for pods
2018042 - HorizontalPodAutoscaler CPU averageValue did not show up in HPA metrics GUI
2018093 - Managed cluster should ensure control plane pods do not run in best-effort QoS
2018094 - the tooltip length is limited
2018152 - CNI pod is not restarted when It cannot start servers due to ports being used
2018208 - e2e-metal-ipi-ovn-ipv6 are failing 75% of the time
2018234 - user settings are saved in local storage instead of on cluster
2018264 - Delete Export button doesn't work in topology sidebar (general issue with unknown CSV?)
2018272 - Deployment managed by link and topology sidebar links to invalid resource page (at least for Exports)
2018275 - Topology graph doesn't show context menu for Export CSV
2018279 - Edit and Delete confirmation modals for managed resource should close when the managed resource is clicked
2018380 - Migrate docs links to access.redhat.com
2018413 - Error: context deadline exceeded, OCP 4.8.9
2018428 - PVC is deleted along with VM even with "Delete Disks" unchecked
2018445 - [e2e][automation] enhance tests for downstream
2018446 - [e2e][automation] move tests to different level
2018449 - [e2e][automation] add test about create/delete network attachment definition
2018490 - [4.10] Image provisioning fails with file name too long
2018495 - Fix typo in internationalization README
2018542 - Kernel upgrade does not reconcile DaemonSet
2018880 - Get 'No datapoints found.' when query metrics about alert rule KubeCPUQuotaOvercommit and KubeMemoryQuotaOvercommit
2018884 - QE - Adapt crw-basic feature file to OCP 4.9/4.10 changes
2018935 - go.sum not updated, that ART extracts version string from, WAS: Missing backport from 4.9 for Kube bump PR#950
2018965 - e2e-metal-ipi-upgrade is permafailing in 4.10
2018985 - The rootdisk size of Windows VM is 15Gi in customize wizard
2019001 - AWS: Operator degraded (CredentialsFailing): 1 of 6 credentials requests are failing to sync.
2019096 - Update SRO leader election timeout to support SNO
2019129 - SRO in operator hub points to wrong repo for README
2019181 - Performance profile does not apply
2019198 - ptp offset metrics are not named according to the log output
2019219 - [IBMCLOUD]: cloud-provider-ibm missing IAM permissions in CCCMO CredentialRequest
2019284 - Stop action should not in the action list while VMI is not running
2019346 - zombie processes accumulation and Argument list too long
2019360 - [RFE] Virtualization Overview page
2019452 - Logger object in LSO appends to existing logger recursively
2019591 - Operator install modal body that scrolls has incorrect padding causing shadow position to be incorrect
2019634 - Pause and migration is enabled in action list for a user who has view only permission
2019636 - Actions in VM tabs should be disabled when user has view only permission
2019639 - "Take snapshot" should be disabled while VM image is still being imported
2019645 - Create button is not removed on "Virtual Machines" page for view only user
2019646 - Permission error should pop-up immediately while clicking "Create VM" button on template page for view only user
2019647 - "Remove favorite" and "Create new Template" should be disabled in template action list for view only user
2019717 - can't delete VM with un-owned pvc attached
2019722 - The shared-resource-csi-driver-node pod runs as “BestEffort” qosClass
2019739 - The shared-resource-csi-driver-node uses imagePullPolicy as "Always"
2019744 - [RFE] Suggest users to download newest RHEL 8 version
2019809 - [OVN][Upgrade] After upgrade to 4.7.34 ovnkube-master pods are in CrashLoopBackOff/ContainerCreating and other multiple issues at OVS/OVN level
2019827 - Display issue with top-level menu items running demo plugin
2019832 - 4.10 Nightlies blocked: Failed to upgrade authentication, operator was degraded
2019886 - Kuryr unable to finish ports recovery upon controller restart
2019948 - [RFE] Restructuring Virtualization links
2019972 - The Nodes section doesn't display the csr of the nodes that are trying to join the cluster
2019977 - Installer doesn't validate region causing binary to hang with a 60 minute timeout
2019986 - Dynamic demo plugin fails to build
2019992 - instance:node_memory_utilisation:ratio metric is incorrect
2020001 - Update dockerfile for demo dynamic plugin to reflect dir change
2020003 - MCD does not regard "dangling" symlinks as files, attempts to write through them on next backup, resulting in "not writing through dangling symlink" error and degradation.
2020107 - cluster-version-operator: remove runlevel from CVO namespace
2020153 - Creation of Windows high performance VM fails
2020216 - installer: Azure storage container blob where the bootstrap.ign file is stored shouldn't be public
2020250 - Replacing deprecated ioutil
2020257 - Dynamic plugin with multiple webpack compilation passes may fail to build
2020275 - ClusterOperators link in console returns blank page during upgrades
2020377 - permissions error while using tcpdump option with must-gather
2020489 - coredns_dns metrics don't include the custom zone metrics data due to CoreDNS prometheus plugin is not defined
2020498 - "Show PromQL" button is disabled
2020625 - [AUTH-52] User fails to login from web console with keycloak OpenID IDP after enable group membership sync feature
2020638 - [4.7] CI conformance test failures related to CustomResourcePublishOpenAPI
2020664 - DOWN subports are not cleaned up
2020904 - When trying to create a connection from the Developer view between VMs, it fails
2021016 - 'Prometheus Stats' of dashboard 'Prometheus Overview' miss data on console compared with Grafana
2021017 - 404 page not found error on knative eventing page
2021031 - QE - Fix the topology CI scripts
2021048 - [RFE] Added MAC Spoof check
2021053 - Metallb operator presented as community operator
2021067 - Extensive number of requests from storage version operator in cluster
2021081 - Missing PolicyGenTemplate for configuring Local Storage Operator LocalVolumes
2021135 - [azure-file-csi-driver] "make unit-test" returns non-zero code, but tests pass
2021141 - Cluster should allow a fast rollout of kube-apiserver is failing on single node
2021151 - Sometimes the DU node does not get the performance profile configuration applied and MachineConfigPool stays stuck in Updating
2021152 - imagePullPolicy is "Always" for ptp operator images
2021191 - Project admins should be able to list available network attachment definitions
2021205 - Invalid URL in git import form causes validation to not happen on URL change
2021322 - cluster-api-provider-azure should populate purchase plan information
2021337 - Dynamic Plugins: ResourceLink doesn't render when passed a groupVersionKind
2021364 - Installer requires invalid AWS permission s3:GetBucketReplication
2021400 - Bump documentationBaseURL to 4.10
2021405 - [e2e][automation] VM creation wizard Cloud Init editor
2021433 - "[sig-builds][Feature:Builds][pullsearch] docker build where the registry is not specified" test fail permanently on disconnected
2021466 - [e2e][automation] Windows guest tool mount
2021544 - OCP 4.6.44 - Ingress VIP assigned as secondary IP in ovs-if-br-ex and added to resolv.conf as nameserver
2021551 - Build is not recognizing the USER group from an s2i image
2021607 - Unable to run openshift-install with a vcenter hostname that begins with a numeric character
2021629 - api request counts for current hour are incorrect
2021632 - [UI] Clicking on odf-operator breadcrumb from StorageCluster details page displays empty page
2021693 - Modals assigned modal-lg class are no longer the correct width
2021724 - Observe > Dashboards: Graph lines are not visible when obscured by other lines
2021731 - CCO occasionally down, reporting networksecurity.googleapis.com API as disabled
2021936 - Kubelet version in RPMs should be using Dockerfile label instead of git tags
2022050 - [BM][IPI] Failed during bootstrap - unable to read client-key /var/lib/kubelet/pki/kubelet-client-current.pem
2022053 - dpdk application with vhost-net is not able to start
2022114 - Console logging every proxy request
2022144 - 1 of 3 ovnkube-master pods stuck in clbo after ipi bm deployment - dualstack (Intermittent)
2022251 - wait interval in case of a failed upload due to 403 is unnecessarily long
2022399 - MON_DISK_LOW troubleshooting guide link when clicked, gives 404 error.
2022447 - ServiceAccount in manifests conflicts with OLM
2022502 - Patternfly tables with a checkbox column are not displaying correctly because of conflicting css rules.
2022509 - getOverrideForManifest does not check manifest.GVK.Group
2022536 - WebScale: duplicate ecmp next hop error caused by multiple of the same gateway IPs in ovnkube cache
2022612 - no namespace field for "Kubernetes / Compute Resources / Namespace (Pods)" admin console dashboard
2022627 - Machine object not picking up external FIP added to an openstack vm
2022646 - configure-ovs.sh failure - Error: unknown connection 'WARN:'
2022707 - Observe / monitoring dashboard shows forbidden errors on Dev Sandbox
2022801 - Add Sprint 210 translations
2022811 - Fix kubelet log rotation file handle leak
2022812 - [SCALE] ovn-kube service controller executes unnecessary load balancer operations
2022824 - Large number of sessions created by vmware-vsphere-csi-driver-operator during e2e tests
2022880 - Pipeline renders with minor visual artifact with certain task dependencies
2022886 - Incorrect URL in operator description
2023042 - CRI-O filters custom runtime allowed annotation when both custom workload and custom runtime sections specified under the config
2023060 - [e2e][automation] Windows VM with CDROM migration
2023077 - [e2e][automation] Home Overview Virtualization status
2023090 - [e2e][automation] Examples of Import URL for VM templates
2023102 - [e2e][automation] Cloudinit disk of VM from custom template
2023216 - ACL for a deleted egressfirewall still present on node join switch
2023228 - Remove Tech preview badge on Trigger components 1.6 OSP on OCP 4.9
2023238 - [sig-devex][Feature:ImageEcosystem][python][Slow] hot deploy for openshift python image Django example should work with hot deploy
2023342 - SCC admission should take ephemeralContainers into account
2023356 - Devfiles can't be loaded in Safari on macOS (403 - Forbidden)
2023434 - Update Azure Machine Spec API to accept Marketplace Images
2023500 - Latency experienced while waiting for volumes to attach to node
2023522 - can't remove package from index: database is locked
2023560 - "Network Attachment Definitions" has no project field on the top in the list view
2023592 - [e2e][automation] add mac spoof check for nad
2023604 - ACL violation when deleting a provisioning-configuration resource
2023607 - console returns blank page when normal user without any projects visit Installed Operators page
2023638 - Downgrade support level for extended control plane integration to Dev Preview
2023657 - inconsistent behaviours of adding ssh key on rhel node between 4.9 and 4.10
2023675 - Changing CNV Namespace
2023779 - Fix Patch 104847 in 4.9
2023781 - initial hardware devices are not loading in wizard
2023832 - CCO updates lastTransitionTime for non-Status changes
2023839 - Bump recommended FCOS to 34.20211031.3.0
2023865 - Console css overrides prevent dynamic plug-in PatternFly tables from displaying correctly
2023950 - make test-e2e-operator on kubernetes-nmstate results in failure to pull image from "registry:5000" repository
2023985 - [4.10] OVN idle service cannot be accessed after upgrade from 4.8
2024055 - External DNS added extra prefix for the TXT record
2024108 - Occasionally node remains in SchedulingDisabled state even after update has been completed successfully
2024190 - e2e-metal UPI is permafailing with inability to find rhcos.json
2024199 - 400 Bad Request error for some queries for the non admin user
2024220 - Cluster monitoring checkbox flickers when installing Operator in all-namespace mode
2024262 - Sample catalog is not displayed when one API call to the backend fails
2024309 - cluster-etcd-operator: defrag controller needs to provide proper observability
2024316 - modal about support displays wrong annotation
2024328 - [oVirt / RHV] PV disks are lost when machine deleted while node is disconnected
2024399 - Extra space is in the translated text of "Add/Remove alternate service" on Create Route page
2024448 - When ssh_authorized_keys is empty in form view it should not appear in yaml view
2024493 - Observe > Alerting > Alerting rules page throws error trying to destructure undefined
2024515 - test-blocker: Ceph-storage-plugin tests failing
2024535 - hotplug disk missing OwnerReference
2024537 - WINDOWS_IMAGE_LINK does not refer to windows cloud image
2024547 - Detail page is breaking for namespace store, backing store and bucket class.
2024551 - KMS resources not getting created for IBM FlashSystem storage
2024586 - Special Resource Operator(SRO) - Empty image in BuildConfig when using RT kernel
2024613 - pod-identity-webhook starts without tls
2024617 - vSphere CSI tests constantly failing with Rollout of the monitoring stack failed and is degraded
2024665 - Bindable services are not shown on topology
2024731 - linuxptp container: unnecessary checking of interfaces
2024750 - i18n some remaining OLM items
2024804 - gcp-pd-csi-driver does not use trusted-ca-bundle when cluster proxy configured
2024826 - [RHOS/IPI] Masters are not joining a clusters when installing on OpenStack
2024841 - test Keycloak with latest tag
2024859 - Not able to deploy an existing image from private image registry using developer console
2024880 - Egress IP breaks when network policies are applied
2024900 - Operator upgrade kube-apiserver
2024932 - console throws "Unauthorized" error after logging out
2024933 - openshift-sync plugin does not sync existing secrets/configMaps on start up
2025093 - Installer does not honour diskformat specified in storage policy and defaults to zeroedthick
2025230 - ClusterAutoscalerUnschedulablePods should not be a warning
2025266 - CreateResource route has exact prop which need to be removed
2025301 - [e2e][automation] VM actions availability in different VM states
2025304 - overwrite storage section of the DV spec instead of the pvc section
2025431 - [RFE]Provide specific windows source link
2025458 - [IPI-AWS] cluster-baremetal-operator pod in a crashloop state after patching from 4.7.21 to 4.7.36
2025464 - [aws] openshift-install gather bootstrap collects logs for bootstrap and only one master node
2025467 - [OVN-K][ETP=local] Host to service backed by ovn pods doesn't work for ExternalTrafficPolicy=local
2025481 - Update VM Snapshots UI
2025488 - [DOCS] Update the doc for nmstate operator installation
2025592 - ODC 4.9 supports invalid devfiles only
2025765 - It should not try to load from storageProfile after unchecking "Apply optimized StorageProfile settings"
2025767 - VMs orphaned during machineset scaleup
2025770 - [e2e] non-priv seems looking for v2v-vmware configMap in ns "kubevirt-hyperconverged" while using customize wizard
2025788 - [IPI on azure]Pre-check on IPI Azure, should check VM Size’s vCPUsAvailable instead of vCPUs for the sku.
2025821 - Make "Network Attachment Definitions" available to regular user
2025823 - The console nav bar ignores plugin separator in existing sections
2025830 - CentOS capitalization is wrong
2025837 - Warn users that the RHEL URL expires
2025884 - External CCM deploys openstack-cloud-controller-manager from quay.io/openshift/origin-
2025903 - [UI] RoleBindings tab doesn't show correct rolebindings
2026104 - [sig-imageregistry][Feature:ImageAppend] Image append should create images by appending them [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2026178 - OpenShift Alerting Rules Style-Guide Compliance
2026209 - Updating a task fails (tekton hub integration)
2026223 - Internal error occurred: failed calling webhook "ptpconfigvalidationwebhook.openshift.io"
2026321 - [UPI on Azure] Shall we remove allowedValue about VMSize in ARM templates
2026343 - [upgrade from 4.5 to 4.6] .status.connectionState.address of catsrc community-operators is not correct
2026352 - Kube-Scheduler revision-pruner fail during install of new cluster
2026374 - aws-pod-identity-webhook go.mod version out of sync with build environment
2026383 - Error when rendering custom Grafana dashboard through ConfigMap
2026387 - node tuning operator metrics endpoint serving old certificates after certificate rotation
2026396 - Cachito Issues: sriov-network-operator Image build failure
2026488 - openshift-controller-manager - delete event is repeating pathologically
2026489 - ThanosRuleRuleEvaluationLatencyHigh alerts when a large quantity of alerts is defined.
2026560 - Cluster-version operator does not remove unrecognized volume mounts
2026699 - fixed a bug with missing metadata
2026813 - add Mellanox CX-6 Lx DeviceID 101f NIC support in SR-IOV Operator
2026898 - Description/details are missing for Local Storage Operator
2027132 - Use the specific icon for Fedora and CentOS template
2027238 - "Node Exporter / USE Method / Cluster" CPU utilization graph shows incorrect legend
2027272 - KubeMemoryOvercommit alert should be human readable
2027281 - [Azure] External-DNS cannot find the private DNS zone in the resource group
2027288 - Devfile samples can't be loaded after fixing it on Safari (redirect caching issue)
2027299 - The status of checkbox component is not revealed correctly in code
2027311 - K8s watch hooks do not work when fetching core resources
2027342 - Alert ClusterVersionOperatorDown is firing on OpenShift Container Platform after ca certificate rotation
2027363 - The azure-file-csi-driver and azure-file-csi-driver-operator don't use the downstream images
2027387 - [IBMCLOUD] Terraform ibmcloud-provider buffers entirely the qcow2 image causing spikes of 5GB of RAM during installation
2027498 - [IBMCloud] SG Name character length limitation
2027501 - [4.10] Bootimage bump tracker
2027524 - Delete Application doesn't delete Channels or Brokers
2027563 - e2e/add-flow-ci.feature fix accessibility violations
2027585 - CVO crashes when changing spec.upstream to a cincinnati graph which includes invalid conditional edges
2027629 - Gather ValidatingWebhookConfiguration and MutatingWebhookConfiguration resource definitions
2027685 - openshift-cluster-csi-drivers pods crashing on PSI
2027745 - default samplesRegistry prevents the creation of imagestreams when registrySources.allowedRegistries is enforced
2027824 - ovnkube-master CrashLoopBackoff: panic: Expected slice or struct but got string
2027917 - No settings in hostfirmwaresettings and schema objects for masters
2027927 - sandbox creation fails due to obsolete option in /etc/containers/storage.conf
2027982 - nncp stuck at ConfigurationProgressing
2028019 - Max pending serving CSRs allowed in cluster machine approver is not right for UPI clusters
2028024 - After deleting a SpecialResource, the node is still tagged although the driver is removed
2028030 - Panic detected in cluster-image-registry-operator pod
2028042 - Desktop viewer for Windows VM shows "no Service for the RDP (Remote Desktop Protocol) can be found"
2028054 - Cloud controller manager operator can't get leader lease when upgrading from 4.8 up to 4.9
2028106 - [RFE] Use dynamic plugin actions for kubevirt plugin
2028141 - Console tests doesn't pass on Node.js 15 and 16
2028160 - Remove i18nKey in network-policy-peer-selectors.tsx
2028162 - Add Sprint 210 translations
2028170 - Remove leading and trailing whitespace
2028174 - Add Sprint 210 part 2 translations
2028187 - Console build doesn't pass on Node.js 16 because node-sass doesn't support it
2028217 - Cluster-version operator does not default Deployment replicas to one
2028240 - Multiple CatalogSources causing higher CPU use than necessary
2028268 - Password parameters are listed in FirmwareSchema in spite that cannot and shouldn't be set in HostFirmwareSettings
2028325 - disableDrain should be set automatically on SNO
2028484 - AWS EBS CSI driver's livenessprobe does not respect operator's loglevel
2028531 - Missing netFilter to the list of parameters when platform is OpenStack
2028610 - Installer doesn't retry on GCP rate limiting
2028685 - LSO repeatedly reports errors while diskmaker-discovery pod is starting
2028695 - destroy cluster does not prune bootstrap instance profile
2028731 - The containerruntimeconfig controller has wrong assumption regarding the number of containerruntimeconfigs
2028802 - CRI-O panic due to invalid memory address or nil pointer dereference
2028816 - VLAN IDs not released on failures
2028881 - Override not working for the PerformanceProfile template
2028885 - Console should show an error context if it logs an error object
2028949 - Masthead dropdown item hover text color is incorrect
2028963 - Whereabouts should reconcile stranded IP addresses
2029034 - enabling ExternalCloudProvider leads to inoperative cluster
2029178 - Create VM with wizard - page is not displayed
2029181 - Missing CR from PGT
2029273 - wizard is not able to use if project field is "All Projects"
2029369 - Cypress tests github rate limit errors
2029371 - patch pipeline--worker nodes unexpectedly reboot during scale out
2029394 - missing empty text for hardware devices at wizard review
2029414 - Alibaba Disk snapshots with XFS filesystem cannot be used
2029416 - Alibaba Disk CSI driver does not use credentials provided by CCO / ccoctl
2029521 - EFS CSI driver cannot delete volumes under load
2029570 - Azure Stack Hub: CSI Driver does not use user-ca-bundle
2029579 - Clicking on an Application which has a Helm Release in it causes an error
2029644 - New resource FirmwareSchema - reset_required exists for Dell machines and doesn't for HPE
2029645 - Sync upstream 1.15.0 downstream
2029671 - VM action "pause" and "clone" should be disabled while VM disk is still being imported
2029742 - [ovn] Stale lr-policy-list and snat rules left for egressip
2029750 - cvo keep restart due to it fail to get feature gate value during the initial start stage
2029785 - CVO panic when an edge is included in both edges and conditionaledges
2029843 - Downstream ztp-site-generate-rhel8 4.10 container image missing content(/home/ztp)
2030003 - HFS CRD: Attempt to set Integer parameter to not-numeric string value - no error
2030029 - [4.10][goroutine]Namespace stuck terminating: Failed to delete all resource types, 1 remaining: unexpected items still remain in namespace
2030228 - Fix StorageSpec resources field to use correct API
2030229 - Mirroring status card reflect wrong data
2030240 - Hide overview page for non-privileged user
2030305 - Export App job do not completes
2030347 - kube-state-metrics exposes metrics about resource annotations
2030364 - Shared resource CSI driver monitoring is not setup correctly
2030488 - Numerous Azure CI jobs are Failing with Partially Rendered machinesets
2030534 - Node selector/tolerations rules are evaluated too early
2030539 - Prometheus is not highly available
2030556 - Don't display Description or Message fields for alerting rules if those annotations are missing
2030568 - Operator installation fails to parse operatorframework.io/initialization-resource annotation
2030574 - console service uses older "service.alpha.openshift.io" for the service serving certificates.
2030677 - BOND CNI: There is no option to configure MTU on a Bond interface
2030692 - NPE in PipelineJobListener.upsertWorkflowJob
2030801 - CVE-2021-44716 golang: net/http: limit growth of header canonicalization cache
2030806 - CVE-2021-44717 golang: syscall: don't close fd 0 on ForkExec error
2030847 - PerformanceProfile API version should be v2
2030961 - Customizing the OAuth server URL does not apply to upgraded cluster
2031006 - Application name input field is not autofocused when user selects "Create application"
2031012 - Services of type loadbalancer do not work if the traffic reaches the node from an interface different from br-ex
2031040 - Error screen when open topology sidebar for a Serverless / knative service which couldn't be started
2031049 - [vsphere upi] pod machine-config-operator cannot be started due to panic issue
2031057 - Topology sidebar for Knative services shows a small pod ring with "0 undefined" as tooltip
2031060 - Failing CSR Unit test due to expired test certificate
2031085 - ovs-vswitchd running more threads than expected
2031141 - Some pods not able to reach k8s api svc IP 198.223.0.1
2031228 - CVE-2021-43813 grafana: directory traversal vulnerability
2031502 - [RFE] New common templates crash the ui
2031685 - Duplicated forward upstreams should be removed from the dns operator
2031699 - The displayed ipv6 address of a dns upstream should be case sensitive
2031797 - [RFE] Order and text of Boot source type input are wrong
2031826 - CI tests needed to confirm driver-toolkit image contents
2031831 - OCP Console - Global CSS overrides affecting dynamic plugins
2031839 - Starting from Go 1.17 invalid certificates will render a cluster dysfunctional
2031858 - GCP beta-level Role (was: CCO occasionally down, reporting networksecurity.googleapis.com API as disabled)
2031875 - [RFE]: Provide online documentation for the SRO CRD (via oc explain)
2031926 - [ipv6dualstack] After SVC conversion from single stack only to RequireDualStack, cannot curl NodePort from the node itself
2032006 - openshift-gitops-application-controller-0 failed to schedule with sufficient node allocatable resource
2032111 - arm64 cluster, create project and deploy the example deployment, pod is CrashLoopBackOff due to the image is built on linux+amd64
2032141 - open the alertrule link in new tab, got empty page
2032179 - [PROXY] external dns pod cannot reach to cloud API in the cluster behind a proxy
2032296 - Cannot create machine with ephemeral disk on Azure
2032407 - UI will show the default openshift template wizard for HANA template
2032415 - Templates page - remove "support level" badge and add "support level" column which should not be hard coded
2032421 - [RFE] UI integration with automatic updated images
2032516 - Not able to import git repo with .devfile.yaml
2032521 - openshift-installer intermittent failure on AWS with "Error: Provider produced inconsistent result after apply" when creating the aws_vpc_dhcp_options_association resource
2032547 - hardware devices table have filter when table is empty
2032565 - Deploying compressed files with a MachineConfig resource degrades the MachineConfigPool
2032566 - Cluster-ingress-router does not support Azure Stack
2032573 - Adopting enforces deploy_kernel/ramdisk which does not work with deploy_iso
2032589 - DeploymentConfigs ignore resolve-names annotation
2032732 - Fix styling conflicts due to recent console-wide CSS changes
2032831 - Knative Services and Revisions are not shown when Service has no ownerReference
2032851 - Networking is "not available" in Virtualization Overview
2032926 - Machine API components should use K8s 1.23 dependencies
2032994 - AddressPool IP is not allocated to service external IP with aggregationLength 24
2032998 - Can not achieve 250 pods/node with OVNKubernetes in a multiple worker node cluster
2033013 - Project dropdown in user preferences page is broken
2033044 - Unable to change import strategy if devfile is invalid
2033098 - Conjunction in ProgressiveListFooter.tsx is not translatable
2033111 - IBM VPC operator library bump removed global CLI args
2033138 - "No model registered for Templates" shows on customize wizard
2033215 - Flaky CI: crud/other-routes.spec.ts fails sometimes with an cypress ace/a11y AssertionError: 1 accessibility violation was detected
2033239 - [IPI on Alibabacloud] 'openshift-install' gets the wrong region (‘cn-hangzhou’) selected
2033257 - unable to use configmap for helm charts
2033271 - [IPI on Alibabacloud] destroying cluster succeeded, but the resource group deletion wasn’t triggered
2033290 - Product builds for console are failing
2033382 - MAPO is missing machine annotations
2033391 - csi-driver-shared-resource-operator sets unused CVO-manifest annotations
2033403 - Devfile catalog does not show provider information
2033404 - Cloud event schema is missing source type and resource field is using wrong value
2033407 - Secure route data is not pre-filled in edit flow form
2033422 - CNO not allowing LGW conversion from SGW in runtime
2033434 - Offer darwin/arm64 oc in clidownloads
2033489 - CCM operator failing on baremetal platform
2033518 - [aws-efs-csi-driver]Should not accept invalid FSType in sc for AWS EFS driver
2033524 - [IPI on Alibabacloud] interactive installer cannot list existing base domains
2033536 - [IPI on Alibabacloud] bootstrap complains invalid value for alibabaCloud.resourceGroupID when updating "cluster-infrastructure-02-config.yml" status, which leads to bootstrap failed and all master nodes NotReady
2033538 - Gather Cost Management Metrics Custom Resource
2033579 - SRO cannot update the special-resource-lifecycle ConfigMap if the data field is undefined
2033587 - Flaky CI test project-dashboard.scenario.ts: Resource Quotas Card was not found on project detail page
2033634 - list-style-type: disc is applied to the modal dropdowns
2033720 - Update samples in 4.10
2033728 - Bump OVS to 2.16.0-33
2033729 - remove runtime request timeout restriction for azure
2033745 - Cluster-version operator makes upstream update service / Cincinnati requests more frequently than intended
2033749 - Azure Stack Terraform fails without Local Provider
2033750 - Local volume should pull multi-arch image for kube-rbac-proxy
2033751 - Bump kubernetes to 1.23
2033752 - make verify fails due to missing yaml-patch
2033784 - set kube-apiserver degraded=true if webhook matches a virtual resource
2034004 - [e2e][automation] add tests for VM snapshot improvements
2034068 - [e2e][automation] Enhance tests for 4.10 downstream
2034087 - [OVN] EgressIP was assigned to the node which is not egress node anymore
2034097 - [OVN] After edit EgressIP object, the status is not correct
2034102 - [OVN] Recreate the deleted EgressIP object got InvalidEgressIP warning
2034129 - blank page returned when clicking 'Get started' button
2034144 - [OVN AWS] ovn-kube egress IP monitoring cannot detect the failure on ovn-k8s-mp0
2034153 - CNO does not verify MTU migration for OpenShiftSDN
2034155 - [OVN-K] [Multiple External Gateways] Per pod SNAT is disabled
2034170 - Use function.knative.dev for Knative Functions related labels
2034190 - unable to add new VirtIO disks to VMs
2034192 - Prometheus fails to insert reporting metrics when the sample limit is met
2034243 - regular user cant load template list
2034245 - installing a cluster on aws, gcp always fails with "Error: Incompatible provider version"
2034248 - GPU/Host device modal is too small
2034257 - regular user Create VM missing permissions alert
2034285 - [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources [Serial] [Suite:openshift/conformance/serial]
2034287 - do not block upgrades if we can't create storageclass in 4.10 in vsphere
2034300 - Du validator policy is NonCompliant after DU configuration completed
2034319 - Negation constraint is not validating packages
2034322 - CNO doesn't pick up settings required when ExternalControlPlane topology
2034350 - The CNO should implement the Whereabouts IP reconciliation cron job
2034362 - update description of disk interface
2034398 - The Whereabouts IPPools CRD should include the podref field
2034409 - Default CatalogSources should be pointing to 4.10 index images
2034410 - Metallb BGP, BFD: prometheus is not scraping the frr metrics
2034413 - cloud-network-config-controller fails to init with secret "cloud-credentials" not found in manual credential mode
2034460 - Summary: cloud-network-config-controller does not account for different environment
2034474 - Template's boot source is "Unknown source" before and after set enableCommonBootImageImport to true
2034477 - [OVN] Multiple EgressIP objects configured, EgressIPs weren't working properly
2034493 - Change cluster version operator log level
2034513 - [OVN] After update one EgressIP in EgressIP object, one internal IP lost from lr-policy-list
2034527 - IPI deployment fails 'timeout reached while inspecting the node' when provisioning network ipv6
2034528 - [IBM VPC] volumeBindingMode should be WaitForFirstConsumer
2034534 - Update ose-machine-api-provider-openstack images to be consistent with ART
2034537 - Update team
2034559 - KubeAPIErrorBudgetBurn firing outside recommended latency thresholds
2034563 - [Azure] create machine with wrong ephemeralStorageLocation value success
2034577 - Current OVN gateway mode should be reflected on node annotation as well
2034621 - context menu not popping up for application group
2034622 - Allow volume expansion by default in vsphere CSI storageclass 4.10
2034624 - Warn about unsupported CSI driver in vsphere operator
2034647 - missing volumes list in snapshot modal
2034648 - Rebase openshift-controller-manager to 1.23
2034650 - Rebase openshift/builder to 1.23
2034705 - vSphere: storage e2e tests logging configuration data
2034743 - EgressIP: assigning the same egress IP to a second EgressIP object after a ovnkube-master restart does not fail.
2034766 - Special Resource Operator(SRO) - no cert-manager pod created in dual stack environment
2034785 - ptpconfig with summary_interval cannot be applied
2034823 - RHEL9 should be starred in template list
2034838 - An external router can inject routes if no service is added
2034839 - Jenkins sync plugin does not synchronize ConfigMap having label role=jenkins-agent
2034879 - Lifecycle hook's name and owner shouldn't be allowed to be empty
2034881 - Cloud providers components should use K8s 1.23 dependencies
2034884 - ART cannot build the image because it tries to download controller-gen
2034889 - oc adm prune deployments does not work
2034898 - Regression in recently added Events feature
2034957 - update openshift-apiserver to kube 1.23.1
2035015 - ClusterLogForwarding CR remains stuck remediating forever
2035093 - openshift-cloud-network-config-controller never runs on Hypershift cluster
2035141 - [RFE] Show GPU/Host devices in template's details tab
2035146 - "kubevirt-plugin~PVC cannot be empty" shows on add-disk modal while adding existing PVC
2035167 - [cloud-network-config-controller] unable to deleted cloudprivateipconfig when deleting
2035199 - IPv6 support in mtu-migration-dispatcher.yaml
2035239 - e2e-metal-ipi-virtualmedia tests are permanently failing
2035250 - Peering with ebgp peer over multi-hops doesn't work
2035264 - [RFE] Provide a proper message for nonpriv user who not able to add PCI devices
2035315 - invalid test cases for AWS passthrough mode
2035318 - Upgrade management workflow needs to allow custom upgrade graph path for disconnected env
2035321 - Add Sprint 211 translations
2035326 - [ExternalCloudProvider] installation with additional network on workers fails
2035328 - Ccoctl does not ignore credentials request manifest marked for deletion
2035333 - Kuryr orphans ports on 504 errors from Neutron
2035348 - Fix two grammar issues in kubevirt-plugin.json strings
2035393 - oc set data --dry-run=server makes persistent changes to configmaps and secrets
2035409 - OLM E2E test depends on operator package that's no longer published
2035439 - SDN Automatic assignment EgressIP on GCP returned node IP address not egressIP address
2035453 - [IPI on Alibabacloud] 2 worker machines stuck in Failed phase due to connection to 'ecs-cn-hangzhou.aliyuncs.com' timeout, although the specified region is 'us-east-1'
2035454 - [IPI on Alibabacloud] the OSS bucket created during installation for image registry is not deleted after destroying the cluster
2035467 - UI: Queried metrics can't be ordered on Observe->Metrics page
2035494 - [SDN Migration]ovnkube-node pods CrashLoopBackOff after sdn migrated to ovn for RHEL workers
2035515 - [IBMCLOUD] allowVolumeExpansion should be true in storage class
2035602 - [e2e][automation] add tests for Virtualization Overview page cards
2035703 - Roles -> RoleBindings tab doesn't show RoleBindings correctly
2035704 - RoleBindings list page filter doesn't apply
2035705 - Azure 'Destroy cluster' gets stuck when the cluster resource group no longer exists.
2035757 - [IPI on Alibabacloud] one master node turned NotReady which leads to installation failed
2035772 - AccessMode and VolumeMode is not reserved for customize wizard
2035847 - Two dashes in the Cronjob / Job pod name
2035859 - the output of opm render doesn't contain olm.constraint which is defined in dependencies.yaml
2035882 - [BIOS setting values] Create events for all invalid settings in spec
2035903 - One redundant capi-operator credential requests in “oc adm extract --credentials-requests”
2035910 - [UI] Manual approval options are missing after ODF 4.10 installation starts when Manual Update approval is chosen
2035927 - Cannot enable HighNodeUtilization scheduler profile
2035933 - volume mode and access mode are empty in customize wizard review tab
2035969 - "ip a " shows "Error: Peer netns reference is invalid" after create test pods
2035986 - Some pods under kube-scheduler/kube-controller-manager are using the deprecated annotation
2036006 - [BIOS setting values] Attempt to set Integer parameter results in preparation error
2036029 - Newly added cloud-network-config operator doesn't support aws sts format credential
2036096 - [azure-file-csi-driver] there are no e2e tests for NFS backend
2036113 - cluster scaling new nodes ovs-configuration fails on all new nodes
2036567 - [csi-driver-nfs] Upstream merge: Bump k8s libraries to 1.23
2036569 - [cloud-provider-openstack] Upstream merge: Bump k8s libraries to 1.23
2036577 - OCP 4.10 nightly builds from 4.10.0-0.nightly-s390x-2021-12-18-034912 to 4.10.0-0.nightly-s390x-2022-01-11-233015 fail to upgrade from OCP 4.9.11 and 4.9.12 for network type OVNKubernetes for zVM hypervisor environments
2036622 - sdn-controller crashes when restarted while a previous egress IP assignment exists
2036717 - Valid AlertmanagerConfig custom resource with valid a mute time interval definition is rejected
2036826 - oc adm prune deployments can prune the RC/RS
2036827 - The ccoctl still accepts CredentialsRequests without ServiceAccounts on GCP platform
2036861 - kube-apiserver is degraded while enable multitenant
2036937 - Command line tools page shows wrong download ODO link
2036940 - oc registry login fails if the file is empty or stdout
2036951 - [cluster-csi-snapshot-controller-operator] proxy settings is being injected in container
2036989 - Route URL copy to clipboard button wraps to a separate line by itself
2036990 - ZTP "DU Done inform policy" never becomes compliant on multi-node clusters
2036993 - Machine API components should use Go lang version 1.17
2037036 - The tuned profile goes into degraded status and ksm.service is displayed in the log.
2037061 - aws and gcp CredentialsRequest manifests missing ServiceAccountNames list for cluster-api
2037073 - Alertmanager container fails to start because of startup probe never being successful
2037075 - Builds do not support CSI volumes
2037167 - Some log level in ibm-vpc-block-csi-controller are hard code
2037168 - IBM-specific Deployment manifest for package-server-manager should be excluded on non-IBM cluster-profiles
2037182 - PingSource badge color is not matched with knativeEventing color
2037203 - "Running VMs" card is too small in Virtualization Overview
2037209 - [IPI on Alibabacloud] worker nodes are put in the default resource group unexpectedly
2037237 - Add "This is a CD-ROM boot source" to customize wizard
2037241 - default TTL for noobaa cache buckets should be 0
2037246 - Cannot customize auto-update boot source
2037276 - [IBMCLOUD] vpc-node-label-updater may fail to label nodes appropriately
2037288 - Remove stale image reference
2037331 - Ensure the ccoctl behaviors are similar between aws and gcp on the existing resources
2037483 - Rbacs for Pods within the CBO should be more restrictive
2037484 - Bump dependencies to k8s 1.23
2037554 - Mismatched wave number error message should include the wave numbers that are in conflict
2037622 - [4.10-Alibaba CSI driver][Restore size for volumesnapshot/volumesnapshotcontent is showing as 0 in Snapshot feature for Alibaba platform]
2037635 - impossible to configure custom certs for default console route in ingress config
2037637 - configure custom certificate for default console route doesn't take effect for OCP >= 4.8
2037638 - Builds do not support CSI volumes as volume sources
2037664 - text formatting issue in Installed Operators list table
2037680 - [IPI on Alibabacloud] sometimes operator 'cloud-controller-manager' tells empty VERSION, due to conflicts on listening tcp :8080
2037689 - [IPI on Alibabacloud] sometimes operator 'cloud-controller-manager' tells empty VERSION, due to conflicts on listening tcp :8080
2037801 - Serverless installation is failing on CI jobs for e2e tests
2037813 - Metal Day 1 Networking - networkConfig Field Only Accepts String Format
2037856 - use lease for leader election
2037891 - 403 Forbidden error shows for all the graphs in each grafana dashboard after upgrade from 4.9 to 4.10
2037903 - Alibaba Cloud: delete-ram-user requires the credentials-requests
2037904 - upgrade operator deployment failed due to memory limit too low for manager container
2038021 - [4.10-Alibaba CSI driver][Default volumesnapshot class is not added/present after successful cluster installation]
2038034 - non-privileged user cannot see auto-update boot source
2038053 - Bump dependencies to k8s 1.23
2038088 - Remove ipa-downloader references
2038160 - The default project missed the annotation : openshift.io/node-selector: ""
2038166 - Starting from Go 1.17 invalid certificates will render a cluster non-functional
2038196 - must-gather is missing collecting some metal3 resources
2038240 - Error when configuring a file using permissions bigger than decimal 511 (octal 0777)
2038253 - Validator Policies are long lived
2038272 - Failures to build a PreprovisioningImage are not reported
2038384 - Azure Default Instance Types are Incorrect
2038389 - Failing test: [sig-arch] events should not repeat pathologically
2038412 - Import page calls the git file list unnecessarily twice from GitHub/GitLab/Bitbucket
2038465 - Upgrade chromedriver to 90.x to support Mac M1 chips
2038481 - kube-controller-manager-guard and openshift-kube-scheduler-guard pods being deleted and restarted on a cordoned node when drained
2038596 - Auto egressIP for OVN cluster on GCP: After egressIP object is deleted, egressIP still takes effect
2038663 - update kubevirt-plugin OWNERS
2038691 - [AUTH-8] Panic on user login when the user belongs to a group in the IdP side and the group already exists via "oc adm groups new"
2038705 - Update ptp reviewers
2038761 - Open Observe->Targets page, wait for a while, page become blank
2038768 - All the filters on the Observe->Targets page can't work
2038772 - Some monitors failed to display on Observe->Targets page
2038793 - [SDN EgressIP] After reboot egress node, the egressip was lost from egress node
2038827 - should add user containers in /etc/subuid and /etc/subgid to support run pods in user namespaces
2038832 - New templates for centos stream8 are missing registry suggestions in create vm wizard
2038840 - [SDN EgressIP]cloud-network-config-controller pod was CrashLoopBackOff after some operation
2038864 - E2E tests fail because multi-hop-net was not created
2038879 - All Builds are getting listed in DeploymentConfig under workloads on OpenShift Console
2038934 - CSI driver operators should use the trusted CA bundle when cluster proxy is configured
2038968 - Move feature gates from a carry patch to openshift/api
2039056 - Layout issue with breadcrumbs on API explorer page
2039057 - Kind column is not wide enough in API explorer page
2039064 - Bulk Import e2e test flaking at a high rate
2039065 - Diagnose and fix Bulk Import e2e test that was previously disabled
2039085 - Cloud credential operator configuration failing to apply in hypershift/ROKS clusters
2039099 - [OVN EgressIP GCP] After reboot egress node, egressip that was previously assigned got lost
2039109 - [FJ OCP4.10 Bug]: startironic.sh failed to pull the image of image-customization container when behind a proxy
2039119 - CVO hotloops on Service openshift-monitoring/cluster-monitoring-operator
2039170 - [upgrade]Error shown on registry operator "missing the cloud-provider-config configmap" after upgrade
2039227 - Improve image customization server parameter passing during installation
2039241 - Improve image customization server parameter passing during installation
2039244 - Helm Release revision history page crashes the UI
2039294 - SDN controller metrics cannot be consumed correctly by prometheus
2039311 - oc Does Not Describe Build CSI Volumes
2039315 - Helm release list page should only fetch secrets for deployed charts
2039321 - SDN controller metrics are not being consumed by prometheus
2039330 - Create NMState button doesn't work in OperatorHub web console
2039339 - cluster-ingress-operator should report Unupgradeable if user has modified the aws resources annotations
2039345 - CNO does not verify the minimum MTU value for IPv6/dual-stack clusters.
2039359 - oc adm prune deployments can't prune the RS where the associated Deployment no longer exists
2039382 - gather_metallb_logs does not have execution permission
2039406 - logout from rest session after vsphere operator sync is finished
2039408 - Add GCP region northamerica-northeast2 to allowed regions
2039414 - Cannot see the weights increased for NodeAffinity, InterPodAffinity, TaintandToleration
2039425 - No need to set KlusterletAddonConfig CR applicationManager->enabled: true in RAN ztp deployment
2039491 - oc - git:// protocol used in unit tests
2039516 - Bump OVN to ovn21.12-21.12.0-25
2039529 - Project Dashboard Resource Quotas Card empty state test flaking at a high rate
2039534 - Diagnose and fix Project Dashboard Resource Quotas Card test that was previously disabled
2039541 - Resolv-prepender script duplicating entries
2039586 - [e2e] update centos8 to centos stream8
2039618 - VM created from SAP HANA template leads to 404 page if leave one network parameter empty
2039619 - [AWS] In tree provisioner storageclass aws disk type should contain 'gp3' and csi provisioner storageclass default aws disk type should be 'gp3'
2039670 - Create PDBs for control plane components
2039678 - Page goes blank when create image pull secret
2039689 - [IPI on Alibabacloud] Pay-by-specification NAT is no longer supported
2039743 - React missing key warning when open operator hub detail page (and maybe others as well)
2039756 - React missing key warning when open KnativeServing details
2039770 - Observe dashboard doesn't react on time-range changes after browser reload when perspective is changed in another tab
2039776 - Observe dashboard shows nothing if the URL links to a non-existing dashboard
2039781 - [GSS] OBC is not visible by admin of a Project on Console
2039798 - Contextual binding with Operator backed service creates visual connector instead of Service binding connector
2039868 - Insights Advisor widget is not in the disabled state when the Insights Operator is disabled
2039880 - Log level too low for control plane metrics
2039919 - Add E2E test for router compression feature
2039981 - ZTP for standard clusters installs stalld on master nodes
2040132 - Flag --port has been deprecated, This flag has no effect now and will be removed in v1.24. You can use --secure-port instead
2040136 - external-dns-operator pod keeps restarting and reports error: timed out waiting for cache to be synced
2040143 - [IPI on Alibabacloud] suggest to remove region "cn-nanjing" or provide better error message
2040150 - Update ConfigMap keys for IBM HPCS
2040160 - [IPI on Alibabacloud] installation fails when region does not support pay-by-bandwidth
2040285 - Bump build-machinery-go for console-operator to pickup change in yaml-patch repository
2040357 - bump OVN to ovn-2021-21.12.0-11.el8fdp
2040376 - "unknown instance type" error for supported m6i.xlarge instance
2040394 - Controller: enqueue the failed configmap till services update
2040467 - Cannot build ztp-site-generator container image
2040504 - Change AWS EBS GP3 IOPS in MachineSet doesn't take effect in OpenShift 4
2040521 - RouterCertsDegraded certificate could not validate route hostname v4-0-config-system-custom-router-certs.apps
2040535 - Auto-update boot source is not available in customize wizard
2040540 - ovs hardware offload: ovsargs format error when adding vf netdev name
2040603 - rhel worker scaleup playbook failed because missing some dependency of podman
2040616 - rolebindings page doesn't load for normal users
2040620 - [MAPO] Error pulling MAPO image on installation
2040653 - Topology sidebar warns that another component is updated while rendering
2040655 - User settings update fails when selecting application in topology sidebar
2040661 - Different react warnings about updating state on unmounted components when leaving topology
2040670 - Permafailing CI job: periodic-ci-openshift-release-master-nightly-4.10-e2e-gcp-libvirt-cert-rotation
2040671 - [Feature:IPv6DualStack] most tests are failing in dualstack ipi
2040694 - Three upstream HTTPClientConfig struct fields missing in the operator
2040705 - Du policy for standard cluster runs the PTP daemon on masters and workers
2040710 - cluster-baremetal-operator cannot update BMC subscription CR
2040741 - Add CI test(s) to ensure that metal3 components are deployed in vSphere, OpenStack and None platforms
2040782 - Import YAML page blocks input with more than one generateName attribute
2040783 - The Import from YAML summary page doesn't show the resource name if created via generateName attribute
2040791 - Default PGT policies must be 'inform' to integrate with the Lifecycle Operator
2040793 - Fix snapshot e2e failures
2040880 - do not block upgrades if we can't connect to vcenter
2041087 - MetalLB: MetalLB CR is not upgraded automatically from 4.9 to 4.10
2041093 - autounattend.xml missing
2041204 - link to templates in virtualization-cluster-overview inventory card is to all templates
2041319 - [IPI on Alibabacloud] installation in region "cn-shanghai" failed, due to "Resource alicloud_vswitch CreateVSwitch Failed...InvalidCidrBlock.Overlapped"
2041326 - Should bump cluster-kube-descheduler-operator to kubernetes version V1.23
2041329 - aws and gcp CredentialsRequest manifests missing ServiceAccountNames list for cloud-network-config-controller
2041361 - [IPI on Alibabacloud] Disable session persistence and remove bandwidth peak of listener
2041441 - Provision volume with size 3000Gi even if sizeRange: '[10-2000]GiB' in storageclass on IBM cloud
2041466 - Kubedescheduler version is missing from the operator logs
2041475 - React components should have a (mostly) unique name in react dev tools to simplify code analyses
2041483 - MetallB: quay.io/openshift/origin-kube-rbac-proxy:4.10 deploy Metallb CR is missing (controller and speaker pods)
2041492 - Spacing between resources in inventory card is too small
2041509 - GCP Cloud provider components should use K8s 1.23 dependencies
2041510 - cluster-baremetal-operator doesn't run baremetal-operator's subscription webhook
2041541 - audit: ManagedFields are dropped using API not annotation
2041546 - ovnkube: set election timer at RAFT cluster creation time
2041554 - use lease for leader election
2041581 - KubeDescheduler operator log shows "Use of insecure cipher detected"
2041583 - etcd and api server cpu mask interferes with a guaranteed workload
2041598 - Including CA bundle in Azure Stack cloud config causes MCO failure
2041605 - Dynamic Plugins: discrepancy in proxy alias documentation/implementation
2041620 - bundle CSV alm-examples does not parse
2041641 - Fix inotify leak and kubelet retaining memory
2041671 - Delete templates leads to 404 page
2041694 - [IPI on Alibabacloud] installation fails when region does not support the cloud_essd disk category
2041734 - ovs hwol: VFs are unbind when switchdev mode is enabled
2041750 - [IPI on Alibabacloud] trying "create install-config" with region "cn-wulanchabu (China (Ulanqab))" (or "ap-southeast-6 (Philippines (Manila))", "cn-guangzhou (China (Guangzhou))") failed due to invalid endpoint
2041763 - The Observe > Alerting pages no longer have their default sort order applied
2041830 - CI: ovn-kubernetes-master-e2e-aws-ovn-windows is broken
2041854 - Communities / Local prefs are applied to all the services regardless of the pool, and only one community is applied
2041882 - cloud-network-config operator can't work normally on GCP workload identity cluster
2041888 - Intermittent incorrect build to run correlation, leading to run status updates applied to wrong build, builds stuck in non-terminal phases
2041926 - [IPI on Alibabacloud] Installer ignores public zone when it does not exist
2041971 - [vsphere] Reconciliation of mutating webhooks didn't happen
2041989 - CredentialsRequest manifests being installed for ibm-cloud-managed profile
2041999 - [PROXY] external dns pod cannot recognize custom proxy CA
2042001 - unexpectedly found multiple load balancers
2042029 - kubedescheduler fails to install completely
2042036 - [IBMCLOUD] "openshift-install explain installconfig.platform.ibmcloud" contains not yet supported custom vpc parameters
2042049 - Seeing warning related to unrecognized feature gate in kubescheduler & KCM logs
2042059 - update discovery burst to reflect lots of CRDs on openshift clusters
2042069 - Revert toolbox to rhcos-toolbox
2042169 - Can not delete egressnetworkpolicy in Foreground propagation
2042181 - MetalLB: User should not be allowed add same bgp advertisement twice in BGP address pool
2042265 - [IBM]"--scale-down-utilization-threshold" doesn't work on IBMCloud
2042274 - Storage API should be used when creating a PVC
2042315 - Baremetal IPI deployment with IPv6 control plane and disabled provisioning network fails as the nodes do not pass introspection
2042366 - Lifecycle hooks should be independently managed
2042370 - [IPI on Alibabacloud] installer panics when the zone does not have an enhanced NAT gateway
2042382 - [e2e][automation] CI takes more than 2 hours to run
2042395 - Add prerequisites for active health checks test
2042438 - Missing rpms in openstack-installer image
2042466 - Selection does not happen when switching from Topology Graph to List View
2042493 - No way to verify if IPs with leading zeros are still valid in the apiserver
2042567 - insufficient info on CodeReady Containers configuration
2042600 - Alone, the io.kubernetes.cri-o.Devices option poses a security risk
2042619 - Overview page of the console is broken for hypershift clusters
2042655 - [IPI on Alibabacloud] cluster becomes unusable if there is only one kube-apiserver pod running
2042711 - [IBMCloud] Machine Deletion Hook cannot work on IBMCloud
2042715 - [AliCloud] Machine Deletion Hook cannot work on AliCloud
2042770 - [IPI on Alibabacloud] with vpcID & vswitchIDs specified, the installer would still try creating NAT gateway unexpectedly
2042829 - Topology performance: HPA was fetched for each Deployment (Pod Ring)
2042851 - Create template from SAP HANA template flow - VM is created instead of a new template
2042906 - Edit machineset with same machine deletion hook name succeed
2042960 - azure-file CI fails with "gid(0) in storageClass and pod fsgroup(1000) are not equal"
2043003 - [IPI on Alibabacloud] 'destroy cluster' of a failed installation (bug2041694) stuck after 'stage=Nat gateways'
2043042 - [Serial] [sig-auth][Feature:OAuthServer] [RequestHeaders] [IdP] test RequestHeaders IdP [Suite:openshift/conformance/serial]
2043043 - Cluster Autoscaler should use K8s 1.23 dependencies
2043064 - Topology performance: Unnecessary rerenderings in topology nodes (unchanged mobx props)
2043078 - Favorite system projects not visible in the project selector after toggling "Show default projects".
2043117 - Recommended operators links are erroneously treated as external
2043130 - Update CSI sidecars to the latest release for 4.10
2043234 - Missing validation when creating several BGPPeers with the same peerAddress
2043240 - Sync openshift/descheduler with sigs.k8s.io/descheduler
2043254 - crio does not bind the security profiles directory
2043296 - Ignition fails when reusing existing statically-keyed LUKS volume
2043297 - [4.10] Bootimage bump tracker
2043316 - RHCOS VM fails to boot on Nutanix AOS
2043446 - Rebase aws-efs-utils to the latest upstream version.
2043556 - Add proper ci-operator configuration to ironic and ironic-agent images
2043577 - DPU network operator
2043651 - Fix bug with exp. backoff working correctly when setting nextCheck in vsphere operator
2043675 - Too many machines deleted by cluster autoscaler when scaling down
2043683 - Revert bug 2039344 Ignoring IPv6 addresses against etcd cert validation
2043709 - Logging flags no longer being bound to command line
2043721 - Installer bootstrap hosts using outdated kubelet containing bugs
2043731 - [IBMCloud] terraform outputs missing for ibmcloud bootstrap and worker ips for must-gather
2043759 - Bump cluster-ingress-operator to k8s.io/api 1.23
2043780 - Bump router to k8s.io/api 1.23
2043787 - Bump cluster-dns-operator to k8s.io/api 1.23
2043801 - Bump CoreDNS to k8s.io/api 1.23
2043802 - EgressIP stopped working after single egressIP for a netnamespace is switched to the other node of HA pair after the first egress node is shutdown
2043961 - [OVN-K] If pod creation fails, retry doesn't work as expected.
2044201 - Templates golden image parameters names should be supported
2044244 - Builds are failing after upgrading the cluster with builder image [jboss-webserver-5/jws56-openjdk8-openshift-rhel8]
2044248 - [IBMCloud][vpc.block.csi.ibm.io]Cluster common user use the storageclass without parameter “csi.storage.k8s.io/fstype” create pvc,pod successfully but write data to the pod's volume failed of "Permission denied"
2044303 - [ovn][cloud-network-config-controller] cloudprivateipconfigs ips were left after deleting egressip objects
2044347 - Bump to kubernetes 1.23.3
2044481 - collect sharedresource cluster scoped instances with must-gather
2044496 - Unable to create hardware events subscription - failed to add finalizers
2044628 - CVE-2022-21673 grafana: Forward OAuth Identity Token can allow users to access some data sources
2044680 - Additional libovsdb performance and resource consumption fixes
2044704 - Observe > Alerting pages should not show runbook links in 4.10
2044717 - [e2e] improve tests for upstream test environment
2044724 - Remove namespace column on VM list page when a project is selected
2044745 - Upgrading cluster from 4.9 to 4.10 on Azure (ARO) causes the cloud-network-config-controller pod to CrashLoopBackOff
2044808 - machine-config-daemon-pull.service: use cp instead of cat when extracting MCD in OKD
2045024 - CustomNoUpgrade alerts should be ignored
2045112 - vsphere-problem-detector has missing rbac rules for leases
2045199 - SnapShot with Disk Hot-plug hangs
2045561 - Cluster Autoscaler should use the same default Group value as Cluster API
2045591 - Reconciliation of aws pod identity mutating webhook did not happen
2045849 - Add Sprint 212 translations
2045866 - MCO Operator pod spam "Error creating event" warning messages in 4.10
2045878 - Sync upstream 1.16.0 downstream; includes hybrid helm plugin
2045916 - [IBMCloud] Default machine profile in installer is unreliable
2045927 - [FJ OCP4.10 Bug]: Podman failed to pull the IPA image due to the loss of proxy environment
2046025 - [IPI on Alibabacloud] pre-configured alicloud DNS private zone is deleted after destroying cluster, please clarify
2046137 - oc output for unknown commands is not human readable
2046296 - When creating multiple consecutive egressIPs on GCP not all of them get assigned to the instance
2046297 - Bump DB reconnect timeout
2046517 - In Notification drawer, the "Recommendations" header shows when there isn't any recommendations
2046597 - Observe > Targets page may show the wrong service monitor if multiple monitors have the same namespace & label selectors
2046626 - Allow setting custom metrics for Ansible-based Operators
2046683 - [AliCloud]"--scale-down-utilization-threshold" doesn't work on AliCloud
2047025 - Installation fails because the Alibaba CSI driver operator is degraded
2047190 - Bump Alibaba CSI driver for 4.10
2047238 - When using communities and localpreferences together, only localpreference gets applied
2047255 - alibaba: resourceGroupID not found
2047258 - [aws-usgov] fatal error occurred if AMI is not provided for AWS GovCloud regions
2047317 - Update HELM OWNERS files under Dev Console
2047455 - [IBM Cloud] Update custom image os type
2047496 - Add image digest feature
2047779 - do not degrade cluster if storagepolicy creation fails
2047927 - 'oc get project' caused 'Observed a panic: cannot deep copy core.NamespacePhase' when AllRequestBodies is used
2047929 - use lease for leader election
2047975 - [sig-network][Feature:Router] The HAProxy router should override the route host for overridden domains with a custom value [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2048046 - New route annotation to show another URL or hide topology URL decorator doesn't work for Knative Services
2048048 - Application tab in User Preferences dropdown menus are too wide.
2048050 - Topology list view items are not highlighted on keyboard navigation
2048117 - [IBM]Shouldn't change status.storage.bucket and status.storage.resourceKeyCRN when updating spec.storage.ibmcos with invalid value
2048413 - Bond CNI: Failed to attach Bond NAD to pod
2048443 - Image registry operator panics when finalizes config deletion
2048478 - [alicloud] CCM deploys alibaba-cloud-controller-manager from quay.io/openshift/origin-*
2048484 - SNO: cluster-policy-controller failed to start due to missing serving-cert/tls.crt
2048598 - Web terminal view is broken
2048836 - ovs-configure mis-detecting the ipv6 status on IPv4 only cluster causing Deployment failure
2048891 - Topology page is crashed
2049003 - 4.10: [IBMCloud] ibm-vpc-block-csi-node does not specify an update strategy, only resource requests, or priority class
2049043 - Cannot create VM from template
2049156 - 'oc get project' caused 'Observed a panic: cannot deep copy core.NamespacePhase' when AllRequestBodies is used
2049886 - Placeholder bug for OCP 4.10.0 metadata release
2049890 - Warning annotation for pods with cpu requests or limits on single-node OpenShift cluster without workload partitioning
2050189 - [aws-efs-csi-driver] Merge upstream changes since v1.3.2
2050190 - [aws-ebs-csi-driver] Merge upstream changes since v1.2.0
2050227 - Installation on PSI fails with: 'openstack platform does not have the required standard-attr-tag network extension'
2050247 - Failing test in periodics: [sig-network] Services should respect internalTrafficPolicy=Local Pod and Node, to Pod (hostNetwork: true) [Feature:ServiceInternalTrafficPolicy] [Skipped:Network/OVNKubernetes] [Suite:openshift/conformance/parallel] [Suite:k8s]
2050250 - Install fails to bootstrap, complaining about DefragControllerDegraded and sad members
2050310 - ContainerCreateError when trying to launch large (>500) numbers of pods across nodes
2050370 - alert data for burn budget needs to be updated to prevent regression
2050393 - ZTP missing support for local image registry and custom machine config
2050557 - Can not push images to image-registry when enabling KMS encryption in AlibabaCloud
2050737 - Remove metrics and events for master port offsets
2050801 - Vsphere upi tries to access vsphere during manifests generation phase
2050883 - Logger object in LSO does not log source location accurately
2051692 - co/image-registry is degrade because ImagePrunerDegraded: Job has reached the specified backoff limit
2052062 - Whereabouts should implement client-go 1.22+
2052125 - [4.10] Crio appears to be coredumping in some scenarios
2052210 - [aws-c2s] kube-apiserver crashloops due to missing cloud config
2052339 - Failing webhooks will block an upgrade to 4.10 mid-way through the upgrade.
2052458 - [IBM Cloud] ibm-vpc-block-csi-controller does not specify an update strategy, priority class, or only resource requests
2052598 - kube-scheduler should use configmap lease
2052599 - kube-controller-manger should use configmap lease
2052600 - Failed to scaleup RHEL machine against OVN cluster due to jq tool is required by configure-ovs.sh
2052609 - [vSphere CSI driver Operator] RWX volumes counts metrics vsphere_rwx_volumes_total not valid
2052611 - MetalLB: BGPPeer object does not have ability to set ebgpMultiHop
2052612 - MetalLB: Webhook Validation: Two BGPPeers instances can have different router ID set.
2052644 - Infinite OAuth redirect loop post-upgrade to 4.10.0-rc.1
2052666 - [4.10.z] change gitmodules to rhcos-4.10 branch
2052756 - [4.10] PVs are not being cleaned up after PVC deletion
2053175 - oc adm catalog mirror throws 'missing signature key' error when using file://local/index
2053218 - ImagePull fails with error "unable to pull manifest from example.com/busy.box:v5 invalid reference format"
2053252 - Sidepanel for Connectors/workloads in topology shows invalid tabs
2053268 - inability to detect static lifecycle failure
2053314 - requestheader IDP test doesn't wait for cleanup, causing high failure rates
2053323 - OpenShift-Ansible BYOH Unit Tests are Broken
2053339 - Remove dev preview badge from IBM FlashSystem deployment windows
2053751 - ztp-site-generate container is missing convenience entrypoint
2053945 - [4.10] Failed to apply sriov policy on intel nics
2054109 - Missing "app" label
2054154 - RoleBinding in project without subject is causing "Project access" page to fail
2054244 - Latest pipeline run should be listed on the top of the pipeline run list
2054288 - console-master-e2e-gcp-console is broken
2054562 - DPU network operator 4.10 branch need to sync with master
2054897 - Unable to deploy hw-event-proxy operator
2055193 - e2e-metal-ipi-serial-ovn-ipv6 is failing frequently
2055358 - Summary Interval Hardcoded in PTP Operator if Set in the Global Body Instead of Command Line
2055371 - Remove Check which enforces summary_interval must match logSyncInterval
2055689 - [ibm]Operator storage PROGRESSING and DEGRADED is true during fresh install for ocp4.11
2055894 - CCO mint mode will not work for Azure after sunsetting of Active Directory Graph API
2056441 - AWS EFS CSI driver should use the trusted CA bundle when cluster proxy is configured
2056479 - ovirt-csi-driver-node pods are crashing intermittently
2056572 - reconcilePrecaching error: cannot list resource "clusterserviceversions" in API group "operators.coreos.com" at the cluster scope"
2056629 - [4.10] EFS CSI driver can't unmount volumes with "wait: no child processes"
2056878 - (dummy bug) ovn-kubernetes ExternalTrafficPolicy still SNATs
2056928 - Ingresscontroller LB scope change behaviour differs for different values of aws-load-balancer-internal annotation
2056948 - post 1.23 rebase: regression in service-load balancer reliability
2057438 - Service Level Agreement (SLA) always show 'Unknown'
2057721 - Fix Proxy support in RHACM 2.4.2
2057724 - Image creation fails when NMstateConfig CR is empty
2058641 - [4.10] Pod density test causing problems when using kube-burner
2059761 - 4.9.23-s390x-machine-os-content manifest invalid when mirroring content for disconnected install
2060610 - Broken access to public images: Unable to connect to the server: no basic auth credentials
2060956 - service domain can't be resolved when networkpolicy is used in OCP 4.10-rc
- References:
https://access.redhat.com/security/cve/CVE-2014-3577 https://access.redhat.com/security/cve/CVE-2016-10228 https://access.redhat.com/security/cve/CVE-2017-14502 https://access.redhat.com/security/cve/CVE-2018-20843 https://access.redhat.com/security/cve/CVE-2018-1000858 https://access.redhat.com/security/cve/CVE-2019-8625 https://access.redhat.com/security/cve/CVE-2019-8710 https://access.redhat.com/security/cve/CVE-2019-8720 https://access.redhat.com/security/cve/CVE-2019-8743 https://access.redhat.com/security/cve/CVE-2019-8764 https://access.redhat.com/security/cve/CVE-2019-8766 https://access.redhat.com/security/cve/CVE-2019-8769 https://access.redhat.com/security/cve/CVE-2019-8771 https://access.redhat.com/security/cve/CVE-2019-8782 https://access.redhat.com/security/cve/CVE-2019-8783 https://access.redhat.com/security/cve/CVE-2019-8808 https://access.redhat.com/security/cve/CVE-2019-8811 https://access.redhat.com/security/cve/CVE-2019-8812 https://access.redhat.com/security/cve/CVE-2019-8813 https://access.redhat.com/security/cve/CVE-2019-8814 https://access.redhat.com/security/cve/CVE-2019-8815 https://access.redhat.com/security/cve/CVE-2019-8816 https://access.redhat.com/security/cve/CVE-2019-8819 https://access.redhat.com/security/cve/CVE-2019-8820 https://access.redhat.com/security/cve/CVE-2019-8823 https://access.redhat.com/security/cve/CVE-2019-8835 https://access.redhat.com/security/cve/CVE-2019-8844 https://access.redhat.com/security/cve/CVE-2019-8846 https://access.redhat.com/security/cve/CVE-2019-9169 https://access.redhat.com/security/cve/CVE-2019-13050 https://access.redhat.com/security/cve/CVE-2019-13627 https://access.redhat.com/security/cve/CVE-2019-14889 https://access.redhat.com/security/cve/CVE-2019-15903 https://access.redhat.com/security/cve/CVE-2019-19906 https://access.redhat.com/security/cve/CVE-2019-20454 https://access.redhat.com/security/cve/CVE-2019-20807 https://access.redhat.com/security/cve/CVE-2019-25013 
https://access.redhat.com/security/cve/CVE-2020-1730 https://access.redhat.com/security/cve/CVE-2020-3862 https://access.redhat.com/security/cve/CVE-2020-3864 https://access.redhat.com/security/cve/CVE-2020-3865 https://access.redhat.com/security/cve/CVE-2020-3867 https://access.redhat.com/security/cve/CVE-2020-3868 https://access.redhat.com/security/cve/CVE-2020-3885 https://access.redhat.com/security/cve/CVE-2020-3894 https://access.redhat.com/security/cve/CVE-2020-3895 https://access.redhat.com/security/cve/CVE-2020-3897 https://access.redhat.com/security/cve/CVE-2020-3899 https://access.redhat.com/security/cve/CVE-2020-3900 https://access.redhat.com/security/cve/CVE-2020-3901 https://access.redhat.com/security/cve/CVE-2020-3902 https://access.redhat.com/security/cve/CVE-2020-8927 https://access.redhat.com/security/cve/CVE-2020-9802 https://access.redhat.com/security/cve/CVE-2020-9803 https://access.redhat.com/security/cve/CVE-2020-9805 https://access.redhat.com/security/cve/CVE-2020-9806 https://access.redhat.com/security/cve/CVE-2020-9807 https://access.redhat.com/security/cve/CVE-2020-9843 https://access.redhat.com/security/cve/CVE-2020-9850 https://access.redhat.com/security/cve/CVE-2020-9862 https://access.redhat.com/security/cve/CVE-2020-9893 https://access.redhat.com/security/cve/CVE-2020-9894 https://access.redhat.com/security/cve/CVE-2020-9895 https://access.redhat.com/security/cve/CVE-2020-9915 https://access.redhat.com/security/cve/CVE-2020-9925 https://access.redhat.com/security/cve/CVE-2020-9952 https://access.redhat.com/security/cve/CVE-2020-10018 https://access.redhat.com/security/cve/CVE-2020-11793 https://access.redhat.com/security/cve/CVE-2020-13434 https://access.redhat.com/security/cve/CVE-2020-14391 https://access.redhat.com/security/cve/CVE-2020-15358 https://access.redhat.com/security/cve/CVE-2020-15503 https://access.redhat.com/security/cve/CVE-2020-25660 https://access.redhat.com/security/cve/CVE-2020-25677 
https://access.redhat.com/security/cve/CVE-2020-27618 https://access.redhat.com/security/cve/CVE-2020-27781 https://access.redhat.com/security/cve/CVE-2020-29361 https://access.redhat.com/security/cve/CVE-2020-29362 https://access.redhat.com/security/cve/CVE-2020-29363 https://access.redhat.com/security/cve/CVE-2021-3121 https://access.redhat.com/security/cve/CVE-2021-3326 https://access.redhat.com/security/cve/CVE-2021-3449 https://access.redhat.com/security/cve/CVE-2021-3450 https://access.redhat.com/security/cve/CVE-2021-3516 https://access.redhat.com/security/cve/CVE-2021-3517 https://access.redhat.com/security/cve/CVE-2021-3518 https://access.redhat.com/security/cve/CVE-2021-3520 https://access.redhat.com/security/cve/CVE-2021-3521 https://access.redhat.com/security/cve/CVE-2021-3537 https://access.redhat.com/security/cve/CVE-2021-3541 https://access.redhat.com/security/cve/CVE-2021-3733 https://access.redhat.com/security/cve/CVE-2021-3749 https://access.redhat.com/security/cve/CVE-2021-20305 https://access.redhat.com/security/cve/CVE-2021-21684 https://access.redhat.com/security/cve/CVE-2021-22946 https://access.redhat.com/security/cve/CVE-2021-22947 https://access.redhat.com/security/cve/CVE-2021-25215 https://access.redhat.com/security/cve/CVE-2021-27218 https://access.redhat.com/security/cve/CVE-2021-30666 https://access.redhat.com/security/cve/CVE-2021-30761 https://access.redhat.com/security/cve/CVE-2021-30762 https://access.redhat.com/security/cve/CVE-2021-33928 https://access.redhat.com/security/cve/CVE-2021-33929 https://access.redhat.com/security/cve/CVE-2021-33930 https://access.redhat.com/security/cve/CVE-2021-33938 https://access.redhat.com/security/cve/CVE-2021-36222 https://access.redhat.com/security/cve/CVE-2021-37750 https://access.redhat.com/security/cve/CVE-2021-39226 https://access.redhat.com/security/cve/CVE-2021-41190 https://access.redhat.com/security/cve/CVE-2021-43813 https://access.redhat.com/security/cve/CVE-2021-44716 
https://access.redhat.com/security/cve/CVE-2021-44717 https://access.redhat.com/security/cve/CVE-2022-0532 https://access.redhat.com/security/cve/CVE-2022-21673 https://access.redhat.com/security/cve/CVE-2022-24407 https://access.redhat.com/security/updates/classification/#moderate
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2022 Red Hat, Inc.
This software, such as Apache HTTP Server, is common to multiple JBoss middleware products, and is packaged under Red Hat JBoss Core Services to allow for faster distribution of updates, and for a more consistent update experience.
This release adds the new Apache HTTP Server 2.4.37 Service Pack 7 packages that are part of the JBoss Core Services offering. Refer to the Release Notes for information on the most significant bug fixes and enhancements included in this release. Solution:
Before applying the update, back up your existing installation, including all applications, configuration files, databases and database settings, and so on.
The References section of this erratum contains a download link for the update. You must be logged in to download the update. Bugs fixed (https://bugzilla.redhat.com/):
1941547 - CVE-2021-3450 openssl: CA certificate check bypass with X509_V_FLAG_X509_STRICT
1941554 - CVE-2021-3449 openssl: NULL pointer dereference in signature_algorithms processing
- ==========================================================================
Ubuntu Security Notice USN-5038-1
August 12, 2021
postgresql-10, postgresql-12, postgresql-13 vulnerabilities
A security issue affects these releases of Ubuntu and its derivatives:
- Ubuntu 21.04
- Ubuntu 20.04 LTS
- Ubuntu 18.04 LTS
Summary:
Several security issues were fixed in PostgreSQL.
Software Description:
- postgresql-13: Object-relational SQL database
- postgresql-12: Object-relational SQL database
- postgresql-10: Object-relational SQL database
Details:
It was discovered that the PostgreSQL planner could create incorrect plans in certain circumstances. A remote attacker could use this issue to cause PostgreSQL to crash, resulting in a denial of service, or possibly obtain sensitive information from memory. (CVE-2021-3677)
It was discovered that PostgreSQL incorrectly handled certain SSL renegotiation ClientHello messages from clients. A remote attacker could possibly use this issue to cause PostgreSQL to crash, resulting in a denial of service. (CVE-2021-3449)
Update instructions:
The problem can be corrected by updating your system to the following package versions:
Ubuntu 21.04:
  postgresql-13  13.4-0ubuntu0.21.04.1

Ubuntu 20.04 LTS:
  postgresql-12  12.8-0ubuntu0.20.04.1

Ubuntu 18.04 LTS:
  postgresql-10  10.18-0ubuntu0.18.04.1
This update uses a new upstream release, which includes additional bug fixes. After a standard system update you need to restart PostgreSQL to make all the necessary changes.
Security Fix(es):
- golang: crypto/tls: certificate of wrong type is causing TLS client to panic (CVE-2021-34558)
- golang: net: lookup functions may return invalid host names (CVE-2021-33195)
- golang: net/http/httputil: ReverseProxy forwards connection headers if first one is empty (CVE-2021-33197)
- golang: math/big.Rat: may cause a panic or an unrecoverable fatal error if passed inputs with very large exponents (CVE-2021-33198)
- golang: encoding/xml: infinite loop when using xml.NewTokenDecoder with a custom TokenReader (CVE-2021-27918)
- golang: net/http: panic in ReadRequest and ReadResponse when reading a very large header (CVE-2021-31525)
- golang: archive/zip: malformed archive may cause panic or memory exhaustion (CVE-2021-33196)
It was found that CVE-2021-27918, CVE-2021-31525, and CVE-2021-33196 had been incorrectly listed as fixed in the RHSA for Serverless client kn 1.16.0. Bugs fixed (https://bugzilla.redhat.com/):
1983596 - CVE-2021-34558 golang: crypto/tls: certificate of wrong type is causing TLS client to panic
1983651 - Release of OpenShift Serverless Serving 1.17.0
1983654 - Release of OpenShift Serverless Eventing 1.17.0
1989564 - CVE-2021-33195 golang: net: lookup functions may return invalid host names
1989570 - CVE-2021-33197 golang: net/http/httputil: ReverseProxy forwards connection headers if first one is empty
1989575 - CVE-2021-33198 golang: math/big.Rat: may cause a panic or an unrecoverable fatal error if passed inputs with very large exponents
1992955 - CVE-2021-3703 serverless: incomplete fix for CVE-2021-27918 / CVE-2021-31525 / CVE-2021-33196
5. OpenSSL Security Advisory [25 March 2021]
CA certificate check bypass with X509_V_FLAG_X509_STRICT (CVE-2021-3450)
Severity: High
The X509_V_FLAG_X509_STRICT flag enables additional security checks of the certificates present in a certificate chain. It is not set by default.
Starting from OpenSSL version 1.1.1h a check to disallow certificates in the chain that have explicitly encoded elliptic curve parameters was added as an additional strict check.
An error in the implementation of this check meant that the result of a previous check to confirm that certificates in the chain are valid CA certificates was overwritten. This effectively bypasses the check that non-CA certificates must not be able to issue other certificates.
If a "purpose" has been configured then there is a subsequent opportunity for checks that the certificate is a valid CA. All of the named "purpose" values implemented in libcrypto perform this check. Therefore, where a purpose is set the certificate chain will still be rejected even when the strict flag has been used. A purpose is set by default in libssl client and server certificate verification routines, but it can be overridden or removed by an application.
In order to be affected, an application must explicitly set the X509_V_FLAG_X509_STRICT verification flag and either not set a purpose for the certificate verification or, in the case of TLS client or server applications, override the default purpose.
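To illustrate the affected configuration, the following is a minimal Python sketch (using the standard `ssl` module, which exposes libcrypto's verification flags) of an application explicitly opting in to strict X.509 checks. The context setup here is an illustrative assumption, not code from the advisory; only applications that set this flag and also override or remove the default purpose check were exposed to CVE-2021-3450.

```python
import ssl

# Build a client context with default (safe) settings: certificate
# verification on, hostname checking on, and a default purpose set.
ctx = ssl.create_default_context()

# Explicitly enable the strict X.509 checks. This corresponds to
# OpenSSL's X509_V_FLAG_X509_STRICT flag described in the advisory;
# it is not set by default.
ctx.verify_flags |= ssl.VERIFY_X509_STRICT

# Confirm the flag is now part of the context's verification flags.
print(bool(ctx.verify_flags & ssl.VERIFY_X509_STRICT))
```

Note that `create_default_context()` keeps the default purpose check in place, so even with the strict flag set, a stock Python client context would still have rejected the malicious chain; the bypass required an application to additionally override that purpose.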
This issue was reported to OpenSSL on 18th March 2021 by Benjamin Kaduk from Akamai and was discovered by Xiang Ding and others at Akamai. The fix was developed by Tomáš Mráz.
This issue was reported to OpenSSL on 17th March 2021 by Nokia. The fix was developed by Peter Kästle and Samuel Sapalski from Nokia.
Note
OpenSSL 1.0.2 is out of support and no longer receiving public updates. Extended support is available for premium support customers: https://www.openssl.org/support/contracts.html
OpenSSL 1.1.0 is out of support and no longer receiving updates of any kind.
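Since the fix shipped in 1.1.1k and the vulnerable range begins at 1.1.1h, one quick way to see which OpenSSL build a runtime actually links against is via the `ssl` module's standard version constants. This is a hedged sketch; the numeric comparison is only meaningful for genuine OpenSSL builds (LibreSSL and OpenSSL 3.x use different version-number layouts).

```python
import ssl

# OPENSSL_VERSION is the human-readable banner of the linked library;
# OPENSSL_VERSION_NUMBER is its packed numeric form (MNNFFPPS layout
# in the 1.x series), useful for programmatic comparisons.
print(ssl.OPENSSL_VERSION)

# 0x101010bf encodes the 1.1.1k release in the 1.x numbering scheme;
# 1.x builds at or above it contain the CVE-2021-3449/3450 fixes.
fixed = ssl.OPENSSL_VERSION_NUMBER >= 0x101010BF
print("at or above 1.1.1k:", fixed)
```

For production exposure assessment, vendor advisories and package changelogs remain authoritative, since distributions often backport fixes without bumping the upstream version number.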
References
URL for this Security Advisory: https://www.openssl.org/news/secadv/20210325.txt
Note: the online version of the advisory may be updated with additional details over time.
For details of OpenSSL severity classifications please see: https://www.openssl.org/policies/secpolicy.html
Show details on source website{ "@context": { "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#", "affected_products": { "@id": "https://www.variotdbs.pl/ref/affected_products" }, "configurations": { "@id": "https://www.variotdbs.pl/ref/configurations" }, "credits": { "@id": "https://www.variotdbs.pl/ref/credits" }, "cvss": { "@id": "https://www.variotdbs.pl/ref/cvss/" }, "description": { "@id": "https://www.variotdbs.pl/ref/description/" }, "external_ids": { "@id": "https://www.variotdbs.pl/ref/external_ids/" }, "iot": { "@id": "https://www.variotdbs.pl/ref/iot/" }, "iot_taxonomy": { "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/" }, "patch": { "@id": "https://www.variotdbs.pl/ref/patch/" }, "problemtype_data": { "@id": "https://www.variotdbs.pl/ref/problemtype_data/" }, "references": { "@id": "https://www.variotdbs.pl/ref/references/" }, "sources": { "@id": "https://www.variotdbs.pl/ref/sources/" }, "sources_release_date": { "@id": "https://www.variotdbs.pl/ref/sources_release_date/" }, "sources_update_date": { "@id": "https://www.variotdbs.pl/ref/sources_update_date/" }, "threat_type": { "@id": "https://www.variotdbs.pl/ref/threat_type/" }, "title": { "@id": "https://www.variotdbs.pl/ref/title/" }, "type": { "@id": "https://www.variotdbs.pl/ref/type/" } }, "@id": "https://www.variotdbs.pl/vuln/VAR-202103-1464", "affected_products": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/affected_products#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "model": "nessus", "scope": "lte", "trust": 1.0, "vendor": "tenable", "version": "8.13.1" }, { "model": "peoplesoft enterprise peopletools", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "8.57" }, { "model": "mysql workbench", "scope": "lte", "trust": 1.0, "vendor": "oracle", "version": "8.0.23" }, { "model": "scalance s627-2m", "scope": 
"gte", "trust": 1.0, "vendor": "siemens", "version": "4.1" }, { "model": "sonicos", "scope": "eq", "trust": 1.0, "vendor": "sonicwall", "version": "7.0.1.0" }, { "model": "node.js", "scope": "lte", "trust": 1.0, "vendor": "nodejs", "version": "12.12.0" }, { "model": "scalance xr-300wg", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "4.3" }, { "model": "mysql server", "scope": "gte", "trust": 1.0, "vendor": "oracle", "version": "8.0.15" }, { "model": "quantum security management", "scope": "eq", "trust": 1.0, "vendor": "checkpoint", "version": "r81" }, { "model": "scalance xp-200", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "4.3" }, { "model": "node.js", "scope": "gte", "trust": 1.0, "vendor": "nodejs", "version": "10.0.0" }, { "model": "sinamics connect 300", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": "scalance xc-200", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "4.3" }, { "model": "simatic net cp 1543-1", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "3.0" }, { "model": "simatic net cp 1542sp-1 irc", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "2.1" }, { "model": "node.js", "scope": "gte", "trust": 1.0, "vendor": "nodejs", "version": "12.13.0" }, { "model": "scalance xr526-8c", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "6.4" }, { "model": "linux", "scope": "eq", "trust": 1.0, "vendor": "debian", "version": "10.0" }, { "model": "simatic net cp1243-7 lte us", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "3.1" }, { "model": "ontap select deploy administration utility", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "scalance w700", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "6.5" }, { "model": "sinec infrastructure network services", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "1.0.1.1" }, { "model": "scalance xr552-12", "scope": "lt", "trust": 
1.0, "vendor": "siemens", "version": "6.4" }, { "model": "simatic mv500", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": "scalance xm-400", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "6.4" }, { "model": "scalance s615", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "6.2" }, { "model": "tia administrator", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": "log correlation engine", "scope": "lt", "trust": 1.0, "vendor": "tenable", "version": "6.0.9" }, { "model": "multi-domain management", "scope": "eq", "trust": 1.0, "vendor": "checkpoint", "version": "r80.40" }, { "model": "capture client", "scope": "eq", "trust": 1.0, "vendor": "sonicwall", "version": "3.5" }, { "model": "simatic cloud connect 7", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": null }, { "model": "web gateway cloud service", "scope": "eq", "trust": 1.0, "vendor": "mcafee", "version": "8.2.19" }, { "model": "simatic pcs neo", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": "simatic rf186c", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": "simatic s7-1200 cpu 1212c", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": "simatic cp 1242-7 gprs v2", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": null }, { "model": "simatic s7-1200 cpu 1215 fc", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": "simatic logon", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "1.6.0.2" }, { "model": "freebsd", "scope": "eq", "trust": 1.0, "vendor": "freebsd", "version": "12.2" }, { "model": "simatic logon", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "1.5" }, { "model": "simatic hmi basic panels 2nd generation", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": "oncommand workflow automation", "scope": "eq", "trust": 1.0, 
"vendor": "netapp", "version": null }, { "model": "quantum security gateway", "scope": "eq", "trust": 1.0, "vendor": "checkpoint", "version": "r81" }, { "model": "node.js", "scope": "lt", "trust": 1.0, "vendor": "nodejs", "version": "14.16.1" }, { "model": "primavera unifier", "scope": "lte", "trust": 1.0, "vendor": "oracle", "version": "17.12" }, { "model": "simatic rf188ci", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": "oncommand insight", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "sinec nms", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "1.0" }, { "model": "node.js", "scope": "gte", "trust": 1.0, "vendor": "nodejs", "version": "14.0.0" }, { "model": "nessus network monitor", "scope": "eq", "trust": 1.0, "vendor": "tenable", "version": "5.12.0" }, { "model": "nessus network monitor", "scope": "eq", "trust": 1.0, "vendor": "tenable", "version": "5.12.1" }, { "model": "scalance xr524-8c", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "6.4" }, { "model": "tim 1531 irc", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "2.0" }, { "model": "sinec pni", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": null }, { "model": "storagegrid", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "node.js", "scope": "lt", "trust": 1.0, "vendor": "nodejs", "version": "12.22.1" }, { "model": "communications communications policy management", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "12.6.0.0.0" }, { "model": "scalance s612", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "4.1" }, { "model": "simatic hmi ktp mobile panels", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": "node.js", "scope": "gte", "trust": 1.0, "vendor": "nodejs", "version": "15.0.0" }, { "model": "sinumerik opc ua server", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": 
"web gateway cloud service", "scope": "eq", "trust": 1.0, "vendor": "mcafee", "version": "9.2.10" }, { "model": "e-series performance analyzer", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "simatic cp 1242-7 gprs v2", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "3.1" }, { "model": "tenable.sc", "scope": "lte", "trust": 1.0, "vendor": "tenable", "version": "5.17.0" }, { "model": "zfs storage appliance kit", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "8.8" }, { "model": "node.js", "scope": "lte", "trust": 1.0, "vendor": "nodejs", "version": "10.24.0" }, { "model": "jd edwards world security", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "a9.4" }, { "model": "node.js", "scope": "lte", "trust": 1.0, "vendor": "nodejs", "version": "10.12.0" }, { "model": "simatic wincc runtime advanced", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": "simatic process historian opc ua server", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "2019" }, { "model": "simatic net cp 1545-1", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "1.0" }, { "model": "ruggedcom rcm1224", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "6.2" }, { "model": "fedora", "scope": "eq", "trust": 1.0, "vendor": "fedoraproject", "version": "34" }, { "model": "web gateway", "scope": "eq", "trust": 1.0, "vendor": "mcafee", "version": "8.2.19" }, { "model": "simatic cloud connect 7", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "1.1" }, { "model": "scalance xr528-6m", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "6.4" }, { "model": "node.js", "scope": "lte", "trust": 1.0, "vendor": "nodejs", "version": "14.14.0" }, { "model": "cloud volumes ontap mediator", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "multi-domain management", "scope": "eq", "trust": 1.0, "vendor": "checkpoint", "version": 
"r81" }, { "model": "scalance xf-200ba", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "4.3" }, { "model": "simatic s7-1500 cpu 1518-4 pn\\/dp mfp", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": "nessus network monitor", "scope": "eq", "trust": 1.0, "vendor": "tenable", "version": "5.13.0" }, { "model": "primavera unifier", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "19.12" }, { "model": "scalance m-800", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "6.2" }, { "model": "openssl", "scope": "lt", "trust": 1.0, "vendor": "openssl", "version": "1.1.1k" }, { "model": "node.js", "scope": "gte", "trust": 1.0, "vendor": "nodejs", "version": "14.15.0" }, { "model": "snapcenter", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "scalance lpe9403", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": "simatic hmi comfort outdoor panels", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": "scalance sc-600", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "2.0" }, { "model": "quantum security management", "scope": "eq", "trust": 1.0, "vendor": "checkpoint", "version": "r80.40" }, { "model": "active iq unified manager", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "graalvm", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "19.3.5" }, { "model": "mysql server", "scope": "lte", "trust": 1.0, "vendor": "oracle", "version": "8.0.23" }, { "model": "peoplesoft enterprise peopletools", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "8.59" }, { "model": "peoplesoft enterprise peopletools", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "8.58" }, { "model": "sma100", "scope": "lt", "trust": 1.0, "vendor": "sonicwall", "version": "10.2.1.0-17sv" }, { "model": "simatic rf186ci", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" 
}, { "model": "primavera unifier", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "20.12" }, { "model": "sinema server", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "14.0" }, { "model": "simatic net cp 1243-1", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "3.1" }, { "model": "simatic s7-1200 cpu 1217c", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": "primavera unifier", "scope": "gte", "trust": 1.0, "vendor": "oracle", "version": "17.7" }, { "model": "simatic rf185c", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": "tenable.sc", "scope": "gte", "trust": 1.0, "vendor": "tenable", "version": "5.13.0" }, { "model": "santricity smi-s provider", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "primavera unifier", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "21.12" }, { "model": "simatic s7-1200 cpu 1211c", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": "scalance s623", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "4.1" }, { "model": "simatic s7-1200 cpu 1215c", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": "web gateway", "scope": "eq", "trust": 1.0, "vendor": "mcafee", "version": "9.2.10" }, { "model": "graalvm", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "21.0.0.2" }, { "model": "simatic rf166c", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": "tim 1531 irc", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "2.2" }, { "model": "node.js", "scope": "gte", "trust": 1.0, "vendor": "nodejs", "version": "10.13.0" }, { "model": "simatic net cp 1543sp-1", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "2.1" }, { "model": "web gateway", "scope": "eq", "trust": 1.0, "vendor": "mcafee", "version": "10.1.1" }, { "model": "scalance w1700", "scope": "gte", "trust": 1.0, 
"vendor": "siemens", "version": "2.0" }, { "model": "mysql server", "scope": "lte", "trust": 1.0, "vendor": "oracle", "version": "5.7.33" }, { "model": "simatic s7-1200 cpu 1212fc", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": "simatic net cp1243-7 lte eu", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "3.1" }, { "model": "simatic wincc telecontrol", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": null }, { "model": "simatic net cp 1243-8 irc", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "3.1" }, { "model": "simatic pcs 7 telecontrol", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": "node.js", "scope": "lt", "trust": 1.0, "vendor": "nodejs", "version": "15.14.0" }, { "model": "simatic rf360r", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": "linux", "scope": "eq", "trust": 1.0, "vendor": "debian", "version": "9.0" }, { "model": "jd edwards enterpriseone tools", "scope": "lt", "trust": 1.0, "vendor": "oracle", "version": "9.2.6.0" }, { "model": "sma100", "scope": "gte", "trust": 1.0, "vendor": "sonicwall", "version": "10.2.0.0" }, { "model": "scalance xb-200", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "4.3" }, { "model": "openssl", "scope": "gte", "trust": 1.0, "vendor": "openssl", "version": "1.1.1" }, { "model": "scalance s602", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "4.1" }, { "model": "node.js", "scope": "gte", "trust": 1.0, "vendor": "nodejs", "version": "12.0.0" }, { "model": "nessus network monitor", "scope": "eq", "trust": 1.0, "vendor": "tenable", "version": "5.11.0" }, { "model": "quantum security gateway", "scope": "eq", "trust": 1.0, "vendor": "checkpoint", "version": "r80.40" }, { "model": "secure global desktop", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "5.6" }, { "model": "nessus network monitor", "scope": "eq", "trust": 1.0, "vendor": "tenable", 
"version": "5.11.1" }, { "model": "essbase", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "21.2" }, { "model": "graalvm", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "20.3.1.2" }, { "model": "mysql connectors", "scope": "lte", "trust": 1.0, "vendor": "oracle", "version": "8.0.23" }, { "model": "secure backup", "scope": "lt", "trust": 1.0, "vendor": "oracle", "version": "18.1.0.1.0" }, { "model": "simatic net cp 1543-1", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "2.2" }, { "model": "simatic s7-1200 cpu 1214 fc", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": "web gateway cloud service", "scope": "eq", "trust": 1.0, "vendor": "mcafee", "version": "10.1.1" }, { "model": "simatic rf188c", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": "simatic s7-1200 cpu 1214c", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "*" }, { "model": "enterprise manager for storage management", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "13.4.0.0" }, { "model": "simatic pdm", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "9.1.0.7" }, { "model": "hitachi ops center analyzer viewpoint", "scope": null, "trust": 0.8, "vendor": "\u65e5\u7acb", "version": null }, { "model": "storagegrid", "scope": null, "trust": 0.8, "vendor": "netapp", "version": null }, { "model": "ontap select deploy administration utility", "scope": null, "trust": 0.8, "vendor": "netapp", "version": null }, { "model": "quantum security gateway", "scope": null, "trust": 0.8, "vendor": "\u30c1\u30a7\u30c3\u30af \u30dd\u30a4\u30f3\u30c8 \u30bd\u30d5\u30c8\u30a6\u30a7\u30a2 \u30c6\u30af\u30ce\u30ed\u30b8\u30fc\u30ba", "version": null }, { "model": "tenable.sc", "scope": null, "trust": 0.8, "vendor": "tenable", "version": null }, { "model": "nessus", "scope": null, "trust": 0.8, "vendor": "tenable", "version": null }, { "model": "oncommand workflow automation", "scope": 
null, "trust": 0.8, "vendor": "netapp", "version": null }, { "model": "freebsd", "scope": null, "trust": 0.8, "vendor": "freebsd", "version": null }, { "model": "hitachi ops center common services", "scope": null, "trust": 0.8, "vendor": "\u65e5\u7acb", "version": null }, { "model": "santricity smi-s provider", "scope": null, "trust": 0.8, "vendor": "netapp", "version": null }, { "model": "mcafee web gateway \u30bd\u30d5\u30c8\u30a6\u30a7\u30a2", "scope": null, "trust": 0.8, "vendor": "\u30de\u30ab\u30d5\u30a3\u30fc", "version": null }, { "model": "e-series performance analyzer", "scope": null, "trust": 0.8, "vendor": "netapp", "version": null }, { "model": "fedora", "scope": null, "trust": 0.8, "vendor": "fedora", "version": null }, { "model": "jp1/file transmission server/ftp", "scope": null, "trust": 0.8, "vendor": "\u65e5\u7acb", "version": null }, { "model": "quantum security management", "scope": null, "trust": 0.8, "vendor": "\u30c1\u30a7\u30c3\u30af \u30dd\u30a4\u30f3\u30c8 \u30bd\u30d5\u30c8\u30a6\u30a7\u30a2 \u30c6\u30af\u30ce\u30ed\u30b8\u30fc\u30ba", "version": null }, { "model": "openssl", "scope": null, "trust": 0.8, "vendor": "openssl", "version": null }, { "model": "cloud volumes ontap \u30e1\u30c7\u30a3\u30a8\u30fc\u30bf", "scope": null, "trust": 0.8, "vendor": "netapp", "version": null }, { "model": "jp1/base", "scope": null, "trust": 0.8, "vendor": "\u65e5\u7acb", "version": null }, { "model": "gnu/linux", "scope": null, "trust": 0.8, "vendor": "debian", "version": null }, { "model": "web gateway cloud service", "scope": null, "trust": 0.8, "vendor": "\u30de\u30ab\u30d5\u30a3\u30fc", "version": null }, { "model": "multi-domain management", "scope": null, "trust": 0.8, "vendor": "\u30c1\u30a7\u30c3\u30af \u30dd\u30a4\u30f3\u30c8 \u30bd\u30d5\u30c8\u30a6\u30a7\u30a2 \u30c6\u30af\u30ce\u30ed\u30b8\u30fc\u30ba", "version": null } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2021-001383" }, { "db": "NVD", "id": "CVE-2021-3449" } ] }, 
"configurations": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/configurations#", "children": { "@container": "@list" }, "cpe_match": { "@container": "@list" }, "data": { "@container": "@list" }, "nodes": { "@container": "@list" } }, "data": [ { "CVE_data_version": "4.0", "nodes": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:openssl:openssl:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "1.1.1k", "versionStartIncluding": "1.1.1", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:debian:debian_linux:9.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:debian:debian_linux:10.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:freebsd:freebsd:12.2:p1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:freebsd:freebsd:12.2:p2:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:freebsd:freebsd:12.2:-:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:netapp:santricity_smi-s_provider:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:snapcenter:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:oncommand_workflow_automation:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:storagegrid:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:oncommand_insight:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:ontap_select_deploy_administration_utility:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:active_iq_unified_manager:-:*:*:*:*:vmware_vsphere:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": 
"cpe:2.3:a:netapp:cloud_volumes_ontap_mediator:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:e-series_performance_analyzer:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:tenable:tenable.sc:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "5.17.0", "versionStartIncluding": "5.13.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:tenable:nessus:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "8.13.1", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:tenable:nessus_network_monitor:5.11.1:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:tenable:nessus_network_monitor:5.12.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:tenable:nessus_network_monitor:5.12.1:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:tenable:nessus_network_monitor:5.13.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:tenable:nessus_network_monitor:5.11.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:tenable:log_correlation_engine:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "6.0.9", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:34:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:mcafee:web_gateway_cloud_service:10.1.1:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:mcafee:web_gateway_cloud_service:9.2.10:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:mcafee:web_gateway_cloud_service:8.2.19:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:mcafee:web_gateway:10.1.1:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": 
"cpe:2.3:a:mcafee:web_gateway:9.2.10:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:mcafee:web_gateway:8.2.19:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:checkpoint:quantum_security_management_firmware:r80.40:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:checkpoint:quantum_security_management_firmware:r81:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:checkpoint:quantum_security_management:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:checkpoint:multi-domain_management_firmware:r80.40:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:checkpoint:multi-domain_management_firmware:r81:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:checkpoint:multi-domain_management:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:checkpoint:quantum_security_gateway_firmware:r80.40:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:checkpoint:quantum_security_gateway_firmware:r81:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:checkpoint:quantum_security_gateway:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:oracle:peoplesoft_enterprise_peopletools:8.57:*:*:*:*:*:*:*", "cpe_name": [], 
"vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:jd_edwards_world_security:a9.4:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:primavera_unifier:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "17.12", "versionStartIncluding": "17.7", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:peoplesoft_enterprise_peopletools:8.58:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:primavera_unifier:19.12:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:enterprise_manager_for_storage_management:13.4.0.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:primavera_unifier:20.12:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:zfs_storage_appliance_kit:8.8:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:secure_global_desktop:5.6:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:graalvm:20.3.1.2:*:*:*:enterprise:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:graalvm:21.0.0.2:*:*:*:enterprise:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:graalvm:19.3.5:*:*:*:enterprise:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:mysql_server:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "8.0.23", "versionStartIncluding": "8.0.15", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:mysql_server:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "5.7.33", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:mysql_workbench:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "8.0.23", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:peoplesoft_enterprise_peopletools:8.59:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:essbase:21.2:*:*:*:*:*:*:*", "cpe_name": [], 
"vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:mysql_connectors:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "8.0.23", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:jd_edwards_enterpriseone_tools:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "9.2.6.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:primavera_unifier:21.12:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:secure_backup:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "18.1.0.1.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:communications_communications_policy_management:12.6.0.0.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:sonicwall:sma100_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "10.2.1.0-17sv", "versionStartIncluding": "10.2.0.0", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:sonicwall:sma100:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:sonicwall:capture_client:3.5:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:sonicwall:sonicos:7.0.1.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:ruggedcom_rcm1224_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionStartIncluding": "6.2", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:ruggedcom_rcm1224:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:scalance_lpe9403_firmware:*:*:*:*:*:*:*:*", 
"cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:scalance_lpe9403:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:scalance_m-800_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionStartIncluding": "6.2", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:scalance_m-800:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:scalance_s602_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionStartIncluding": "4.1", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:scalance_s602:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:scalance_s612_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionStartIncluding": "4.1", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:scalance_s612:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:scalance_s615_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionStartIncluding": "6.2", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:scalance_s615:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": 
"cpe:2.3:o:siemens:scalance_s623_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionStartIncluding": "4.1", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:scalance_s623:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:scalance_s627-2m_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionStartIncluding": "4.1", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:scalance_s627-2m:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:scalance_sc-600_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionStartIncluding": "2.0", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:scalance_sc-600:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:scalance_w700_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionStartIncluding": "6.5", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:scalance_w700:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:scalance_w1700_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionStartIncluding": "2.0", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:scalance_w1700:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], 
"operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:scalance_xb-200_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "4.3", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:scalance_xb-200:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:scalance_xc-200_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "4.3", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:scalance_xc-200:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:scalance_xf-200ba_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "4.3", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:scalance_xf-200ba:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:scalance_xm-400_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "6.4", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:scalance_xm-400:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:scalance_xp-200_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "4.3", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": 
"cpe:2.3:h:siemens:scalance_xp-200:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:scalance_xr-300wg_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "4.3", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:scalance_xr-300wg:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:scalance_xr524-8c_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "6.4", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:scalance_xr524-8c:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:scalance_xr526-8c_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "6.4", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:scalance_xr526-8c:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:scalance_xr528-6m_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "6.4", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:scalance_xr528-6m:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:scalance_xr552-12_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": 
"6.4", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:scalance_xr552-12:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_cloud_connect_7_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionStartIncluding": "1.1", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:siemens:simatic_cloud_connect_7_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_cloud_connect_7:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_cp_1242-7_gprs_v2_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionStartIncluding": "3.1", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:siemens:simatic_cp_1242-7_gprs_v2_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_cp_1242-7_gprs_v2:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_hmi_basic_panels_2nd_generation_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_hmi_basic_panels_2nd_generation:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_hmi_comfort_outdoor_panels_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], 
"operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_hmi_comfort_outdoor_panels:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_hmi_ktp_mobile_panels_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_hmi_ktp_mobile_panels:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_mv500_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_mv500:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_net_cp_1243-1_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionStartIncluding": "3.1", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_net_cp_1243-1:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_net_cp1243-7_lte_eu_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionStartIncluding": "3.1", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_net_cp1243-7_lte_eu:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": 
"cpe:2.3:o:siemens:simatic_net_cp1243-7_lte_us_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionStartIncluding": "3.1", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_net_cp1243-7_lte_us:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_net_cp_1243-8_irc_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionStartIncluding": "3.1", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_net_cp_1243-8_irc:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_net_cp_1542sp-1_irc_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionStartIncluding": "2.1", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_net_cp_1542sp-1_irc:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_net_cp_1543-1_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "3.0", "versionStartIncluding": "2.2", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_net_cp_1543-1:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_net_cp_1543sp-1_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionStartIncluding": "2.1", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": 
"cpe:2.3:h:siemens:simatic_net_cp_1543sp-1:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_net_cp_1545-1_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionStartIncluding": "1.0", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_net_cp_1545-1:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_pcs_7_telecontrol_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_pcs_7_telecontrol:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_pcs_neo_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_pcs_neo:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_pdm_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionStartIncluding": "9.1.0.7", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_pdm:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_process_historian_opc_ua_server_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionStartIncluding": "2019", 
"vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_process_historian_opc_ua_server:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_rf166c_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_rf166c:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_rf185c_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_rf185c:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_rf186c_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_rf186c:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_rf186ci_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_rf186ci:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_rf188c_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { 
"children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_rf188c:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_rf188ci_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_rf188ci:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_rf360r_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_rf360r:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_s7-1200_cpu_1211c_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_s7-1200_cpu_1211c:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_s7-1200_cpu_1212c_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_s7-1200_cpu_1212c:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_s7-1200_cpu_1212fc_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], 
"cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_s7-1200_cpu_1212fc:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_s7-1200_cpu_1214_fc_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_s7-1200_cpu_1214_fc:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_s7-1200_cpu_1214c_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_s7-1200_cpu_1214c:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_s7-1200_cpu_1215_fc_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_s7-1200_cpu_1215_fc:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_s7-1200_cpu_1215c_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true 
} ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_s7-1200_cpu_1215c:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_s7-1200_cpu_1217c_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_s7-1200_cpu_1217c:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:simatic_s7-1500_cpu_1518-4_pn\\/dp_mfp_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:simatic_s7-1500_cpu_1518-4_pn\\/dp_mfp:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:sinamics_connect_300_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:sinamics_connect_300:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:tim_1531_irc_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "2.2", "versionStartIncluding": "2.0", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:tim_1531_irc:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [], "cpe_match": [ { "cpe23Uri": 
"cpe:2.3:a:siemens:simatic_wincc_runtime_advanced:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinema_server:14.0:sp2_update1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinema_server:14.0:sp1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinema_server:14.0:sp2:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinema_server:14.0:-:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:simatic_logon:*:*:*:*:*:*:*:*", "cpe_name": [], "versionStartIncluding": "1.6.0.2", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:simatic_logon:1.5:sp3_update_1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:simatic_wincc_telecontrol:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_nms:1.0:sp1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_nms:1.0:-:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_pni:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:tia_administrator:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinema_server:14.0:sp2_update2:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinumerik_opc_ua_server:*:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:siemens:sinec_infrastructure_network_services:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "1.0.1.1", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:nodejs:node.js:*:*:*:*:-:*:*:*", "cpe_name": [], "versionEndIncluding": "14.14.0", "versionStartIncluding": "14.0.0", "vulnerable": true }, { 
"cpe23Uri": "cpe:2.3:a:nodejs:node.js:*:*:*:*:-:*:*:*", "cpe_name": [], "versionEndIncluding": "10.12.0", "versionStartIncluding": "10.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:nodejs:node.js:*:*:*:*:-:*:*:*", "cpe_name": [], "versionEndIncluding": "12.12.0", "versionStartIncluding": "12.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:nodejs:node.js:*:*:*:*:lts:*:*:*", "cpe_name": [], "versionEndExcluding": "14.16.1", "versionStartIncluding": "14.15.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:nodejs:node.js:*:*:*:*:lts:*:*:*", "cpe_name": [], "versionEndExcluding": "12.22.1", "versionStartIncluding": "12.13.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:nodejs:node.js:*:*:*:*:lts:*:*:*", "cpe_name": [], "versionEndIncluding": "10.24.0", "versionStartIncluding": "10.13.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:nodejs:node.js:*:*:*:*:-:*:*:*", "cpe_name": [], "versionEndExcluding": "15.14.0", "versionStartIncluding": "15.0.0", "vulnerable": true } ], "operator": "OR" } ] } ], "sources": [ { "db": "NVD", "id": "CVE-2021-3449" } ] }, "credits": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/credits#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Red Hat", "sources": [ { "db": "PACKETSTORM", "id": "163209" }, { "db": "PACKETSTORM", "id": "163257" }, { "db": "PACKETSTORM", "id": "163747" }, { "db": "PACKETSTORM", "id": "162183" }, { "db": "PACKETSTORM", "id": "166279" }, { "db": "PACKETSTORM", "id": "162197" }, { "db": "PACKETSTORM", "id": "164192" } ], "trust": 0.7 }, "cve": "CVE-2021-3449", "cvss": { "@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2" }, "cvssV3": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, "severity": { "@container": 
"@list", "@context": { "@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": "https://www.variotdbs.pl/ref/cvss/severity" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "cvssV2": [ { "acInsufInfo": false, "accessComplexity": "MEDIUM", "accessVector": "NETWORK", "authentication": "NONE", "author": "NVD", "availabilityImpact": "PARTIAL", "baseScore": 4.3, "confidentialityImpact": "NONE", "exploitabilityScore": 8.6, "impactScore": 2.9, "integrityImpact": "NONE", "obtainAllPrivilege": false, "obtainOtherPrivilege": false, "obtainUserPrivilege": false, "severity": "MEDIUM", "trust": 1.0, "userInteractionRequired": false, "vectorString": "AV:N/AC:M/Au:N/C:N/I:N/A:P", "version": "2.0" }, { "acInsufInfo": null, "accessComplexity": "Medium", "accessVector": "Network", "authentication": "None", "author": "NVD", "availabilityImpact": "Partial", "baseScore": 4.3, "confidentialityImpact": "None", "exploitabilityScore": null, "id": "CVE-2021-3449", "impactScore": null, "integrityImpact": "None", "obtainAllPrivilege": null, "obtainOtherPrivilege": null, "obtainUserPrivilege": null, "severity": "Medium", "trust": 0.9, "userInteractionRequired": null, "vectorString": "AV:N/AC:M/Au:N/C:N/I:N/A:P", "version": "2.0" }, { "accessComplexity": "MEDIUM", "accessVector": "NETWORK", "authentication": "NONE", "author": "VULHUB", "availabilityImpact": "PARTIAL", "baseScore": 4.3, "confidentialityImpact": "NONE", "exploitabilityScore": 8.6, "id": "VHN-388130", "impactScore": 2.9, "integrityImpact": "NONE", "severity": "MEDIUM", "trust": 0.1, "vectorString": "AV:N/AC:M/AU:N/C:N/I:N/A:P", "version": "2.0" } ], "cvssV3": [ { "attackComplexity": "HIGH", "attackVector": "NETWORK", "author": "NVD", "availabilityImpact": "HIGH", "baseScore": 5.9, "baseSeverity": "MEDIUM", "confidentialityImpact": "NONE", "exploitabilityScore": 2.2, "impactScore": 3.6, "integrityImpact": 
"NONE", "privilegesRequired": "NONE", "scope": "UNCHANGED", "trust": 1.0, "userInteraction": "NONE", "vectorString": "CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:N/I:N/A:H", "version": "3.1" }, { "attackComplexity": "High", "attackVector": "Network", "author": "NVD", "availabilityImpact": "High", "baseScore": 5.9, "baseSeverity": "Medium", "confidentialityImpact": "None", "exploitabilityScore": null, "id": "CVE-2021-3449", "impactScore": null, "integrityImpact": "None", "privilegesRequired": "None", "scope": "Unchanged", "trust": 0.8, "userInteraction": "None", "vectorString": "CVSS:3.0/AV:N/AC:H/PR:N/UI:N/S:U/C:N/I:N/A:H", "version": "3.0" } ], "severity": [ { "author": "NVD", "id": "CVE-2021-3449", "trust": 1.8, "value": "MEDIUM" }, { "author": "VULHUB", "id": "VHN-388130", "trust": 0.1, "value": "MEDIUM" }, { "author": "VULMON", "id": "CVE-2021-3449", "trust": 0.1, "value": "MEDIUM" } ] } ], "sources": [ { "db": "VULHUB", "id": "VHN-388130" }, { "db": "VULMON", "id": "CVE-2021-3449" }, { "db": "JVNDB", "id": "JVNDB-2021-001383" }, { "db": "NVD", "id": "CVE-2021-3449" } ] }, "description": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "An OpenSSL TLS server may crash if sent a maliciously crafted renegotiation ClientHello message from a client. If a TLSv1.2 renegotiation ClientHello omits the signature_algorithms extension (where it was present in the initial ClientHello), but includes a signature_algorithms_cert extension then a NULL pointer dereference will result, leading to a crash and a denial of service attack. A server is only vulnerable if it has TLSv1.2 and renegotiation enabled (which is the default configuration). OpenSSL TLS clients are not impacted by this issue. All OpenSSL 1.1.1 versions are affected by this issue. Users of these versions should upgrade to OpenSSL 1.1.1k. OpenSSL 1.0.2 is not impacted by this issue. 
Fixed in OpenSSL 1.1.1k (Affected 1.1.1-1.1.1j). OpenSSL is an open source general encryption library of the Openssl team that can implement the Secure Sockets Layer (SSLv2/v3) and Transport Layer Security (TLSv1) protocols. The product supports a variety of encryption algorithms, including symmetric ciphers, hash algorithms, secure hash algorithms, etc. On March 25, 2021, the OpenSSL Project released a security advisory, OpenSSL Security Advisory [25 March 2021], that disclosed two vulnerabilities. \nExploitation of these vulnerabilities could allow a malicious user to use a valid non-certificate authority (CA) certificate to act as a CA and sign a certificate for an arbitrary organization, user or device, or to cause a denial of service (DoS) condition. \nThis advisory is available at the following link:tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-openssl-2021-GHY28dJd. In addition to persistent storage, Red Hat\nOpenShift Container Storage provisions a multicloud data management service\nwith an S3 compatible API. \n\nSecurity Fix(es):\n\n* NooBaa: noobaa-operator leaking RPC AuthToken into log files\n(CVE-2021-3528)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, and other related information, refer to the CVE page(s) listed in\nthe References section. \n\nBug Fix(es):\n\n* Currently, a newly restored PVC cannot be mounted if some of the\nOpenShift Container Platform nodes are running on a version of Red Hat\nEnterprise Linux which is less than 8.2, and the snapshot from which the\nPVC was restored is deleted. \nWorkaround: Do not delete the snapshot from which the PVC was restored\nuntil the restored PVC is deleted. (BZ#1962483)\n\n* Previously, the default backingstore was not created on AWS S3 when\nOpenShift Container Storage was deployed, due to incorrect identification\nof AWS S3. With this update, the default backingstore gets created when\nOpenShift Container Storage is deployed on AWS S3. 
(BZ#1927307)\n\n* Previously, log messages were printed to the endpoint pod log even if the\ndebug option was not set. With this update, the log messages are printed to\nthe endpoint pod log only when the debug option is set. (BZ#1938106)\n\n* Previously, the PVCs could not be provisioned as the `rook-ceph-mds` did\nnot register the pod IP on the monitor servers, and hence every mount on\nthe filesystem timed out, resulting in CephFS volume provisioning failure. \nWith this update, an argument `--public-addr=podIP` is added to the MDS pod\nwhen the host network is not enabled, and hence the CephFS volume\nprovisioning does not fail. (BZ#1949558)\n\n* Previously, OpenShift Container Storage 4.2 clusters were not updated\nwith the correct cache value, and hence MDSs in standby-replay might report\nan oversized cache, as rook did not apply the `mds_cache_memory_limit`\nargument during upgrades. With this update, the `mds_cache_memory_limit`\nargument is applied during upgrades and the mds daemon operates normally. \n(BZ#1951348)\n\n* Previously, the coredumps were not generated in the correct location as\nrook was setting the config option `log_file` to an empty string since\nlogging happened on stdout and not on the files, and hence Ceph read the\nvalue of the `log_file` to build the dump path. With this update, rook does\nnot set the `log_file` and keeps Ceph\u0027s internal default, and hence the\ncoredumps are generated in the correct location and are accessible under\n`/var/log/ceph/`. (BZ#1938049)\n\n* Previously, Ceph became inaccessible, as the mons lose quorum if a mon\npod was drained while another mon was failing over. With this update,\nvoluntary mon drains are prevented while a mon is failing over, and hence\nCeph does not become inaccessible. (BZ#1946573)\n\n* Previously, the mon quorum was at risk, as the operator could erroneously\nremove the new mon if the operator was restarted during a mon failover. 
\nWith this update, the operator completes the same mon failover after the\noperator is restarted, and hence the mon quorum is more reliable in the\nnode drains and mon failover scenarios. Solution:\n\nBefore applying this update, make sure all previously released errata\nrelevant to your system have been applied. \n\nFor details on how to apply this update, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n1938106 - [GSS][RFE]Reduce debug level for logs of Nooba Endpoint pod\n1950915 - XSS Vulnerability with Noobaa version 5.5.0-3bacc6b\n1951348 - [GSS][CephFS] health warning \"MDS cache is too large (3GB/1GB); 0 inodes in use by clients, 0 stray files\" for the standby-replay\n1951600 - [4.6.z][Clone of BZ #1936545] setuid and setgid file bits are not retained after a OCS CephFS CSI restore\n1955601 - CVE-2021-3528 NooBaa: noobaa-operator leaking RPC AuthToken into log files\n1957189 - [Rebase] Use RHCS4.2z1 container image with OCS 4..6.5[may require doc update for external mode min supported RHCS version]\n1959980 - When a node is being drained, increase the mon failover timeout to prevent unnecessary mon failover\n1959983 - [GSS][mon] rook-operator scales mons to 4 after healthCheck timeout\n1962483 - [RHEL7][RBD][4.6.z clone] FailedMount error when using restored PVC on app pod\n\n5. \n\nBug Fix(es):\n\n* WMCO patch pub-key-hash annotation to Linux node (BZ#1945248)\n\n* LoadBalancer Service type with invalid external loadbalancer IP breaks\nthe datapath (BZ#1952917)\n\n* Telemetry info not completely available to identify windows nodes\n(BZ#1955319)\n\n* WMCO incorrectly shows node as ready after a failed configuration\n(BZ#1956412)\n\n* kube-proxy service terminated unexpectedly after recreated LB service\n(BZ#1963263)\n\n3. 
Solution:\n\nFor Windows Machine Config Operator upgrades, see the following\ndocumentation:\n\nhttps://docs.openshift.com/container-platform/4.7/windows_containers/window\ns-node-upgrades.html\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n1945248 - WMCO patch pub-key-hash annotation to Linux node\n1946538 - CVE-2021-25736 kubernetes: LoadBalancer Service type don\u0027t create a HNS policy for empty or invalid external loadbalancer IP, what could lead to MITM\n1952917 - LoadBalancer Service type with invalid external loadbalancer IP breaks the datapath\n1955319 - Telemetry info not completely available to identify windows nodes\n1956412 - WMCO incorrectly shows node as ready after a failed configuration\n1963263 - kube-proxy service terminated unexpectedly after recreated LB service\n\n5. Description:\n\nRed Hat Advanced Cluster Management for Kubernetes 2.3.0 images\n\nRed Hat Advanced Cluster Management for Kubernetes provides the\ncapabilities to address common challenges that administrators and site\nreliability engineers face as they work across a range of public and\nprivate cloud environments. Clusters and applications are all visible and\nmanaged from a single console\u2014with security policy built in. 
See\nthe following Release Notes documentation, which will be updated shortly\nfor this release, for additional details about this release:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_mana\ngement_for_kubernetes/2.3/html/release_notes/\n\nSecurity:\n\n* fastify-reply-from: crafted URL allows prefix scape of the proxied\nbackend service (CVE-2021-21321)\n\n* fastify-http-proxy: crafted URL allows prefix scape of the proxied\nbackend service (CVE-2021-21322)\n\n* nodejs-netmask: improper input validation of octal input data\n(CVE-2021-28918)\n\n* redis: Integer overflow via STRALGO LCS command (CVE-2021-29477)\n\n* redis: Integer overflow via COPY command for large intsets\n(CVE-2021-29478)\n\n* nodejs-glob-parent: Regular expression denial of service (CVE-2020-28469)\n\n* nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions\n(CVE-2020-28500)\n\n* golang.org/x/text: Panic in language.ParseAcceptLanguage while parsing\n- -u- extension (CVE-2020-28851)\n\n* golang.org/x/text: Panic in language.ParseAcceptLanguage while processing\nbcp47 tag (CVE-2020-28852)\n\n* nodejs-ansi_up: XSS due to insufficient URL sanitization (CVE-2021-3377)\n\n* oras: zip-slip vulnerability via oras-pull (CVE-2021-21272)\n\n* redis: integer overflow when configurable limit for maximum supported\nbulk input size is too big on 32-bit platforms (CVE-2021-21309)\n\n* nodejs-lodash: command injection via template (CVE-2021-23337)\n\n* nodejs-hosted-git-info: Regular Expression denial of service via\nshortcutMatch in fromUrl() (CVE-2021-23362)\n\n* browserslist: parsing of invalid queries could result in Regular\nExpression Denial of Service (ReDoS) (CVE-2021-23364)\n\n* nodejs-postcss: Regular expression denial of service during source map\nparsing (CVE-2021-23368)\n\n* nodejs-handlebars: Remote code execution when compiling untrusted compile\ntemplates with strict:true option (CVE-2021-23369)\n\n* nodejs-postcss: ReDoS via getAnnotationURL() and 
loadAnnotation() in\nlib/previous-map.js (CVE-2021-23382)\n\n* nodejs-handlebars: Remote code execution when compiling untrusted compile\ntemplates with compat:true option (CVE-2021-23383)\n\n* openssl: integer overflow in CipherUpdate (CVE-2021-23840)\n\n* openssl: NULL pointer dereference in X509_issuer_and_serial_hash()\n(CVE-2021-23841)\n\n* nodejs-ua-parser-js: ReDoS via malicious User-Agent header\n(CVE-2021-27292)\n\n* grafana: snapshot feature allow an unauthenticated remote attacker to\ntrigger a DoS via a remote API call (CVE-2021-27358)\n\n* nodejs-is-svg: ReDoS via malicious string (CVE-2021-28092)\n\n* nodejs-netmask: incorrectly parses an IP address that has octal integer\nwith invalid character (CVE-2021-29418)\n\n* ulikunitz/xz: Infinite loop in readUvarint allows for denial of service\n(CVE-2021-29482)\n\n* normalize-url: ReDoS for data URLs (CVE-2021-33502)\n\n* nodejs-trim-newlines: ReDoS in .end() method (CVE-2021-33623)\n\n* nodejs-path-parse: ReDoS via splitDeviceRe, splitTailRe and splitPathRe\n(CVE-2021-23343)\n\n* html-parse-stringify: Regular Expression DoS (CVE-2021-23346)\n\n* openssl: incorrect SSLv2 rollback protection (CVE-2021-23839)\n\nFor more details about the security issues, including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npages listed in the References section. \n\nBugs:\n\n* RFE Make the source code for the endpoint-metrics-operator public (BZ#\n1913444)\n\n* cluster became offline after apiserver health check (BZ# 1942589)\n\n3. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1913333 - CVE-2020-28851 golang.org/x/text: Panic in language.ParseAcceptLanguage while parsing -u- extension\n1913338 - CVE-2020-28852 golang.org/x/text: Panic in language.ParseAcceptLanguage while processing bcp47 tag\n1913444 - RFE Make the source code for the endpoint-metrics-operator public\n1921286 - CVE-2021-21272 oras: zip-slip vulnerability via oras-pull\n1927520 - RHACM 2.3.0 images\n1928937 - CVE-2021-23337 nodejs-lodash: command injection via template\n1928954 - CVE-2020-28500 nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions\n1930294 - CVE-2021-23839 openssl: incorrect SSLv2 rollback protection\n1930310 - CVE-2021-23841 openssl: NULL pointer dereference in X509_issuer_and_serial_hash()\n1930324 - CVE-2021-23840 openssl: integer overflow in CipherUpdate\n1932634 - CVE-2021-21309 redis: integer overflow when configurable limit for maximum supported bulk input size is too big on 32-bit platforms\n1936427 - CVE-2021-3377 nodejs-ansi_up: XSS due to insufficient URL sanitization\n1939103 - CVE-2021-28092 nodejs-is-svg: ReDoS via malicious string\n1940196 - View Resource YAML option shows 404 error when reviewing a Subscription for an application\n1940613 - CVE-2021-27292 nodejs-ua-parser-js: ReDoS via malicious User-Agent header\n1941024 - CVE-2021-27358 grafana: snapshot feature allow an unauthenticated remote attacker to trigger a DoS via a remote API call\n1941675 - CVE-2021-23346 html-parse-stringify: Regular Expression DoS\n1942178 - CVE-2021-21321 fastify-reply-from: crafted URL allows prefix scape of the proxied backend service\n1942182 - CVE-2021-21322 fastify-http-proxy: crafted URL allows prefix scape of the proxied backend service\n1942589 - cluster became offline after apiserver health check\n1943208 - CVE-2021-23362 nodejs-hosted-git-info: Regular Expression denial of service via shortcutMatch in fromUrl()\n1944822 - CVE-2021-29418 nodejs-netmask: incorrectly parses an IP address that 
has octal integer with invalid character\n1944827 - CVE-2021-28918 nodejs-netmask: improper input validation of octal input data\n1945459 - CVE-2020-28469 nodejs-glob-parent: Regular expression denial of service\n1948761 - CVE-2021-23369 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with strict:true option\n1948763 - CVE-2021-23368 nodejs-postcss: Regular expression denial of service during source map parsing\n1954150 - CVE-2021-23382 nodejs-postcss: ReDoS via getAnnotationURL() and loadAnnotation() in lib/previous-map.js\n1954368 - CVE-2021-29482 ulikunitz/xz: Infinite loop in readUvarint allows for denial of service\n1955619 - CVE-2021-23364 browserslist: parsing of invalid queries could result in Regular Expression Denial of Service (ReDoS)\n1956688 - CVE-2021-23383 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with compat:true option\n1956818 - CVE-2021-23343 nodejs-path-parse: ReDoS via splitDeviceRe, splitTailRe and splitPathRe\n1957410 - CVE-2021-29477 redis: Integer overflow via STRALGO LCS command\n1957414 - CVE-2021-29478 redis: Integer overflow via COPY command for large intsets\n1964461 - CVE-2021-33502 normalize-url: ReDoS for data URLs\n1966615 - CVE-2021-33623 nodejs-trim-newlines: ReDoS in .end() method\n1968122 - clusterdeployment fails because hiveadmission sc does not have correct permissions\n1972703 - Subctl fails to join cluster, since it cannot auto-generate a valid cluster id\n1983131 - Defragmenting an etcd member doesn\u0027t reduce the DB size (7.5GB) on a setup with ~1000 spoke clusters\n\n5. It is comprised of the Apache\nTomcat Servlet container, JBoss HTTP Connector (mod_cluster), the\nPicketLink Vault extension for Apache Tomcat, and the Tomcat Native\nlibrary. 
-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n=====================================================================\n Red Hat Security Advisory\n\nSynopsis: Moderate: OpenShift Container Platform 4.10.3 security update\nAdvisory ID: RHSA-2022:0056-01\nProduct: Red Hat OpenShift Enterprise\nAdvisory URL: https://access.redhat.com/errata/RHSA-2022:0056\nIssue date: 2022-03-10\nCVE Names: CVE-2014-3577 CVE-2016-10228 CVE-2017-14502 \n CVE-2018-20843 CVE-2018-1000858 CVE-2019-8625 \n CVE-2019-8710 CVE-2019-8720 CVE-2019-8743 \n CVE-2019-8764 CVE-2019-8766 CVE-2019-8769 \n CVE-2019-8771 CVE-2019-8782 CVE-2019-8783 \n CVE-2019-8808 CVE-2019-8811 CVE-2019-8812 \n CVE-2019-8813 CVE-2019-8814 CVE-2019-8815 \n CVE-2019-8816 CVE-2019-8819 CVE-2019-8820 \n CVE-2019-8823 CVE-2019-8835 CVE-2019-8844 \n CVE-2019-8846 CVE-2019-9169 CVE-2019-13050 \n CVE-2019-13627 CVE-2019-14889 CVE-2019-15903 \n CVE-2019-19906 CVE-2019-20454 CVE-2019-20807 \n CVE-2019-25013 CVE-2020-1730 CVE-2020-3862 \n CVE-2020-3864 CVE-2020-3865 CVE-2020-3867 \n CVE-2020-3868 CVE-2020-3885 CVE-2020-3894 \n CVE-2020-3895 CVE-2020-3897 CVE-2020-3899 \n CVE-2020-3900 CVE-2020-3901 CVE-2020-3902 \n CVE-2020-8927 CVE-2020-9802 CVE-2020-9803 \n CVE-2020-9805 CVE-2020-9806 CVE-2020-9807 \n CVE-2020-9843 CVE-2020-9850 CVE-2020-9862 \n CVE-2020-9893 CVE-2020-9894 CVE-2020-9895 \n CVE-2020-9915 CVE-2020-9925 CVE-2020-9952 \n CVE-2020-10018 CVE-2020-11793 CVE-2020-13434 \n CVE-2020-14391 CVE-2020-15358 CVE-2020-15503 \n CVE-2020-25660 CVE-2020-25677 CVE-2020-27618 \n CVE-2020-27781 CVE-2020-29361 CVE-2020-29362 \n CVE-2020-29363 CVE-2021-3121 CVE-2021-3326 \n CVE-2021-3449 CVE-2021-3450 CVE-2021-3516 \n CVE-2021-3517 CVE-2021-3518 CVE-2021-3520 \n CVE-2021-3521 CVE-2021-3537 CVE-2021-3541 \n CVE-2021-3733 CVE-2021-3749 CVE-2021-20305 \n CVE-2021-21684 CVE-2021-22946 CVE-2021-22947 \n CVE-2021-25215 CVE-2021-27218 CVE-2021-30666 \n CVE-2021-30761 CVE-2021-30762 CVE-2021-33928 \n CVE-2021-33929 CVE-2021-33930 
CVE-2021-33938 \n CVE-2021-36222 CVE-2021-37750 CVE-2021-39226 \n CVE-2021-41190 CVE-2021-43813 CVE-2021-44716 \n CVE-2021-44717 CVE-2022-0532 CVE-2022-21673 \n CVE-2022-24407 \n=====================================================================\n\n1. Summary:\n\nRed Hat OpenShift Container Platform release 4.10.3 is now available with\nupdates to packages and images that fix several bugs and add enhancements. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Description:\n\nRed Hat OpenShift Container Platform is Red Hat\u0027s cloud computing\nKubernetes application platform solution designed for on-premise or private\ncloud deployments. \n\nThis advisory contains the container images for Red Hat OpenShift Container\nPlatform 4.10.3. See the following advisory for the RPM packages for this\nrelease:\n\nhttps://access.redhat.com/errata/RHSA-2022:0055\n\nSpace precludes documenting all of the container images in this advisory. 
\nSee the following Release Notes documentation, which will be updated\nshortly for this release, for details about these changes:\n\nhttps://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html\n\nSecurity Fix(es):\n\n* gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index\nvalidation (CVE-2021-3121)\n* grafana: Snapshot authentication bypass (CVE-2021-39226)\n* golang: net/http: limit growth of header canonicalization cache\n(CVE-2021-44716)\n* nodejs-axios: Regular expression denial of service in trim function\n(CVE-2021-3749)\n* golang: syscall: don\u0027t close fd 0 on ForkExec error (CVE-2021-44717)\n* grafana: Forward OAuth Identity Token can allow users to access some data\nsources (CVE-2022-21673)\n* grafana: directory traversal vulnerability (CVE-2021-43813)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\nYou may download the oc tool and use it to inspect release image metadata\nas follows:\n\n(For x86_64 architecture)\n\n$ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.10.3-x86_64\n\nThe image digest is\nsha256:7ffe4cd612be27e355a640e5eec5cd8f923c1400d969fd590f806cffdaabcc56\n\n(For s390x architecture)\n\n $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.10.3-s390x\n\nThe image digest is\nsha256:4cf21a9399da1ce8427246f251ae5dedacfc8c746d2345f9cfe039ed9eda3e69\n\n(For ppc64le architecture)\n\n $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.10.3-ppc64le\n\nThe image digest is\nsha256:4ee571da1edf59dfee4473aa4604aba63c224bf8e6bcf57d048305babbbde93c\n\nAll OpenShift Container Platform 4.10 users are advised to upgrade to these\nupdated packages and images when they are available in the appropriate\nrelease channel. To check for available updates, use the OpenShift Console\nor the CLI oc command. 
Instructions for upgrading a cluster are available\nat\nhttps://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html\n\n3. Solution:\n\nFor OpenShift Container Platform 4.10 see the following documentation,\nwhich will be updated shortly for this release, for moderate instructions\non how to upgrade your cluster and fully apply this asynchronous errata\nupdate:\n\nhttps://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html\n\nDetails on how to access this content are available at\nhttps://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n1808240 - Always return metrics value for pods under the user\u0027s namespace\n1815189 - feature flagged UI does not always become available after operator installation\n1825034 - e2e: Mock CSI tests fail on IBM ROKS clusters\n1826225 - edge terminated h2 (gRPC) connections need a haproxy template change to work correctly\n1860774 - csr for vSphere egress nodes were not approved automatically during cert renewal\n1878106 - token inactivity timeout is not shortened after oauthclient/oauth config values are lowered\n1878925 - \u0027oc adm upgrade --to ...\u0027 rejects versions which occur only in history, while the cluster-version operator supports history fallback\n1880738 - origin e2e test deletes original worker\n1882983 - oVirt csi driver should refuse to provision RWX and ROX PV\n1886450 - Keepalived router id check not documented for RHV/VMware IPI\n1889488 - The metrics endpoint for the Scheduler is not protected by RBAC\n1894431 - Router pods fail to boot if the SSL certificate applied is missing an empty line at the bottom\n1896474 - Path based routing is broken for some combinations\n1897431 - CIDR support for additional network attachment with the bridge CNI plug-in\n1903408 - NodePort externalTrafficPolicy does not work for ovn-kubernetes\n1907433 - Excessive logging in image 
operator\n1909906 - The router fails with PANIC error when stats port already in use\n1911173 - [MSTR-998] Many charts\u0027 legend names show {{}} instead of words\n1914053 - pods assigned with Multus whereabouts IP get stuck in ContainerCreating state after node rebooting. \n1916169 - a reboot while MCO is applying changes leaves the node in undesirable state and MCP looks fine (UPDATED=true)\n1917893 - [ovirt] install fails: due to terraform error \"Cannot attach Virtual Disk: Disk is locked\" on vm resource\n1921627 - GCP UPI installation failed due to exceeding gcp limitation of instance group name\n1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation\n1926522 - oc adm catalog does not clean temporary files\n1927478 - Default CatalogSources deployed by marketplace do not have toleration for tainted nodes. \n1928141 - kube-storage-version-migrator constantly reporting type \"Upgradeable\" status Unknown\n1928285 - [LSO][OCS][arbiter] OCP Console shows no results while in fact underlying setup of LSO localvolumeset and it\u0027s storageclass is not yet finished, confusing users\n1931594 - [sig-cli] oc --request-timeout works as expected fails frequently on s390x\n1933847 - Prometheus goes unavailable (both instances down) during 4.8 upgrade\n1937085 - RHV UPI inventory playbook missing guarantee_memory\n1937196 - [aws ebs csi driver] events for block volume expansion may cause confusion\n1938236 - vsphere-problem-detector does not support overriding log levels via storage CR\n1939401 - missed labels for CMO/openshift-state-metric/telemeter-client/thanos-querier pods\n1939435 - Setting an IPv6 address in noProxy field causes error in openshift installer\n1939552 - [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]\n1942913 - ThanosSidecarUnhealthy isn\u0027t 
resilient to WAL replays. \n1943363 - [ovn] CNO should gracefully terminate ovn-northd\n1945274 - ostree-finalize-staged.service failed while upgrading a rhcos node to 4.6.17\n1948080 - authentication should not set Available=False APIServices_Error with 503s\n1949262 - Prometheus Statefulsets should have 2 replicas and hard affinity set\n1949672 - [GCP] Update 4.8 UPI template to match ignition version: 3.2.0\n1950827 - [LSO] localvolumediscoveryresult name is not friendly to customer\n1952576 - csv_succeeded metric not present in olm-operator for all successful CSVs\n1953264 - \"remote error: tls: bad certificate\" logs in prometheus-operator container\n1955300 - Machine config operator reports unavailable for 23m during upgrade\n1955489 - Alertmanager Statefulsets should have 2 replicas and hard affinity set\n1955490 - Thanos ruler Statefulsets should have 2 replicas and hard affinity set\n1955544 - [IPI][OSP] densed master-only installation with 0 workers fails due to missing worker security group on masters\n1956496 - Needs SR-IOV Docs Upstream\n1956739 - Permission for authorized_keys for core user changes from core user to root when changed the pull secret\n1956776 - [vSphere] Installer should do pre-check to ensure user-provided network name is valid\n1956964 - upload a boot-source to OpenShift virtualization using the console\n1957547 - [RFE]VM name is not auto filled in dev console\n1958349 - ovn-controller doesn\u0027t release the memory after cluster-density run\n1959352 - [scale] failed to get pod annotation: timed out waiting for annotations\n1960378 - icsp allows mirroring of registry root - install-config imageContentSources does not\n1960674 - Broken test: [sig-imageregistry][Serial][Suite:openshift/registry/serial] Image signature workflow can push a signed image to openshift registry and verify it [Suite:openshift/conformance/serial]\n1961317 - storage ClusterOperator does not declare ClusterRoleBindings in relatedObjects\n1961391 - String 
updates\n1961509 - DHCP daemon pod should have CPU and memory requests set but not limits\n1962066 - Edit machine/machineset specs not working\n1962206 - openshift-multus/dhcp-daemon set should meet platform requirements for update strategy that have maxUnavailable update of 10 or 33 percent\n1963053 - `oc whoami --show-console` should show the web console URL, not the server api URL\n1964112 - route SimpleAllocationPlugin: host name validation errors: spec.host: Invalid value: ... must be no more than 63 characters\n1964327 - Support containers with name:tag@digest\n1964789 - Send keys and disconnect does not work for VNC console\n1965368 - ClusterQuotaAdmission received non-meta object - message constantly reported in OpenShift Container Platform 4.7\n1966445 - Unmasking a service doesn\u0027t work if it masked using MCO\n1966477 - Use GA version in KAS/OAS/OauthAS to avoid: \"audit.k8s.io/v1beta1\" is deprecated and will be removed in a future release, use \"audit.k8s.io/v1\" instead\n1966521 - kube-proxy\u0027s userspace implementation consumes excessive CPU\n1968364 - [Azure] when using ssh type ed25519 bootstrap fails to come up\n1970021 - nmstate does not persist its configuration due to overlay systemd-connections-merged mount\n1970218 - MCO writes incorrect file contents if compression field is specified\n1970331 - [sig-auth][Feature:SCC][Early] should not have pod creation failures during install [Suite:openshift/conformance/parallel]\n1970805 - Cannot create build when docker image url contains dir structure\n1972033 - [azure] PV region node affinity is failure-domain.beta.kubernetes.io instead of topology.kubernetes.io\n1972827 - image registry does not remain available during upgrade\n1972962 - Should set the minimum value for the `--max-icsp-size` flag of `oc adm catalog mirror`\n1973447 - ovn-dbchecker peak memory spikes to ~500MiB during cluster-density run\n1975826 - ovn-kubernetes host directed traffic cannot be offloaded as CT zone 64000 is not 
established
1976301 - [ci] e2e-azure-upi is permafailing
1976399 - During the upgrade from OpenShift 4.5 to OpenShift 4.6 the election timers for the OVN north and south databases did not change.
1976674 - CCO didn't set Upgradeable to False when cco mode is configured to Manual on azure platform
1976894 - Unidling a StatefulSet does not work as expected
1977319 - [Hive] Remove stale cruft installed by CVO in earlier releases
1977414 - Build Config timed out waiting for condition 400: Bad Request
1977929 - [RFE] Display Network Attachment Definitions from openshift-multus namespace during OCS deployment via UI using Multus
1978528 - systemd-coredump started and failed intermittently for unknown reasons
1978581 - machine-config-operator: remove runlevel from mco namespace
1979562 - Cluster operators: don't show messages when neither progressing, degraded or unavailable
1979962 - AWS SDN Network Stress tests have not passed in 4.9 release-openshift-origin-installer-e2e-aws-sdn-network-stress-4.9
1979966 - OCP builds always fail when run on RHEL7 nodes
1981396 - Deleting pool inside pool page the pool stays in Ready phase in the heading
1981549 - Machine-config daemon does not recover from broken Proxy configuration
1981867 - [sig-cli] oc explain should contain proper fields description for special types [Suite:openshift/conformance/parallel]
1981941 - Terraform upgrade required in openshift-installer to resolve multiple issues
1982063 - 'Control Plane' is not translated in Simplified Chinese language in Home->Overview page
1982498 - Default registry credential path should be adjusted to use containers/auth.json for oc commands
1982662 - Workloads - DaemonSets - Add storage: i18n misses
1982726 - kube-apiserver audit logs show a lot of 404 errors for DELETE "*/secrets/encryption-config" on single node clusters
1983758 - upgrades are failing on disruptive tests
1983964 - Need Device plugin configuration for the NIC "needVhostNet" & "isRdma"
1984592 - global pull secret not working in OCP4.7.4+ for additional private registries
1985073 - new-in-4.8 ExtremelyHighIndividualControlPlaneCPU fires on some GCP update jobs
1985486 - Cluster Proxy not used during installation on OSP with Kuryr
1985724 - VM Details Page missing translations
1985838 - [OVN] CNO exportNetworkFlows does not clear collectors when deleted
1985933 - Downstream image registry recommendation
1985965 - oVirt CSI driver does not report volume stats
1986216 - [scale] SNO: Slow Pod recovery due to "timed out waiting for OVS port binding"
1986237 - "MachineNotYetDeleted" in Pending state , alert not fired
1986239 - crictl create fails with "PID namespace requested, but sandbox infra container invalid"
1986302 - console continues to fetch prometheus alert and silences for normal user
1986314 - Current MTV installation for KubeVirt import flow creates unusable Forklift UI
1986338 - error creating list of resources in Import YAML
1986502 - yaml multi file dnd duplicates previous dragged files
1986819 - fix string typos for hot-plug disks
1987044 - [OCPV48] Shutoff VM is being shown as "Starting" in WebUI when using spec.runStrategy Manual/RerunOnFailure
1987136 - Declare operatorframework.io/arch.* labels for all operators
1987257 - Go-http-client user-agent being used for oc adm mirror requests
1987263 - fsSpaceFillingUpWarningThreshold not aligned to Kubernetes Garbage Collection Threshold
1987445 - MetalLB integration: All gateway routers in the cluster answer ARP requests for LoadBalancer services IP
1988406 - SSH key dropped when selecting "Customize virtual machine" in UI
1988440 - Network operator changes ovnkube-config too early causing ovnkube-master pods to crashloop during cluster upgrade
1988483 - Azure drop ICMP need to frag FRAG when using OVN: openshift-apiserver becomes False after env runs some time due to communication between one master to pods on another master fails with "Unable to connect to the server"
1988879 - Virtual media based deployment fails on Dell servers due to pending Lifecycle Controller jobs
1989438 - expected replicas is wrong
1989502 - Developer Catalog is disappearing after short time
1989843 - 'More' and 'Show Less' functions are not translated on several page
1990014 - oc debug <pod-name> does not work for Windows pods
1990190 - e2e testing failed with basic manifest: reason/ExternalProvisioning waiting for a volume to be created
1990193 - 'more' and 'Show Less' is not being translated on Home -> Search page
1990255 - Partial or all of the Nodes/StorageClasses don't appear back on UI after text is removed from search bar
1990489 - etcdHighNumberOfFailedGRPCRequests fires only on metal env in CI
1990506 - Missing udev rules in initramfs for /dev/disk/by-id/scsi-* symlinks
1990556 - get-resources.sh doesn't honor the no_proxy settings even with no_proxy var
1990625 - Ironic agent registers with SLAAC address with privacy-stable
1990635 - CVO does not recognize the channel change if desired version and channel changed at the same time
1991067 - github.com can not be resolved inside pods where cluster is running on openstack.
1991573 - Enable typescript strictNullCheck on network-policies files
1991641 - Baremetal Cluster Operator still Available After Delete Provisioning
1991770 - The logLevel and operatorLogLevel values do not work with Cloud Credential Operator
1991819 - Misspelled word "ocurred" in oc inspect cmd
1991942 - Alignment and spacing fixes
1992414 - Two rootdisks show on storage step if 'This is a CD-ROM boot source' is checked
1992453 - The configMap failed to save on VM environment tab
1992466 - The button 'Save' and 'Reload' are not translated on vm environment tab
1992475 - The button 'Open console in New Window' and 'Disconnect' are not translated on vm console tab
1992509 - Could not customize boot source due to source PVC not found
1992541 - all the alert rules' annotations "summary" and "description" should comply with the OpenShift alerting guidelines
1992580 - storageProfile should stay with the same value by check/uncheck the apply button
1992592 - list-type missing in oauth.config.openshift.io for identityProviders breaking Server Side Apply
1992777 - [IBMCLOUD] Default "ibm_iam_authorization_policy" is not working as expected in all scenarios
1993364 - cluster destruction fails to remove router in BYON with Kuryr as primary network (even after BZ 1940159 got fixed)
1993376 - periodic-ci-openshift-release-master-ci-4.6-upgrade-from-stable-4.5-e2e-azure-upgrade is permfailing
1994094 - Some hardcodes are detected at the code level in OpenShift console components
1994142 - Missing required cloud config fields for IBM Cloud
1994733 - MetalLB: IP address is not assigned to service if there is duplicate IP address in two address pools
1995021 - resolv.conf and corefile sync slows down/stops after keepalived container restart
1995335 - [SCALE] ovnkube CNI: remove ovs flows check
1995493 - Add Secret to workload button and Actions button are not aligned on secret details page
1995531 - Create RDO-based Ironic image to be promoted to OKD
1995545 - Project drop-down amalgamates inside main screen while creating storage system for odf-operator
1995887 - [OVN]After reboot egress node, lr-policy-list was not correct, some duplicate records or missed internal IPs
1995924 - CMO should report `Upgradeable: false` when HA workload is incorrectly spread
1996023 - kubernetes.io/hostname values are larger than filter when create localvolumeset from webconsole
1996108 - Allow backwards compatibility of shared gateway mode to inject host-based routes into OVN
1996624 - 100% of the cco-metrics/cco-metrics targets in openshift-cloud-credential-operator namespace are down
1996630 - Fail to delete the first Authorized SSH Key input box on Advanced page
1996647 - Provide more useful degraded message in auth operator on DNS errors
1996736 - Large number of 501 lr-policies in INCI2 env
1996886 - timedout waiting for flows during pod creation and ovn-controller pegged on worker nodes
1996916 - Special Resource Operator(SRO) - Fail to deploy simple-kmod on GCP
1996928 - Enable default operator indexes on ARM
1997028 - prometheus-operator update removes env var support for thanos-sidecar
1997059 - Failed to create cluster in AWS us-east-1 region due to a local zone is used
1997226 - Ingresscontroller reconcilations failing but not shown in operator logs or status of ingresscontroller.
1997245 - "Subscription already exists in openshift-storage namespace" error message is seen while installing odf-operator via UI
1997269 - Have to refresh console to install kube-descheduler
1997478 - Storage operator is not available after reboot cluster instances
1997509 - flake: [sig-cli] oc builds new-build [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
1997967 - storageClass is not reserved from default wizard to customize wizard
1998035 - openstack IPI CI: custom var-lib-etcd.mount (ramdisk) unit is racing due to incomplete After/Before order
1998038 - [e2e][automation] add tests for UI for VM disk hot-plug
1998087 - Fix CephHealthCheck wrapping contents and add data-tests for HealthItem and SecondaryStatus
1998174 - Create storageclass gp3-csi after install ocp cluster on aws
1998183 - "r: Bad Gateway" info is improper
1998235 - Firefox warning: Cookie "csrf-token" will be soon rejected
1998377 - Filesystem table head is not full displayed in disk tab
1998378 - Virtual Machine is 'Not available' in Home -> Overview -> Cluster inventory
1998519 - Add fstype when create localvolumeset instance on web console
1998951 - Keepalived conf ingress peer on in Dual stack cluster contains both IPv6 and IPv4 addresses
1999076 - [UI] Page Not Found error when clicking on Storage link provided in Overview page
1999079 - creating pods before sriovnetworknodepolicy sync up succeed will cause node unschedulable
1999091 - Console update toast notification can appear multiple times
1999133 - removing and recreating static pod manifest leaves pod in error state
1999246 - .indexignore is not ingore when oc command load dc configuration
1999250 - ArgoCD in GitOps operator can't manage namespaces
1999255 - ovnkube-node always crashes out the first time it starts
1999261 - ovnkube-node log spam (and security token leak?)
1999309 - While installing odf-operator via UI, web console update pop-up navigates to OperatorHub -> Operator Installation page
1999314 - console-operator is slow to mark Degraded as False once console starts working
1999425 - kube-apiserver with "[SHOULD NOT HAPPEN] failed to update managedFields" err="failed to convert new object (machine.openshift.io/v1beta1, Kind=MachineHealthCheck)
1999556 - "master" pool should be updated before the CVO reports available at the new version occurred
1999578 - AWS EFS CSI tests are constantly failing
1999603 - Memory Manager allows Guaranteed QoS Pod with hugepages requested is exactly equal to the left over Hugepages
1999619 - cloudinit is malformatted if a user sets a password during VM creation flow
1999621 - Empty ssh_authorized_keys entry is added to VM's cloudinit if created from a customize flow
1999649 - MetalLB: Only one type of IP address can be assigned to service on dual stack cluster from a address pool that have both IPv4 and IPv6 addresses defined
1999668 - openshift-install destroy cluster panic's when given invalid credentials to cloud provider (Azure Stack Hub)
1999734 - IBM Cloud CIS Instance CRN missing in infrastructure manifest/resource
1999771 - revert "force cert rotation every couple days for development" in 4.10
1999784 - CVE-2021-3749 nodejs-axios: Regular expression denial of service in trim function
1999796 - Openshift Console `Helm` tab is not showing helm releases in a namespace when there is high number of deployments in the same namespace.
1999836 - Admin web-console inconsistent status summary of sparse ClusterOperator conditions
1999903 - Click "This is a CD-ROM boot source" ticking "Use template size PVC" on pvc upload form
1999983 - No way to clear upload error from template boot source
2000081 - [IPI baremetal] The metal3 pod failed to restart when switching from Disabled to Managed provisioning without specifying provisioningInterface parameter
2000096 - Git URL is not re-validated on edit build-config form reload
2000216 - Successfully imported ImageStreams are not resolved in DeploymentConfig
2000236 - Confusing usage message from dynkeepalived CLI
2000268 - Mark cluster unupgradable if vcenter, esxi versions or HW versions are unsupported
2000430 - bump cluster-api-provider-ovirt version in installer
2000450 - 4.10: Enable static PV multi-az test
2000490 - All critical alerts shipped by CMO should have links to a runbook
2000521 - Kube-apiserver CO degraded due to failed conditional check (ConfigObservationDegraded)
2000573 - Incorrect StorageCluster CR created and ODF cluster getting installed with 2 Zone OCP cluster
2000628 - ibm-flashsystem-storage-storagesystem got created without any warning even when the attempt was cancelled
2000651 - ImageStreamTag alias results in wrong tag and invalid link in Web Console
2000754 - IPerf2 tests should be lower
2000846 - Structure logs in the entire codebase of Local Storage Operator
2000872 - [tracker] container is not able to list on some directories within the nfs after upgrade to 4.7.24
2000877 - OCP ignores STOPSIGNAL in Dockerfile and sends SIGTERM
2000938 - CVO does not respect changes to a Deployment strategy
2000963 - 'Inline-volume (default fs)] volumes should store data' tests are failing on OKD with updated selinux-policy
2001008 - [MachineSets] CloneMode defaults to linkedClone, but I don't have snapshot and should be fullClone
2001240 - Remove response headers for downloads of binaries from OpenShift WebConsole
2001295 - Remove openshift:kubevirt-machine-controllers decleration from machine-api
2001317 - OCP Platform Quota Check - Inaccurate MissingQuota error
2001337 - Details Card in ODF Dashboard mentions OCS
2001339 - fix text content hotplug
2001413 - [e2e][automation] add/delete nic and disk to template
2001441 - Test: oc adm must-gather runs successfully for audit logs - fail due to startup log
2001442 - Empty termination.log file for the kube-apiserver has too permissive mode
2001479 - IBM Cloud DNS unable to create/update records
2001566 - Enable alerts for prometheus operator in UWM
2001575 - Clicking on the perspective switcher shows a white page with loader
2001577 - Quick search placeholder is not displayed properly when the search string is removed
2001578 - [e2e][automation] add tests for vm dashboard tab
2001605 - PVs remain in Released state for a long time after the claim is deleted
2001617 - BucketClass Creation is restricted on 1st page but enabled using side navigation options
2001620 - Cluster becomes degraded if it can't talk to Manila
2001760 - While creating 'Backing Store', 'Bucket Class', 'Namespace Store' user is navigated to 'Installed Operators' page after clicking on ODF
2001761 - Unable to apply cluster operator storage for SNO on GCP platform.
2001765 - Some error message in the log of diskmaker-manager caused confusion
2001784 - show loading page before final results instead of showing a transient message No log files exist
2001804 - Reload feature on Environment section in Build Config form does not work properly
2001810 - cluster admin unable to view BuildConfigs in all namespaces
2001817 - Failed to load RoleBindings list that will lead to 'Role name' is not able to be selected on Create RoleBinding page as well
2001823 - OCM controller must update operator status
2001825 - [SNO]ingress/authentication clusteroperator degraded when enable ccm from start
2001835 - Could not select image tag version when create app from dev console
2001855 - Add capacity is disabled for ocs-storagecluster
2001856 - Repeating event: MissingVersion no image found for operand pod
2001959 - Side nav list borders don't extend to edges of container
2002007 - Layout issue on "Something went wrong" page
2002010 - ovn-kube may never attempt to retry a pod creation
2002012 - Cannot change volume mode when cloning a VM from a template
2002027 - Two instances of Dotnet helm chart show as one in topology
2002075 - opm render does not automatically pulling in the image(s) used in the deployments
2002121 - [OVN] upgrades failed for IPI OSP16 OVN IPSec cluster
2002125 - Network policy details page heading should be updated to Network Policy details
2002133 - [e2e][automation] add support/virtualization and improve deleteResource
2002134 - [e2e][automation] add test to verify vm details tab
2002215 - Multipath day1 not working on s390x
2002238 - Image stream tag is not persisted when switching from yaml to form editor
2002262 - [vSphere] Incorrect user agent in vCenter sessions list
2002266 - SinkBinding create form doesn't allow to use subject name, instead of label selector
2002276 - OLM fails to upgrade operators immediately
2002300 - Altering the Schedule Profile configurations doesn't affect the placement of the pods
2002354 - Missing DU configuration "Done" status reporting during ZTP flow
2002362 - Dynamic Plugin - ConsoleRemotePlugin for webpack doesn't use commonjs
2002368 - samples should not go degraded when image allowedRegistries blocks imagestream creation
2002372 - Pod creation failed due to mismatched pod IP address in CNI and OVN
2002397 - Resources search is inconsistent
2002434 - CRI-O leaks some children PIDs
2002443 - Getting undefined error on create local volume set page
2002461 - DNS operator performs spurious updates in response to API's defaulting of service's internalTrafficPolicy
2002504 - When the openshift-cluster-storage-operator is degraded because of "VSphereProblemDetectorController_SyncError", the insights operator is not sending the logs from all pods.
2002559 - User preference for topology list view does not follow when a new namespace is created
2002567 - Upstream SR-IOV worker doc has broken links
2002588 - Change text to be sentence case to align with PF
2002657 - ovn-kube egress IP monitoring is using a random port over the node network
2002713 - CNO: OVN logs should have millisecond resolution
2002748 - [ICNI2] 'ErrorAddingLogicalPort' failed to handle external GW check: timeout waiting for namespace event
2002759 - Custom profile should not allow not including at least one required HTTP2 ciphersuite
2002763 - Two storage systems getting created with external mode RHCS
2002808 - KCM does not use web identity credentials
2002834 - Cluster-version operator does not remove unrecognized volume mounts
2002896 - Incorrect result return when user filter data by name on search page
2002950 - Why spec.containers.command is not created with "oc create deploymentconfig <dc-name> --image=<image> -- <command>"
2003096 - [e2e][automation] check bootsource URL is displaying on review step
2003113 - OpenShift Baremetal IPI installer uses first three defined nodes under hosts in install-config for master nodes instead of filtering the hosts with the master role
2003120 - CI: Uncaught error with ResizeObserver on operand details page
2003145 - Duplicate operand tab titles causes "two children with the same key" warning
2003164 - OLM, fatal error: concurrent map writes
2003178 - [FLAKE][knative] The UI doesn't show updated traffic distribution after accepting the form
2003193 - Kubelet/crio leaks netns and veth ports in the host
2003195 - OVN CNI should ensure host veths are removed
2003204 - Jenkins all new container images (openshift4/ose-jenkins) not supporting '-e JENKINS_PASSWORD=password' ENV which was working for old container images
2003206 - Namespace stuck terminating: Failed to delete all resource types, 1 remaining: unexpected items still remain in namespace
2003239 - "[sig-builds][Feature:Builds][Slow] can use private repositories as build input" tests fail outside of CI
2003244 - Revert libovsdb client code
2003251 - Patternfly components with list element has list item bullet when they should not.
2003252 - "[sig-builds][Feature:Builds][Slow] starting a build using CLI start-build test context override environment BUILD_LOGLEVEL in buildconfig" tests do not work as expected outside of CI
2003269 - Rejected pods should be filtered from admission regression
2003357 - QE- Removing the epic tags for gherkin tags related to 4.9 Release
2003426 - [e2e][automation] add test for vm details bootorder
2003496 - [e2e][automation] add test for vm resources requirment settings
2003641 - All metal ipi jobs are failing in 4.10
2003651 - ODF4.9+LSO4.8 installation via UI, StorageCluster move to error state
2003655 - [IPI ON-PREM] Keepalived chk_default_ingress track script failed even though default router pod runs on node
2003683 - Samples operator is panicking in CI
2003711 - [UI] Empty file ceph-external-cluster-details-exporter.py downloaded from external cluster "Connection Details" page
2003715 - Error on creating local volume set after selection of the volume mode
2003743 - Remove workaround keeping /boot RW for kdump support
2003775 - etcd pod on CrashLoopBackOff after master replacement procedure
2003788 - CSR reconciler report error constantly when BYOH CSR approved by other Approver
2003792 - Monitoring metrics query graph flyover panel is useless
2003808 - Add Sprint 207 translations
2003845 - Project admin cannot access image vulnerabilities view
2003859 - sdn emits events with garbage messages
2003896 - (release-4.10) ApiRequestCounts conditional gatherer
2004009 - 4.10: Fix multi-az zone scheduling e2e for 5 control plane replicas
2004051 - CMO can report as being Degraded while node-exporter is deployed on all nodes
2004059 - [e2e][automation] fix current tests for downstream
2004060 - Trying to use basic spring boot sample causes crash on Firefox
2004101 - [UI] When creating storageSystem deployment type dropdown under advanced setting doesn't close after selection
2004127 - [flake] openshift-controller-manager event reason/SuccessfulDelete occurs too frequently
2004203 - build config's created prior to 4.8 with image change triggers can result in trigger storm in OCM/openshift-apiserver
2004313 - [RHOCP 4.9.0-rc.0] Failing to deploy Azure cluster from the macOS installer - ignition_bootstrap.ign: no such file or directory
2004449 - Boot option recovery menu prevents image boot
2004451 - The backup filename displayed in the RecentBackup message is incorrect
2004459 - QE - Modified the AddFlow gherkin scripts and automation scripts
2004508 - TuneD issues with the recent ConfigParser changes.
2004510 - openshift-gitops operator hooks gets unauthorized (401) errors during jobs executions
2004542 - [osp][octavia lb] cannot create LoadBalancer type svcs
2004578 - Monitoring and node labels missing for an external storage platform
2004585 - prometheus-k8s-0 cpu usage keeps increasing for the first 3 days
2004596 - [4.10] Bootimage bump tracker
2004597 - Duplicate ramdisk log containers running
2004600 - Duplicate ramdisk log containers running
2004609 - output of "crictl inspectp" is not complete
2004625 - BMC credentials could be logged if they change
2004632 - When LE takes a large amount of time, multiple whereabouts are seen
2004721 - ptp/worker custom threshold doesn't change ptp events threshold
2004736 - [knative] Create button on new Broker form is inactive despite form being filled
2004796 - [e2e][automation] add test for vm scheduling policy
2004814 - (release-4.10) OCM controller - change type of the etc-pki-entitlement secret to opaque
2004870 - [External Mode] Insufficient spacing along y-axis in RGW Latency Performance Card
2004901 - [e2e][automation] improve kubevirt devconsole tests
2004962 - Console frontend job consuming too much CPU in CI
2005014 - state of ODF StorageSystem is misreported during installation or uninstallation
2005052 - Adding a MachineSet selector matchLabel causes orphaned Machines
2005179 - pods status filter is not taking effect
2005182 - sync list of deprecated apis about to be removed
2005282 - Storage cluster name is given as title in StorageSystem details page
2005355 - setuptools 58 makes Kuryr CI fail
2005407 - ClusterNotUpgradeable Alert should be set to Severity Info
2005415 - PTP operator with sidecar api configured throws bind: address already in use
2005507 - SNO spoke cluster failing to reach coreos.live.rootfs_url is missing url in console
2005554 - The switch status of the button "Show default project" is not revealed correctly in code
2005581 - 4.8.12 to 4.9 upgrade hung due to cluster-version-operator pod CrashLoopBackOff: error creating clients: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
2005761 - QE - Implementing crw-basic feature file
2005783 - Fix accessibility issues in the "Internal" and "Internal - Attached Mode" Installation Flow
2005811 - vSphere Problem Detector operator - ServerFaultCode: InvalidProperty
2005854 - SSH NodePort service is created for each VM
2005901 - KS, KCM and KA going Degraded during master nodes upgrade
2005902 - Current UI flow for MCG only deployment is confusing and doesn't reciprocate any message to the end-user
2005926 - PTP operator NodeOutOfPTPSync rule is using max offset from the master instead of openshift_ptp_clock_state metrics
2005971 - Change telemeter to report the Application Services product usage metrics
2005997 - SELinux domain container_logreader_t does not have a policy to follow sym links for log files
2006025 - Description to use an existing StorageClass while creating StorageSystem needs to be re-phrased
2006060 - ocs-storagecluster-storagesystem details are missing on UI for MCG Only and MCG only in LSO mode deployment types
2006101 - Power off fails for drivers that don't support Soft power off
2006243 - Metal IPI upgrade jobs are running out of disk space
2006291 - bootstrapProvisioningIP set incorrectly when provisioningNetworkCIDR doesn't use the 0th address
2006308 - Backing Store YAML tab on click displays a blank screen on UI
2006325 - Multicast is broken across nodes
2006329 - Console only allows Web Terminal Operator to be installed in OpenShift Operators
2006364 - IBM Cloud: Set resourceGroupId for resourceGroups, not simply resource
2006561 - [sig-instrumentation] Prometheus when installed on the cluster shouldn't have failing rules evaluation [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2006690 - OS boot failure "x64 Exception Type 06 - Invalid Opcode Exception"
2006714 - add retry for etcd errors in kube-apiserver
2006767 - KubePodCrashLooping may not fire
2006803 - Set CoreDNS cache entries for forwarded zones
2006861 - Add Sprint 207 part 2 translations
2006945 - race condition can cause crashlooping bootstrap kube-apiserver in cluster-bootstrap
2006947 - e2e-aws-proxy for 4.10 is permafailing with samples operator errors
2006975 - clusteroperator/etcd status condition should not change reasons frequently due to EtcdEndpointsDegraded
2007085 - Intermittent failure mounting /run/media/iso when booting live ISO from USB stick
2007136 - Creation of BackingStore, BucketClass, NamespaceStore fails
2007271 - CI Integration for Knative test cases
2007289 - kubevirt tests are failing in CI
2007322 - Devfile/Dockerfile import does not work for unsupported git host
2007328 - Updated patternfly to v4.125.3 and pf.quickstarts to v1.2.3.
2007379 - Events are not generated for master offset for ordinary clock
2007443 - [ICNI 2.0] Loadbalancer pods do not establish BFD sessions with all workers that host pods for the routed namespace
2007455 - cluster-etcd-operator: render command should fail if machineCidr contains reserved address
2007495 - Large label value for the metric kubelet_started_pods_errors_total with label message when there is a error
2007522 - No new local-storage-operator-metadata-container is build for 4.10
2007551 - No new ose-aws-efs-csi-driver-operator-bundle-container is build for 4.10
2007580 - Azure cilium installs are failing e2e tests
2007581 - Too many haproxy processes in default-router pod causing high load average after upgrade from v4.8.3 to v4.8.10
2007677 - Regression: core container io performance metrics are missing for pod, qos, and system slices on nodes
2007692 - 4.9 "old-rhcos" jobs are permafailing with storage test failures
2007710 - ci/prow/e2e-agnostic-cmd job is failing on prow
2007757 - must-gather extracts imagestreams in the "openshift" namespace, but not Templates
2007802 - AWS machine actuator get stuck if machine is completely missing
2008096 - TestAWSFinalizerDeleteS3Bucket sometimes fails to teardown operator
2008119 - The serviceAccountIssuer field on Authentication CR is reseted to "" when installation process
2008151 - Topology breaks on clicking in empty state
2008185 - Console operator go.mod should use go 1.16.version
2008201 - openstack-az job is failing on haproxy idle test
2008207 - vsphere CSI driver doesn't set resource limits
2008223 - gather_audit_logs: fix oc command line to get the current audit profile
2008235 - The Save button in the Edit DC form remains disabled
2008256 - Update Internationalization README with scope info
2008321 - Add correct documentation link for MON_DISK_LOW
2008462 - Disable PodSecurity feature gate for 4.10
2008490 - Backing store details page does not contain all the kebab actions.
2008521 - gcp-hostname service should correct invalid search entries in resolv.conf
2008532 - CreateContainerConfigError:: failed to prepare subPath for volumeMount
2008539 - Registry doesn't fall back to secondary ImageContentSourcePolicy Mirror
2008540 - HighlyAvailableWorkloadIncorrectlySpread always fires on upgrade on cluster with two workers
2008599 - Azure Stack UPI does not have Internal Load Balancer
2008612 - Plugin asset proxy does not pass through browser cache headers
2008712 - VPA webhook timeout prevents all pods from starting
2008733 - kube-scheduler: exposed /debug/pprof port
2008911 - Prometheus repeatedly scaling prometheus-operator replica set
2008926 - [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources [Serial] [Suite:openshift/conformance/serial]
2008987 - OpenShift SDN Hosted Egress IP's are not being scheduled to nodes after upgrade to 4.8.12
2009055 - Instances of OCS to be replaced with ODF on UI
2009078 - NetworkPodsCrashLooping alerts in upgrade CI jobs
2009083 - opm blocks pruning of existing bundles during add
2009111 - [IPI-on-GCP] 'Install a cluster with nested virtualization enabled' failed due to unable to launch compute instances
2009131 - [e2e][automation] add more test about vmi
2009148 - [e2e][automation] test vm nic presets and options
2009233 - ACM policy object generated by PolicyGen conflicting with OLM Operator
2009253 - [BM] [IPI] [DualStack] apiVIP and ingressVIP should be of the same primary IP family
2009298 - Service created for VM SSH access is not owned by the VM and thus is not deleted if the VM is deleted
2009384 - UI changes to support BindableKinds CRD changes
2009404 - ovnkube-node pod enters CrashLoopBackOff after OVN_IMAGE is swapped
2009424 - Deployment upgrade is failing availability check
2009454 - Change web terminal subscription permissions from get to list
2009465 - container-selinux should come from rhel8-appstream
2009514 - Bump OVS to 2.16-15
2009555 - Supermicro X11 system not booting from vMedia with AI
2009623 - Console: Observe > Metrics page: Table pagination menu shows bullet points
2009664 - Git Import: Edit of knative service doesn't work as expected for git import flow
2009699 - Failure to validate flavor RAM
2009754 - Footer is not sticky anymore in import forms
2009785 - CRI-O's version file should be pinned by MCO
2009791 - Installer: ibmcloud ignores install-config values
2009823 - [sig-arch] events should not repeat pathologically - reason/VSphereOlderVersionDetected Marking cluster un-upgradeable because one or more VMs are on hardware version vmx-13
2009840 - cannot build extensions on aarch64 because of unavailability of rhel-8-advanced-virt repo
2009859 - Large number of sessions created by vmware-vsphere-csi-driver-operator during e2e tests
2009873 - Stale Logical Router Policies and Annotations for a given node
2009879 - There should be test-suite coverage to ensure admin-acks work as expected
2009888 - SRO package name collision between official and community version
2010073 - uninstalling and then reinstalling sriov-network-operator is not working
2010174 - 2 PVs get created unexpectedly with different paths that actually refer to the same device on the node.
2010181 - Environment variables not getting reset on reload on deployment edit form
2010310 - [sig-instrumentation][Late] OpenShift alerting rules should have description and summary annotations [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2010341 - OpenShift Alerting Rules Style-Guide Compliance
2010342 - Local console builds can have out of memory errors
2010345 - OpenShift Alerting Rules Style-Guide Compliance
2010348 - Reverts PIE build mode for K8S components
2010352 - OpenShift Alerting Rules Style-Guide Compliance
2010354 - OpenShift Alerting Rules Style-Guide Compliance
2010359 - OpenShift Alerting Rules Style-Guide Compliance
2010368 - OpenShift Alerting Rules Style-Guide Compliance
2010376 - OpenShift Alerting Rules Style-Guide Compliance
2010662 - Cluster is unhealthy after image-registry-operator tests
2010663 - OpenShift Alerting Rules Style-Guide Compliance (ovn-kubernetes subcomponent)
2010665 - Bootkube tries to use oc after cluster bootstrap is done and there is no API
2010698 - [BM] [IPI] [Dual Stack] Installer must ensure ipv6 short forms too if clusterprovisioning IP is specified as ipv6 address
2010719 - etcdHighNumberOfFailedGRPCRequests runbook is missing
2010864 - Failure building EFS operator
2010910 - ptp worker events unable to identify interface for multiple interfaces
2010911 - RenderOperatingSystem() returns wrong OS version on OCP 4.7.24
2010921 - Azure Stack Hub does not handle additionalTrustBundle
2010931 - SRO CSV uses non default category "Drivers and plugins"
2010946 - concurrent CRD from ovirt-csi-driver-operator gets reconciled by CVO after deployment, changing CR as well.
2011038 - optional operator conditions are confusing
2011063 - CVE-2021-39226 grafana: Snapshot authentication bypass
2011171 - diskmaker-manager constantly redeployed by LSO when creating LV's
2011293 - Build pod are not pulling images if we are not explicitly giving the registry name with the image
2011368 - Tooltip in pipeline visualization shows misleading data
2011386 - [sig-arch] Check if alerts are firing during or after upgrade success --- alert KubePodNotReady fired for 60 seconds with labels
2011411 - Managed Service's Cluster overview page contains link to missing Storage dashboards
2011443 - Cypress tests assuming Admin Perspective could fail on shared/reference cluster
2011513 - Kubelet rejects pods that use resources that should be freed by completed pods
2011668 - Machine stuck in deleting phase in VMware "reconciler failed to Delete machine"
2011693 - (release-4.10) "insightsclient_request_recvreport_total" metric is always incremented
2011698 - After upgrading cluster to 4.8 the kube-state-metrics service doesn't export namespace labels anymore
2011733 - Repository README points to broken documentarion link
2011753 - Ironic resumes clean before raid configuration job is actually completed
2011809 - The nodes page in the openshift console doesn't work. You just get a blank page
2011822 - Obfuscation doesn't work at clusters with OVN
2011882 - SRO helm charts not synced with templates
2011893 - Validation: BMC driver ipmi is not supported for secure UEFI boot
2011896 - [4.10] ClusterVersion Upgradeable=False MultipleReasons should include all messages
2011903 - vsphere-problem-detector: session leak
2011927 - OLM should allow users to specify a proxy for GRPC connections
2011956 - [tracker] Kubelet rejects pods that use resources that should be freed by completed pods
2011960 - [tracker] Storage operator is not available after reboot cluster instances
2011971 - ICNI2 pods are stuck in ContainerCreating state
2011972 - Ingress operator not creating wildcard route for hypershift clusters
2011977 - SRO bundle references non-existent image
2012069 - Refactoring Status controller
2012177 - [OCP 4.9 + OCS 4.8.3] Overview tab is missing under Storage after successful deployment on UI
2012228 - ibmcloud: credentialsrequests invalid for machine-api-operator: resource-group
2012233 - [IBMCLOUD] IPI: "Exceeded limit of remote rules per security group (the limit is 5 remote rules per security group)"
2012235 - [IBMCLOUD] IPI: IBM cloud provider requires ResourceGroupName in cloudproviderconfig
2012317 - Dynamic Plugins: ListPageCreateDropdown items cut off
2012407 - [e2e][automation] improve vm tab console tests
2012426 - ThanosSidecarBucketOperationsFailed/ThanosSidecarUnhealthy alerts don't have namespace label
2012562 - migration condition is not detected in list view
2012770 - when using expression metric openshift_apps_deploymentconfigs_last_failed_rollout_time namespace label is re-written
2012780 - The port 50936 used by haproxy is occupied by kube-apiserver
2012838 - Setting the default maximum container root partition size for Overlay with CRI-O stop working
2012902 - Neutron Ports assigned to Completed Pods are not reused Edit
2012915 - kube_persistentvolumeclaim_labels and kube_persistentvolume_labels are missing in OCP 4.8 monitoring stack
2012971 - Disable operands deletes
2013034 - Cannot install to openshift-nmstate namespace
2013127 - OperatorHub links could not be opened in a new tabs (sharing and open a deep link works fine)
2013199 - post reboot of node SRIOV policy taking huge time
2013203 - UI breaks when trying to create block pool before storage cluster/system creation
2013222 - Full breakage for nightly payload promotion
2013273 - Nil pointer exception when phc2sys options are missing
2013321 - TuneD: high CPU utilization of the TuneD daemon.
2013416 - Multiple assets emit different content to the same filename
2013431 - Application selector dropdown has incorrect font-size and positioning
2013528 - mapi_current_pending_csr is always set to 1 on OpenShift Container Platform 4.8
2013545 - Service binding created outside topology is not visible
2013599 - Scorecard support storage is not included in ocp4.9
2013632 - Correction/Changes in Quick Start Guides for ODF 4.9 (Install ODF guide)
2013646 - fsync controller will show false positive if gaps in metrics are observed.
2013710 - ZTP Operator subscriptions for 4.9 release branch should point to 4.9 by default
2013751 - Service details page is showing wrong in-cluster hostname
2013787 - There are two tittle 'Network Attachment Definition Details' on NAD details page
2013871 - Resource table headings are not aligned with their column data
2013895 - Cannot enable accelerated network via MachineSets on Azure
2013920 - "--collector.filesystem.ignored-mount-points is DEPRECATED and will be removed in 2.0.0, use --collector.filesystem.mount-points-exclude"
2013930 - Create Buttons enabled for Bucket Class, Backingstore and Namespace Store in the absence of Storagesystem(or MCG)
2013969 - oVIrt CSI driver fails on creating PVCs on hosted engine storage domain
2013990 - Observe dashboard crashs on reload when perspective has changed (in another tab)
2013996 - Project detail page: Action "Delete Project" does nothing for the default project
2014071 - Payload imagestream new tags not properly updated during cluster upgrade
2014153 - SRIOV exclusive pooling
2014202 - [OCP-4.8.10] OVN-Kubernetes: service IP is not responding when egressIP set to the namespace
2014238 - AWS console test is failing on importing duplicate YAML definitions
2014245 - Several aria-labels, external links, and labels aren't internationalized
2014248 - Several files aren't internationalized
2014352 - Could not filter out machine by using node name on machines page
2014464 - Unexpected spacing/padding below navigation groups in developer perspective
2014471 - Helm Release notes tab is not automatically open after installing a chart for other languages
2014486 - Integration Tests: OLM single namespace operator tests failing
2014488 - Custom operator cannot change orders of condition tables
2014497 - Regex slows down different forms and creates too much recursion errors in the log
2014538 - Kuryr controller crash looping on self._get_vip_port(loadbalancer).id 'NoneType' object has no attribute 'id'
2014614 - Metrics scraping requests should be assigned to exempt priority level
2014710 - TestIngressStatus test is broken on Azure
2014954 - The prometheus-k8s-{0,1} pods are CrashLoopBackoff repeatedly
2014995 - oc adm must-gather cannot gather audit logs with 'None' audit profile
2015115 - [RFE] PCI passthrough
2015133 - [IBMCLOUD] ServiceID API key credentials seems to be insufficient for ccoctl '--resource-group-name' parameter
2015154 - Support ports defined networks and primarySubnet
2015274 - Yarn dev fails after updates to dynamic plugin JSON schema logic
2015337 - 4.9.0 GA MetalLB operator image references need to be adjusted to match production
2015386 - Possibility to add labels to the built-in OCP alerts
2015395 - Table head on Affinity Rules modal is not fully expanded
2015416 - CI implementation for Topology plugin
2015418 - Project Filesystem query returns No datapoints found
2015420 - No vm resource in project view's inventory
2015422 - No conflict checking on snapshot name
2015472 - Form and YAML view switch button should have distinguishable status
2015481 - [4.10] sriov-network-operator daemon pods are failing to start
2015493 - Cloud Controller Manager Operator does not respect 'additionalTrustBundle' setting
2015496 - Storage - PersistentVolumes : Claim colum value 'No Claim' in English
2015498 - [UI] Add capacity when not applicable (for MCG only deployment and External mode cluster) fails to pass any info. to user and tries to just load a blank screen on 'Add Capacity' button click
2015506 - Home - Search - Resources - APIRequestCount : hard to select an item from ellipsis menu
2015515 - Kubelet checks all providers even if one is configured: NoCredentialProviders: no valid providers in chain.
2015535 - Administration - ResourceQuotas - ResourceQuota details: Inside Pie chart 'x% used' is in English
2015549 - Observe - Metrics: Column heading and pagination text is in English
2015557 - Workloads - DeploymentConfigs : Error message is in English
2015568 - Compute - Nodes : CPU column's values are in English
2015635 - Storage operator fails causing installation to fail on ASH
2015660 - "Finishing boot source customization" screen should not use term "patched"
2015793 - [hypershift] The collect-profiles job's pods should run on the control-plane node
2015806 - Metrics view in Deployment reports "Forbidden" when not cluster-admin
2015819 - Conmon sandbox processes run on non-reserved CPUs with workload partitioning
2015837 - OS_CLOUD overwrites install-config's platform.openstack.cloud
2015950 - update from 4.7.22 to 4.8.11 is failing due to large amount of secrets to watch
2015952 - RH CodeReady Workspaces Operator in e2e testing will soon fail
2016004 - [RFE] RHCOS: help determining whether a user-provided image was already booted (Ignition provisioning already performed)
2016008 - [4.10] Bootimage bump tracker
2016052 - No e2e CI presubmit configured for release component azure-file-csi-driver
2016053 - No e2e CI presubmit configured for release component azure-file-csi-driver-operator
2016054 - No e2e CI presubmit configured for release component cluster-autoscaler
2016055 - No e2e CI presubmit configured for release component console
2016058 - openshift-sync does not synchronise in "ose-jenkins:v4.8"
2016064 - No e2e CI presubmit configured for release component ibm-cloud-controller-manager
2016065 - No e2e CI presubmit configured for release component ibmcloud-machine-controllers
2016175 - Pods get stuck in ContainerCreating state when attaching volumes fails on SNO clusters.
2016179 - Add Sprint 208 translations
2016228 - Collect Profiles pprof secret is hardcoded to openshift-operator-lifecycle-manager
2016235 - should update to 7.5.11 for grafana resources version label
2016296 - Openshift virtualization : Create Windows Server 2019 VM using template : Fails
2016334 - shiftstack: SRIOV nic reported as not supported
2016352 - Some pods start before CA resources are present
2016367 - Empty task box is getting created for a pipeline without finally task
2016435 - Duplicate AlertmanagerClusterFailedToSendAlerts alerts
2016438 - Feature flag gating is missing in few extensions contributed via knative plugin
2016442 - OCPonRHV: pvc should be in Bound state and without error when choosing default sc
2016446 - [OVN-Kubernetes] Egress Networkpolicy is failing Intermittently for statefulsets
2016453 - Complete i18n for GaugeChart defaults
2016479 - iface-id-ver is not getting updated for existing lsp
2016925 - Dashboards with All filter, change to a specific value and change back to All, data will disappear
2016951 - dynamic actions list is not disabling "open console" for stopped vms
2016955 - m5.large instance type for bootstrap node is hardcoded causing deployments to fail if instance type is not available
2016988 - NTO does not set io_timeout and max_retries for AWS Nitro instances
2017016 - [REF] Virtualization menu
2017036 - [sig-network-edge][Feature:Idling] Unidling should handle many TCP connections fails in periodic-ci-openshift-release-master-ci-4.9-e2e-openstack-ovn
2017050 - Dynamic Plugins: Shared modules loaded multiple times, breaking use of PatternFly
2017130 - t is not a function error navigating to details page
2017141 - Project dropdown has a dynamic inline width added which can cause min-width issue
2017244 - ovirt csi operator static files creation is in the wrong order
2017276 - [4.10] Volume mounts not created with the correct security context
2017327 - When run opm index prune failed with error removing operator package cic-operator FOREIGN KEY constraint failed.
2017427 - NTO does not restart TuneD daemon when profile application is taking too long
2017535 - Broken Argo CD link image on GitOps Details Page
2017547 - Siteconfig application sync fails with The AgentClusterInstall is invalid: spec.provisionRequirements.controlPlaneAgents: Required value when updating images references
2017564 - On-prem prepender dispatcher script overwrites DNS search settings
2017565 - CCMO does not handle additionalTrustBundle on Azure Stack
2017566 - MetalLB: Web Console -Create Address pool form shows address pool name twice
2017606 - [e2e][automation] add test to verify send key for VNC console
2017650 - [OVN]EgressFirewall cannot be applied correctly if cluster has windows nodes
2017656 - VM IP address is "undefined" under VM details -> ssh field
2017663 - SSH password authentication is disabled when public key is not supplied
2017680 - [gcp] Couldn't enable support for instances with GPUs on GCP
2017732 - [KMS] Prevent creation of encryption enabled storageclass without KMS connection set
2017752 - (release-4.10) obfuscate identity provider attributes in collected authentication.operator.openshift.io resource
2017756 - overlaySize setting on containerruntimeconfig is ignored due to cri-o defaults
2017761 - [e2e][automation] dummy bug for 4.9 test dependency
2017872 - Add Sprint 209 translations
2017874 - The installer is incorrectly checking the quota for X instances instead of G and VT instances
2017879 - Add Chinese translation for "alternate"
2017882 - multus: add handling of pod UIDs passed from runtime
2017909 - [ICNI 2.0] ovnkube-masters stop processing add/del events for pods
2018042 - HorizontalPodAutoscaler CPU averageValue did not show up in HPA metrics GUI
2018093 - Managed cluster should ensure control plane pods do not run in best-effort QoS
2018094 - the tooltip length is limited
2018152 - CNI pod is not restarted when It cannot start servers due to ports being used
2018208 - e2e-metal-ipi-ovn-ipv6 are failing 75% of the time
2018234 - user settings are saved in local storage instead of on cluster
2018264 - Delete Export button doesn't work in topology sidebar (general issue with unknown CSV?)
2018272 - Deployment managed by link and topology sidebar links to invalid resource page (at least for Exports)
2018275 - Topology graph doesn't show context menu for Export CSV
2018279 - Edit and Delete confirmation modals for managed resource should close when the managed resource is clicked
2018380 - Migrate docs links to access.redhat.com
2018413 - Error: context deadline exceeded, OCP 4.8.9
2018428 - PVC is deleted along with VM even with "Delete Disks" unchecked
2018445 - [e2e][automation] enhance tests for downstream
2018446 - [e2e][automation] move tests to different level
2018449 - [e2e][automation] add test about create/delete network attachment definition
2018490 - [4.10] Image provisioning fails with file name too long
2018495 - Fix typo in internationalization README
2018542 - Kernel upgrade does not reconcile DaemonSet
2018880 - Get 'No datapoints found.' when query metrics about alert rule KubeCPUQuotaOvercommit and KubeMemoryQuotaOvercommit
2018884 - QE - Adapt crw-basic feature file to OCP 4.9/4.10 changes
2018935 - go.sum not updated, that ART extracts version string from, WAS: Missing backport from 4.9 for Kube bump PR#950
2018965 - e2e-metal-ipi-upgrade is permafailing in 4.10
2018985 - The rootdisk size is 15Gi of windows VM in customize wizard
2019001 - AWS: Operator degraded (CredentialsFailing): 1 of 6 credentials requests are failing to sync.
2019096 - Update SRO leader election timeout to support SNO
2019129 - SRO in operator hub points to wrong repo for README
2019181 - Performance profile does not apply
2019198 - ptp offset metrics are not named according to the log output
2019219 - [IBMCLOUD]: cloud-provider-ibm missing IAM permissions in CCCMO CredentialRequest
2019284 - Stop action should not in the action list while VMI is not running
2019346 - zombie processes accumulation and Argument list too long
2019360 - [RFE] Virtualization Overview page
2019452 - Logger object in LSO appends to existing logger recursively
2019591 - Operator install modal body that scrolls has incorrect padding causing shadow position to be incorrect
2019634 - Pause and migration is enabled in action list for a user who has view only permission
2019636 - Actions in VM tabs should be disabled when user has view only permission
2019639 - "Take snapshot" should be disabled while VM image is still been importing
2019645 - Create button is not removed on "Virtual Machines" page for view only user
2019646 - Permission error should pop-up immediately while clicking "Create VM" button on template page for view only user
2019647 - "Remove favorite" and "Create new Template" should be disabled in template action list for view only user
2019717 - cant delete VM with un-owned pvc attached
2019722 - The shared-resource-csi-driver-node pod runs as "BestEffort" qosClass
2019739 - The shared-resource-csi-driver-node uses imagePullPolicy as "Always"
2019744 - [RFE] Suggest users to download newest RHEL 8 version
2019809 - [OVN][Upgrade] After upgrade to 4.7.34 ovnkube-master pods are in CrashLoopBackOff/ContainerCreating and other multiple issues at OVS/OVN level
2019827 - Display issue with top-level menu items running demo plugin
2019832 - 4.10 Nightlies blocked: Failed to upgrade authentication, operator was degraded
2019886 - Kuryr unable to finish ports recovery upon controller restart
2019948 - [RFE] Restructring Virtualization links
2019972 - The Nodes section doesn't display the csr of the nodes that are trying to join the cluster
2019977 - Installer doesn't validate region causing binary to hang with a 60 minute timeout
2019986 - Dynamic demo plugin fails to build
2019992 - instance:node_memory_utilisation:ratio metric is incorrect
2020001 - Update dockerfile for demo dynamic plugin to reflect dir change
2020003 - MCD does not regard "dangling" symlinks as a files, attempts to write through them on next backup, resulting in "not writing through dangling symlink" error and degradation.
2020107 - cluster-version-operator: remove runlevel from CVO namespace
2020153 - Creation of Windows high performance VM fails
2020216 - installer: Azure storage container blob where is stored bootstrap.ign file shouldn't be public
2020250 - Replacing deprecated ioutil
2020257 - Dynamic plugin with multiple webpack compilation passes may fail to build
2020275 - ClusterOperators link in console returns blank page during upgrades
2020377 - permissions error while using tcpdump option with must-gather
2020489 - coredns_dns metrics don't include the custom zone metrics data due to CoreDNS prometheus plugin is not defined
2020498 - "Show PromQL" button is disabled
2020625 - [AUTH-52] User fails to login from web console with keycloak OpenID IDP after enable group membership sync feature
2020638 - [4.7] CI conformance test failures related to CustomResourcePublishOpenAPI
2020664 - DOWN subports are not cleaned up
2020904 - When trying to create a connection from the Developer view between VMs, it fails
2021016 - 'Prometheus Stats' of dashboard 'Prometheus Overview' miss data on console compared with Grafana
2021017 - 404 page not found error on knative eventing page
2021031 - QE - Fix the topology CI scripts
2021048 - [RFE] Added MAC Spoof check
2021053 - Metallb operator presented as community operator
2021067 - Extensive number of requests from storage version operator in cluster
2021081 - Missing PolicyGenTemplate for configuring Local Storage Operator LocalVolumes
2021135 - [azure-file-csi-driver] "make unit-test" returns non-zero code, but tests pass
2021141 - Cluster should allow a fast rollout of kube-apiserver is failing on single node
2021151 - Sometimes the DU node does not get the performance profile configuration applied and MachineConfigPool stays stuck in Updating
2021152 - imagePullPolicy is "Always" for ptp operator images
2021191 - Project admins should be able to list available network attachment defintions
2021205 - Invalid URL in git import form causes validation to not happen on URL change
2021322 - cluster-api-provider-azure should populate purchase plan information
2021337 - Dynamic Plugins: ResourceLink doesn't render when passed a groupVersionKind
2021364 - Installer requires invalid AWS permission s3:GetBucketReplication
2021400 - Bump documentationBaseURL to 4.10
2021405 - [e2e][automation] VM creation wizard Cloud Init editor
2021433 - "[sig-builds][Feature:Builds][pullsearch] docker build where the registry is not specified" test fail permanently on disconnected
2021466 - [e2e][automation] Windows guest tool mount
2021544 - OCP 4.6.44 - Ingress VIP assigned as secondary IP in ovs-if-br-ex and added to resolv.conf as nameserver
2021551 - Build is not recognizing the USER group from an s2i image
2021607 - Unable to run openshift-install with a vcenter hostname that begins with a numeric character
2021629 - api request counts for current hour are incorrect
2021632 - [UI] Clicking on odf-operator breadcrumb from StorageCluster details page displays empty page
2021693 - Modals assigned modal-lg class are no longer the correct width
2021724 - Observe > Dashboards: Graph lines are not visible when obscured by other lines
2021731 - CCO occasionally down, reporting networksecurity.googleapis.com API as disabled
2021936 - Kubelet version in RPMs should be using Dockerfile label instead of git tags
2022050 - [BM][IPI] Failed during bootstrap - unable to read client-key /var/lib/kubelet/pki/kubelet-client-current.pem
2022053 - dpdk application with vhost-net is not able to start
2022114 - Console logging every proxy request
2022144 - 1 of 3 ovnkube-master pods stuck in clbo after ipi bm deployment - dualstack (Intermittent)
2022251 - wait interval in case of a failed upload due to 403 is unnecessarily long
2022399 - MON_DISK_LOW troubleshooting guide link when clicked, gives 404 error .
2022447 - ServiceAccount in manifests conflicts with OLM
2022502 - Patternfly tables with a checkbox column are not displaying correctly because of conflicting css rules.
2022509 - getOverrideForManifest does not check manifest.GVK.Group
2022536 - WebScale: duplicate ecmp next hop error caused by multiple of the same gateway IPs in ovnkube cache
2022612 - no namespace field for "Kubernetes / Compute Resources / Namespace (Pods)" admin console dashboard
2022627 - Machine object not picking up external FIP added to an openstack vm
2022646 - configure-ovs.sh failure - Error: unknown connection 'WARN:'
2022707 - Observe / monitoring dashboard shows forbidden errors on Dev Sandbox
2022801 - Add Sprint 210 translations
2022811 - Fix kubelet log rotation file handle leak
2022812 - [SCALE] ovn-kube service controller executes unnecessary load balancer operations
2022824 - Large number of sessions created by vmware-vsphere-csi-driver-operator during e2e tests
2022880 - Pipeline renders with minor visual artifact with certain task dependencies
2022886 - Incorrect URL in operator description
2023042 - CRI-O filters custom runtime allowed annotation when both custom workload and custom runtime sections specified under the config
2023060 - [e2e][automation] Windows VM with CDROM migration
2023077 - [e2e][automation] Home Overview Virtualization status
2023090 - [e2e][automation] Examples of Import URL for VM templates
2023102 - [e2e][automation] Cloudinit disk of VM from custom template
2023216 - ACL for a deleted egressfirewall still present on node join switch
2023228 - Remove Tech preview badge on Trigger components 1.6 OSP on OCP 4.9
2023238 - [sig-devex][Feature:ImageEcosystem][python][Slow] hot deploy for openshift python image Django example should work with hot deploy
2023342 - SCC admission should take ephemeralContainers into account
2023356 - Devfiles can't be loaded in Safari on macOS (403 - Forbidden)
2023434 - Update Azure Machine Spec API to accept Marketplace Images
2023500 - Latency experienced while waiting for volumes to attach to node
2023522 - can't remove package from index: database is locked
2023560 - "Network Attachment Definitions" has no project field on the top in the list view
2023592 - [e2e][automation] add mac spoof check for nad
2023604 - ACL violation when deleting a provisioning-configuration resource
2023607 - console returns blank page when normal user without any projects visit Installed Operators page
2023638 - Downgrade support level for extended control plane integration to Dev Preview
2023657 - inconsistent behaviours of adding ssh key on rhel node between 4.9 and 4.10
2023675 - Changing CNV Namespace
2023779 - Fix Patch 104847 in 4.9
2023781 - initial hardware devices is not loading in wizard
2023832 - CCO updates lastTransitionTime for non-Status changes
2023839 - Bump recommended FCOS to 34.20211031.3.0
2023865 - Console css overrides prevent dynamic plug-in PatternFly tables from displaying correctly
2023950 - make test-e2e-operator on kubernetes-nmstate results in failure to pull image from "registry:5000" repository
2023985 - [4.10] OVN idle service cannot be accessed after upgrade from 4.8
2024055 - External DNS added extra prefix for the TXT record
2024108 - Occasionally node remains in SchedulingDisabled state even after update has been completed sucessfully
2024190 - e2e-metal UPI is permafailing with inability to find rhcos.json
2024199 - 400 Bad Request error for some queries for the non admin user
2024220 - Cluster monitoring checkbox flickers when installing Operator in all-namespace mode
2024262 - Sample catalog is not displayed when one API call to the backend fails
2024309 - cluster-etcd-operator: defrag controller needs to provide proper observability
2024316 - modal about support displays wrong annotation
2024328 - [oVirt / RHV] PV disks are lost when machine deleted while node is disconnected
2024399 - Extra space is in the translated text of "Add/Remove alternate service" on Create Route page
2024448 - When ssh_authorized_keys is empty in form view it should not appear in yaml view
2024493 - Observe > Alerting > Alerting rules page throws error trying to destructure undefined
2024515 - test-blocker: Ceph-storage-plugin tests failing
2024535 - hotplug disk missing OwnerReference
2024537 - WINDOWS_IMAGE_LINK does not refer to windows cloud image
2024547 - Detail page is breaking for namespace store , backing store and bucket class.
2024551 - KMS resources not getting created for IBM FlashSystem storage
2024586 - Special Resource Operator(SRO) - Empty image in BuildConfig when using RT kernel
2024613 - pod-identity-webhook starts without tls
2024617 - vSphere CSI tests constantly failing with Rollout of the monitoring stack failed and is degraded
2024665 - Bindable services are not shown on topology
2024731 - linuxptp container: unnecessary checking of interfaces
2024750 - i18n some remaining OLM items
2024804 - gcp-pd-csi-driver does not use trusted-ca-bundle when cluster proxy configured
2024826 - [RHOS/IPI] Masters are not joining a clusters when installing on OpenStack
2024841 - test Keycloak with latest tag
2024859 - Not able to deploy an existing image from private image registry using developer console
2024880 - Egress IP breaks when network policies are applied
2024900 - Operator upgrade kube-apiserver
2024932 - console throws "Unauthorized" error after logging out
2024933 - openshift-sync plugin does not sync existing secrets/configMaps on start up
2025093 - Installer does not honour diskformat specified in storage policy and defaults to zeroedthick
2025230 - ClusterAutoscalerUnschedulablePods should not be a warning
2025266 - CreateResource route has exact prop which need to be removed
2025301 - [e2e][automation] VM actions availability in different VM states
2025304 - overwrite storage section of the DV spec instead of the pvc section
2025431 - [RFE]Provide specific windows source link
2025458 - [IPI-AWS] cluster-baremetal-operator pod in a crashloop state after patching from 4.7.21 to 4.7.36
2025464 - [aws] openshift-install gather bootstrap collects logs for bootstrap and only one master node
2025467 - [OVN-K][ETP=local] Host to service backed by ovn pods doesn't work for ExternalTrafficPolicy=local
2025481 - Update VM Snapshots UI
2025488 - [DOCS] Update the doc for nmstate operator installation
2025592 - ODC 4.9 supports invalid devfiles only
2025765 - It should not try to load from storageProfile after unchecking "Apply optimized StorageProfile settings"
2025767 - VMs orphaned during machineset scaleup
2025770 - [e2e] non-priv seems looking for v2v-vmware configMap in ns "kubevirt-hyperconverged" while using customize wizard
2025788 - [IPI on azure]Pre-check on IPI Azure, should check VM Size's vCPUsAvailable instead of vCPUs for the sku.
2025821 - Make "Network Attachment Definitions" available to regular user
2025823 - The console nav bar ignores plugin separator in existing sections
2025830 - CentOS capitalizaion is wrong
2025837 - Warn users that the RHEL URL expire
2025884 - External CCM deploys openstack-cloud-controller-manager from quay.io/openshift/origin-*
2025903 - [UI] RoleBindings tab doesn't show correct rolebindings
2026104 - [sig-imageregistry][Feature:ImageAppend] Image append should create images by appending them [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2026178 - OpenShift Alerting Rules Style-Guide Compliance
2026209 - Updation of task is getting failed (tekton hub integration)
2026223 - Internal error occurred: failed calling webhook "ptpconfigvalidationwebhook.openshift.io"
2026321 - [UPI on Azure] Shall we remove allowedValue about VMSize in ARM templates
2026343 - [upgrade from 4.5 to 4.6] .status.connectionState.address of catsrc community-operators is not correct
2026352 - Kube-Scheduler revision-pruner fail during install of new cluster
2026374 - aws-pod-identity-webhook go.mod version out of sync with build environment
2026383 - Error when rendering custom Grafana dashboard through ConfigMap
2026387 - node tuning operator metrics endpoint serving old certificates after certificate rotation
2026396 - Cachito Issues: sriov-network-operator Image build failure
2026488 - openshift-controller-manager - delete event is repeating pathologically
2026489 - ThanosRuleRuleEvaluationLatencyHigh alerts when a big quantity of alerts defined.
2026560 - Cluster-version operator does not remove unrecognized volume mounts
2026699 - fixed a bug with missing metadata
2026813 - add Mellanox CX-6 Lx DeviceID 101f NIC support in SR-IOV Operator
2026898 - Description/details are missing for Local Storage Operator
2027132 - Use the specific icon for Fedora and CentOS template
2027238 - "Node Exporter / USE Method / Cluster" CPU utilization graph shows incorrect legend
2027272 - KubeMemoryOvercommit alert should be human readable
2027281 - [Azure] External-DNS cannot find the private DNS zone in the resource group
2027288 - Devfile samples can't be loaded after fixing it on Safari (redirect caching issue)
2027299 - The status of checkbox component is not revealed correctly in code
2027311 - K8s watch hooks do not work when fetching core resources
2027342 - Alert ClusterVersionOperatorDown is firing on OpenShift Container Platform after ca certificate rotation
2027363 - The azure-file-csi-driver and azure-file-csi-driver-operator don't use the downstream images
2027387 - [IBMCLOUD] Terraform ibmcloud-provider buffers entirely the qcow2 image causing spikes of 5GB of RAM during installation
2027498 - [IBMCloud] SG Name character length limitation
2027501 - [4.10] Bootimage bump tracker
2027524 - Delete Application doesn't delete Channels or Brokers
2027563 - e2e/add-flow-ci.feature fix accessibility violations
2027585 - CVO crashes when changing spec.upstream to a cincinnati graph which includes invalid conditional edges
2027629 - Gather ValidatingWebhookConfiguration and MutatingWebhookConfiguration resource definitions
2027685 - openshift-cluster-csi-drivers pods crashing on PSI
2027745 - default samplesRegistry prevents the creation of imagestreams when registrySources.allowedRegistries is enforced
2027824 - ovnkube-master CrashLoopBackoff: panic: Expected slice or struct but got string
2027917 - No settings in hostfirmwaresettings and schema objects for masters
2027927 - sandbox creation fails due to obsolete option in /etc/containers/storage.conf
2027982 - nncp stucked at ConfigurationProgressing
2028019 - Max pending serving CSRs allowed in cluster machine approver is not right for UPI clusters
2028024 - After deleting a SpecialResource, the node is still tagged although the driver is removed
2028030 - Panic detected in cluster-image-registry-operator pod
2028042 - Desktop viewer for Windows VM shows "no Service for the RDP (Remote Desktop Protocol) can be found"
2028054 - Cloud controller manager operator can't get leader lease when upgrading from 4.8 up to 4.9
2028106 - [RFE] Use dynamic plugin actions for kubevirt plugin
2028141 - Console tests doesn't pass on Node.js 15 and 16
2028160 - Remove i18nKey in network-policy-peer-selectors.tsx
2028162 - Add Sprint 210 translations
2028170 - Remove leading and trailing whitespace
2028174 - Add Sprint 210 part 2 translations
2028187 - Console build doesn't pass on Node.js 16 because node-sass doesn't support it
2028217 - Cluster-version operator does not default Deployment replicas to one
2028240 - Multiple CatalogSources causing higher CPU use than necessary
2028268 - Password parameters are listed in FirmwareSchema in spite that cannot and shouldn't be set in HostFirmwareSettings
2028325 - disableDrain should be set automatically on SNO
2028484 - AWS EBS CSI driver's livenessprobe does not respect operator's loglevel
2028531 - Missing netFilter to the list of parameters when platform is OpenStack
2028610 - Installer doesn't retry on GCP rate limiting
2028685 - LSO repeatedly reports errors while diskmaker-discovery pod is starting
2028695 - destroy cluster does not prune bootstrap instance profile
2028731 - The containerruntimeconfig controller has wrong assumption regarding the number of containerruntimeconfigs
2028802 - CRI-O panic due to invalid memory address or nil pointer dereference
2028816 - VLAN IDs not released on failures
2028881 - Override not working for the PerformanceProfile template
2028885 - Console should show an error context if it logs an error object
2028949 - Masthead dropdown item hover text color is incorrect
2028963 - Whereabouts should reconcile stranded IP addresses
2029034 - enabling ExternalCloudProvider leads to inoperative cluster
2029178 - Create VM with wizard - page is not displayed
2029181 - Missing CR from PGT
2029273 - wizard is not able to use if project field is "All Projects"
2029369 - Cypress tests github rate limit errors
2029371 - patch pipeline--worker nodes unexpectedly reboot during scale out
2029394 - missing empty text for hardware devices at wizard review
2029414 - Alibaba Disk snapshots with XFS filesystem cannot be used
2029416 - Alibaba Disk CSI driver does not use credentials provided by CCO / ccoctl
2029521 - EFS CSI driver cannot delete volumes under load
2029570 - Azure Stack Hub: CSI Driver does not use user-ca-bundle
2029579 - Clicking on an Application which has a Helm Release in it causes an error
2029644 - New resource FirmwareSchema - reset_required exists for Dell machines and doesn't for HPE
2029645 - Sync upstream 1.15.0 downstream
2029671 - VM action "pause" and "clone" should be disabled while VM disk is still being importing
2029742 - [ovn] Stale lr-policy-list and snat rules left for egressip
2029750 - cvo keep restart due to it fail to get feature gate value during the initial start stage
2029785 - CVO panic when an edge is included in both edges and conditionaledges
2029843 - Downstream ztp-site-generate-rhel8 4.10 container image missing content(/home/ztp)
2030003 - HFS CRD: Attempt to set Integer parameter to not-numeric string value - no error
2030029 - [4.10][goroutine]Namespace stuck terminating: Failed to delete all resource types, 1 remaining: unexpected items still remain in namespace
2030228 - Fix StorageSpec resources field to use correct API
2030229 - Mirroring status card reflect wrong data
2030240 - Hide overview page for non-privileged user
2030305 - Export App job do not completes
2030347 - kube-state-metrics exposes metrics about resource annotations
2030364 - Shared resource CSI driver monitoring is not setup correctly
2030488 - Numerous Azure CI jobs are Failing with Partially Rendered machinesets
2030534 - Node selector/tolerations rules are evaluated too early
2030539 - Prometheus is not highly available
2030556 - Don't display Description or Message fields for alerting rules if those annotations are missing
2030568 - Operator installation fails to parse operatorframework.io/initialization-resource annotation
2030574 - console service uses older "service.alpha.openshift.io" for the service serving certificates.
2030677 - BOND CNI: There is no option to configure MTU on a Bond interface
2030692 - NPE in PipelineJobListener.upsertWorkflowJob
2030801 - CVE-2021-44716 golang: net/http: limit growth of header canonicalization cache
2030806 - CVE-2021-44717 golang: syscall: don't close fd 0 on ForkExec error
2030847 - PerformanceProfile API version should be v2
2030961 - Customizing the OAuth server URL does not apply to upgraded cluster
2031006 - Application name input field is not autofocused when user selects "Create application"
2031012 - Services of type loadbalancer do not work if the traffic reaches the node from an interface different from br-ex
2031040 - Error screen when open topology sidebar for a Serverless / knative service which couldn't be started
2031049 - [vsphere upi] pod machine-config-operator cannot be started due to panic issue
2031057 - Topology sidebar for Knative services shows a small pod ring with "0 undefined" as tooltip
2031060 - Failing CSR Unit test due to expired test certificate
2031085 - ovs-vswitchd running more threads than expected
2031141 - Some pods not able to reach k8s api svc IP
198.223.0.1\n2031228 - CVE-2021-43813 grafana: directory traversal vulnerability\n2031502 - [RFE] New common templates crash the ui\n2031685 - Duplicated forward upstreams should be removed from the dns operator\n2031699 - The displayed ipv6 address of a dns upstream should be case sensitive\n2031797 - [RFE] Order and text of Boot source type input are wrong\n2031826 - CI tests needed to confirm driver-toolkit image contents\n2031831 - OCP Console - Global CSS overrides affecting dynamic plugins\n2031839 - Starting from Go 1.17 invalid certificates will render a cluster dysfunctional\n2031858 - GCP beta-level Role (was: CCO occasionally down, reporting networksecurity.googleapis.com API as disabled)\n2031875 - [RFE]: Provide online documentation for the SRO CRD (via oc explain)\n2031926 - [ipv6dualstack] After SVC conversion from single stack only to RequireDualStack, cannot curl NodePort from the node itself\n2032006 - openshift-gitops-application-controller-0 failed to schedule with sufficient node allocatable resource\n2032111 - arm64 cluster, create project and deploy the example deployment, pod is CrashLoopBackOff due to the image is built on linux+amd64\n2032141 - open the alertrule link in new tab, got empty page\n2032179 - [PROXY] external dns pod cannot reach to cloud API in the cluster behind a proxy\n2032296 - Cannot create machine with ephemeral disk on Azure\n2032407 - UI will show the default openshift template wizard for HANA template\n2032415 - Templates page - remove \"support level\" badge and add \"support level\" column which should not be hard coded\n2032421 - [RFE] UI integration with automatic updated images\n2032516 - Not able to import git repo with .devfile.yaml\n2032521 - openshift-installer intermittent failure on AWS with \"Error: Provider produced inconsistent result after apply\" when creating the aws_vpc_dhcp_options_association resource\n2032547 - hardware devices table have filter when table is empty\n2032565 - Deploying compressed 
files with a MachineConfig resource degrades the MachineConfigPool\n2032566 - Cluster-ingress-router does not support Azure Stack\n2032573 - Adopting enforces deploy_kernel/ramdisk which does not work with deploy_iso\n2032589 - DeploymentConfigs ignore resolve-names annotation\n2032732 - Fix styling conflicts due to recent console-wide CSS changes\n2032831 - Knative Services and Revisions are not shown when Service has no ownerReference\n2032851 - Networking is \"not available\" in Virtualization Overview\n2032926 - Machine API components should use K8s 1.23 dependencies\n2032994 - AddressPool IP is not allocated to service external IP wtih aggregationLength 24\n2032998 - Can not achieve 250 pods/node with OVNKubernetes in a multiple worker node cluster\n2033013 - Project dropdown in user preferences page is broken\n2033044 - Unable to change import strategy if devfile is invalid\n2033098 - Conjunction in ProgressiveListFooter.tsx is not translatable\n2033111 - IBM VPC operator library bump removed global CLI args\n2033138 - \"No model registered for Templates\" shows on customize wizard\n2033215 - Flaky CI: crud/other-routes.spec.ts fails sometimes with an cypress ace/a11y AssertionError: 1 accessibility violation was detected\n2033239 - [IPI on Alibabacloud] \u0027openshift-install\u0027 gets the wrong region (\u2018cn-hangzhou\u2019) selected\n2033257 - unable to use configmap for helm charts\n2033271 - [IPI on Alibabacloud] destroying cluster succeeded, but the resource group deletion wasn\u2019t triggered\n2033290 - Product builds for console are failing\n2033382 - MAPO is missing machine annotations\n2033391 - csi-driver-shared-resource-operator sets unused CVO-manifest annotations\n2033403 - Devfile catalog does not show provider information\n2033404 - Cloud event schema is missing source type and resource field is using wrong value\n2033407 - Secure route data is not pre-filled in edit flow form\n2033422 - CNO not allowing LGW conversion from SGW in 
runtime\n2033434 - Offer darwin/arm64 oc in clidownloads\n2033489 - CCM operator failing on baremetal platform\n2033518 - [aws-efs-csi-driver]Should not accept invalid FSType in sc for AWS EFS driver\n2033524 - [IPI on Alibabacloud] interactive installer cannot list existing base domains\n2033536 - [IPI on Alibabacloud] bootstrap complains invalid value for alibabaCloud.resourceGroupID when updating \"cluster-infrastructure-02-config.yml\" status, which leads to bootstrap failed and all master nodes NotReady\n2033538 - Gather Cost Management Metrics Custom Resource\n2033579 - SRO cannot update the special-resource-lifecycle ConfigMap if the data field is undefined\n2033587 - Flaky CI test project-dashboard.scenario.ts: Resource Quotas Card was not found on project detail page\n2033634 - list-style-type: disc is applied to the modal dropdowns\n2033720 - Update samples in 4.10\n2033728 - Bump OVS to 2.16.0-33\n2033729 - remove runtime request timeout restriction for azure\n2033745 - Cluster-version operator makes upstream update service / Cincinnati requests more frequently than intended\n2033749 - Azure Stack Terraform fails without Local Provider\n2033750 - Local volume should pull multi-arch image for kube-rbac-proxy\n2033751 - Bump kubernetes to 1.23\n2033752 - make verify fails due to missing yaml-patch\n2033784 - set kube-apiserver degraded=true if webhook matches a virtual resource\n2034004 - [e2e][automation] add tests for VM snapshot improvements\n2034068 - [e2e][automation] Enhance tests for 4.10 downstream\n2034087 - [OVN] EgressIP was assigned to the node which is not egress node anymore\n2034097 - [OVN] After edit EgressIP object, the status is not correct\n2034102 - [OVN] Recreate the deleted EgressIP object got InvalidEgressIP warning\n2034129 - blank page returned when clicking \u0027Get started\u0027 button\n2034144 - [OVN AWS] ovn-kube egress IP monitoring cannot detect the failure on ovn-k8s-mp0\n2034153 - CNO does not verify MTU migration for 
OpenShiftSDN\n2034155 - [OVN-K] [Multiple External Gateways] Per pod SNAT is disabled\n2034170 - Use function.knative.dev for Knative Functions related labels\n2034190 - unable to add new VirtIO disks to VMs\n2034192 - Prometheus fails to insert reporting metrics when the sample limit is met\n2034243 - regular user cant load template list\n2034245 - installing a cluster on aws, gcp always fails with \"Error: Incompatible provider version\"\n2034248 - GPU/Host device modal is too small\n2034257 - regular user `Create VM` missing permissions alert\n2034285 - [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources [Serial] [Suite:openshift/conformance/serial]\n2034287 - do not block upgrades if we can\u0027t create storageclass in 4.10 in vsphere\n2034300 - Du validator policy is NonCompliant after DU configuration completed\n2034319 - Negation constraint is not validating packages\n2034322 - CNO doesn\u0027t pick up settings required when ExternalControlPlane topology\n2034350 - The CNO should implement the Whereabouts IP reconciliation cron job\n2034362 - update description of disk interface\n2034398 - The Whereabouts IPPools CRD should include the podref field\n2034409 - Default CatalogSources should be pointing to 4.10 index images\n2034410 - Metallb BGP, BFD: prometheus is not scraping the frr metrics\n2034413 - cloud-network-config-controller fails to init with secret \"cloud-credentials\" not found in manual credential mode\n2034460 - Summary: cloud-network-config-controller does not account for different environment\n2034474 - Template\u0027s boot source is \"Unknown source\" before and after set enableCommonBootImageImport to true\n2034477 - [OVN] Multiple EgressIP objects configured, EgressIPs weren\u0027t working properly\n2034493 - Change cluster version operator log level\n2034513 - [OVN] After update one EgressIP in EgressIP object, one internal IP lost from lr-policy-list\n2034527 - IPI deployment 
fails \u0027timeout reached while inspecting the node\u0027 when provisioning network ipv6\n2034528 - [IBM VPC] volumeBindingMode should be WaitForFirstConsumer\n2034534 - Update ose-machine-api-provider-openstack images to be consistent with ART\n2034537 - Update team\n2034559 - KubeAPIErrorBudgetBurn firing outside recommended latency thresholds\n2034563 - [Azure] create machine with wrong ephemeralStorageLocation value success\n2034577 - Current OVN gateway mode should be reflected on node annotation as well\n2034621 - context menu not popping up for application group\n2034622 - Allow volume expansion by default in vsphere CSI storageclass 4.10\n2034624 - Warn about unsupported CSI driver in vsphere operator\n2034647 - missing volumes list in snapshot modal\n2034648 - Rebase openshift-controller-manager to 1.23\n2034650 - Rebase openshift/builder to 1.23\n2034705 - vSphere: storage e2e tests logging configuration data\n2034743 - EgressIP: assigning the same egress IP to a second EgressIP object after a ovnkube-master restart does not fail. 
\n2034766 - Special Resource Operator(SRO) - no cert-manager pod created in dual stack environment\n2034785 - ptpconfig with summary_interval cannot be applied\n2034823 - RHEL9 should be starred in template list\n2034838 - An external router can inject routes if no service is added\n2034839 - Jenkins sync plugin does not synchronize ConfigMap having label role=jenkins-agent\n2034879 - Lifecycle hook\u0027s name and owner shouldn\u0027t be allowed to be empty\n2034881 - Cloud providers components should use K8s 1.23 dependencies\n2034884 - ART cannot build the image because it tries to download controller-gen\n2034889 - `oc adm prune deployments` does not work\n2034898 - Regression in recently added Events feature\n2034957 - update openshift-apiserver to kube 1.23.1\n2035015 - ClusterLogForwarding CR remains stuck remediating forever\n2035093 - openshift-cloud-network-config-controller never runs on Hypershift cluster\n2035141 - [RFE] Show GPU/Host devices in template\u0027s details tab\n2035146 - \"kubevirt-plugin~PVC cannot be empty\" shows on add-disk modal while adding existing PVC\n2035167 - [cloud-network-config-controller] unable to deleted cloudprivateipconfig when deleting\n2035199 - IPv6 support in mtu-migration-dispatcher.yaml\n2035239 - e2e-metal-ipi-virtualmedia tests are permanently failing\n2035250 - Peering with ebgp peer over multi-hops doesn\u0027t work\n2035264 - [RFE] Provide a proper message for nonpriv user who not able to add PCI devices\n2035315 - invalid test cases for AWS passthrough mode\n2035318 - Upgrade management workflow needs to allow custom upgrade graph path for disconnected env\n2035321 - Add Sprint 211 translations\n2035326 - [ExternalCloudProvider] installation with additional network on workers fails\n2035328 - Ccoctl does not ignore credentials request manifest marked for deletion\n2035333 - Kuryr orphans ports on 504 errors from Neutron\n2035348 - Fix two grammar issues in kubevirt-plugin.json strings\n2035393 - oc set data 
--dry-run=server makes persistent changes to configmaps and secrets\n2035409 - OLM E2E test depends on operator package that\u0027s no longer published\n2035439 - SDN Automatic assignment EgressIP on GCP returned node IP adress not egressIP address\n2035453 - [IPI on Alibabacloud] 2 worker machines stuck in Failed phase due to connection to \u0027ecs-cn-hangzhou.aliyuncs.com\u0027 timeout, although the specified region is \u0027us-east-1\u0027\n2035454 - [IPI on Alibabacloud] the OSS bucket created during installation for image registry is not deleted after destroying the cluster\n2035467 - UI: Queried metrics can\u0027t be ordered on Oberve-\u003eMetrics page\n2035494 - [SDN Migration]ovnkube-node pods CrashLoopBackOff after sdn migrated to ovn for RHEL workers\n2035515 - [IBMCLOUD] allowVolumeExpansion should be true in storage class\n2035602 - [e2e][automation] add tests for Virtualization Overview page cards\n2035703 - Roles -\u003e RoleBindings tab doesn\u0027t show RoleBindings correctly\n2035704 - RoleBindings list page filter doesn\u0027t apply\n2035705 - Azure \u0027Destroy cluster\u0027 get stuck when the cluster resource group is already not existing. 
\n2035757 - [IPI on Alibabacloud] one master node turned NotReady which leads to installation failed\n2035772 - AccessMode and VolumeMode is not reserved for customize wizard\n2035847 - Two dashes in the Cronjob / Job pod name\n2035859 - the output of opm render doesn\u0027t contain olm.constraint which is defined in dependencies.yaml\n2035882 - [BIOS setting values] Create events for all invalid settings in spec\n2035903 - One redundant capi-operator credential requests in \u201coc adm extract --credentials-requests\u201d\n2035910 - [UI] Manual approval options are missing after ODF 4.10 installation starts when Manual Update approval is chosen\n2035927 - Cannot enable HighNodeUtilization scheduler profile\n2035933 - volume mode and access mode are empty in customize wizard review tab\n2035969 - \"ip a \" shows \"Error: Peer netns reference is invalid\" after create test pods\n2035986 - Some pods under kube-scheduler/kube-controller-manager are using the deprecated annotation\n2036006 - [BIOS setting values] Attempt to set Integer parameter results in preparation error\n2036029 - New added cloud-network-config operator doesn\u2019t supported aws sts format credential\n2036096 - [azure-file-csi-driver] there are no e2e tests for NFS backend\n2036113 - cluster scaling new nodes ovs-configuration fails on all new nodes\n2036567 - [csi-driver-nfs] Upstream merge: Bump k8s libraries to 1.23\n2036569 - [cloud-provider-openstack] Upstream merge: Bump k8s libraries to 1.23\n2036577 - OCP 4.10 nightly builds from 4.10.0-0.nightly-s390x-2021-12-18-034912 to 4.10.0-0.nightly-s390x-2022-01-11-233015 fail to upgrade from OCP 4.9.11 and 4.9.12 for network type OVNKubernetes for zVM hypervisor environments\n2036622 - sdn-controller crashes when restarted while a previous egress IP assignment exists\n2036717 - Valid AlertmanagerConfig custom resource with valid a mute time interval definition is rejected\n2036826 - `oc adm prune deployments` can prune the RC/RS\n2036827 - The 
ccoctl still accepts CredentialsRequests without ServiceAccounts on GCP platform\n2036861 - kube-apiserver is degraded while enable multitenant\n2036937 - Command line tools page shows wrong download ODO link\n2036940 - oc registry login fails if the file is empty or stdout\n2036951 - [cluster-csi-snapshot-controller-operator] proxy settings is being injected in container\n2036989 - Route URL copy to clipboard button wraps to a separate line by itself\n2036990 - ZTP \"DU Done inform policy\" never becomes compliant on multi-node clusters\n2036993 - Machine API components should use Go lang version 1.17\n2037036 - The tuned profile goes into degraded status and ksm.service is displayed in the log. \n2037061 - aws and gcp CredentialsRequest manifests missing ServiceAccountNames list for cluster-api\n2037073 - Alertmanager container fails to start because of startup probe never being successful\n2037075 - Builds do not support CSI volumes\n2037167 - Some log level in ibm-vpc-block-csi-controller are hard code\n2037168 - IBM-specific Deployment manifest for package-server-manager should be excluded on non-IBM cluster-profiles\n2037182 - PingSource badge color is not matched with knativeEventing color\n2037203 - \"Running VMs\" card is too small in Virtualization Overview\n2037209 - [IPI on Alibabacloud] worker nodes are put in the default resource group unexpectedly\n2037237 - Add \"This is a CD-ROM boot source\" to customize wizard\n2037241 - default TTL for noobaa cache buckets should be 0\n2037246 - Cannot customize auto-update boot source\n2037276 - [IBMCLOUD] vpc-node-label-updater may fail to label nodes appropriately\n2037288 - Remove stale image reference\n2037331 - Ensure the ccoctl behaviors are similar between aws and gcp on the existing resources\n2037483 - Rbacs for Pods within the CBO should be more restrictive\n2037484 - Bump dependencies to k8s 1.23\n2037554 - Mismatched wave number error message should include the wave numbers that are in 
conflict\n2037622 - [4.10-Alibaba CSI driver][Restore size for volumesnapshot/volumesnapshotcontent is showing as 0 in Snapshot feature for Alibaba platform]\n2037635 - impossible to configure custom certs for default console route in ingress config\n2037637 - configure custom certificate for default console route doesn\u0027t take effect for OCP \u003e= 4.8\n2037638 - Builds do not support CSI volumes as volume sources\n2037664 - text formatting issue in Installed Operators list table\n2037680 - [IPI on Alibabacloud] sometimes operator \u0027cloud-controller-manager\u0027 tells empty VERSION, due to conflicts on listening tcp :8080\n2037689 - [IPI on Alibabacloud] sometimes operator \u0027cloud-controller-manager\u0027 tells empty VERSION, due to conflicts on listening tcp :8080\n2037801 - Serverless installation is failing on CI jobs for e2e tests\n2037813 - Metal Day 1 Networking - networkConfig Field Only Accepts String Format\n2037856 - use lease for leader election\n2037891 - 403 Forbidden error shows for all the graphs in each grafana dashboard after upgrade from 4.9 to 4.10\n2037903 - Alibaba Cloud: delete-ram-user requires the credentials-requests\n2037904 - upgrade operator deployment failed due to memory limit too low for manager container\n2038021 - [4.10-Alibaba CSI driver][Default volumesnapshot class is not added/present after successful cluster installation]\n2038034 - non-privileged user cannot see auto-update boot source\n2038053 - Bump dependencies to k8s 1.23\n2038088 - Remove ipa-downloader references\n2038160 - The `default` project missed the annotation : openshift.io/node-selector: \"\"\n2038166 - Starting from Go 1.17 invalid certificates will render a cluster non-functional\n2038196 - must-gather is missing collecting some metal3 resources\n2038240 - Error when configuring a file using permissions bigger than decimal 511 (octal 0777)\n2038253 - Validator Policies are long lived\n2038272 - Failures to build a PreprovisioningImage are not 
reported\n2038384 - Azure Default Instance Types are Incorrect\n2038389 - Failing test: [sig-arch] events should not repeat pathologically\n2038412 - Import page calls the git file list unnecessarily twice from GitHub/GitLab/Bitbucket\n2038465 - Upgrade chromedriver to 90.x to support Mac M1 chips\n2038481 - kube-controller-manager-guard and openshift-kube-scheduler-guard pods being deleted and restarted on a cordoned node when drained\n2038596 - Auto egressIP for OVN cluster on GCP: After egressIP object is deleted, egressIP still takes effect\n2038663 - update kubevirt-plugin OWNERS\n2038691 - [AUTH-8] Panic on user login when the user belongs to a group in the IdP side and the group already exists via \"oc adm groups new\"\n2038705 - Update ptp reviewers\n2038761 - Open Observe-\u003eTargets page, wait for a while, page become blank\n2038768 - All the filters on the Observe-\u003eTargets page can\u0027t work\n2038772 - Some monitors failed to display on Observe-\u003eTargets page\n2038793 - [SDN EgressIP] After reboot egress node, the egressip was lost from egress node\n2038827 - should add user containers in /etc/subuid and /etc/subgid to support run pods in user namespaces\n2038832 - New templates for centos stream8 are missing registry suggestions in create vm wizard\n2038840 - [SDN EgressIP]cloud-network-config-controller pod was CrashLoopBackOff after some operation\n2038864 - E2E tests fail because multi-hop-net was not created\n2038879 - All Builds are getting listed in DeploymentConfig under workloads on OpenShift Console\n2038934 - CSI driver operators should use the trusted CA bundle when cluster proxy is configured\n2038968 - Move feature gates from a carry patch to openshift/api\n2039056 - Layout issue with breadcrumbs on API explorer page\n2039057 - Kind column is not wide enough in API explorer page\n2039064 - Bulk Import e2e test flaking at a high rate\n2039065 - Diagnose and fix Bulk Import e2e test that was previously disabled\n2039085 - Cloud 
credential operator configuration failing to apply in hypershift/ROKS clusters\n2039099 - [OVN EgressIP GCP] After reboot egress node, egressip that was previously assigned got lost\n2039109 - [FJ OCP4.10 Bug]: startironic.sh failed to pull the image of image-customization container when behind a proxy\n2039119 - CVO hotloops on Service openshift-monitoring/cluster-monitoring-operator\n2039170 - [upgrade]Error shown on registry operator \"missing the cloud-provider-config configmap\" after upgrade\n2039227 - Improve image customization server parameter passing during installation\n2039241 - Improve image customization server parameter passing during installation\n2039244 - Helm Release revision history page crashes the UI\n2039294 - SDN controller metrics cannot be consumed correctly by prometheus\n2039311 - oc Does Not Describe Build CSI Volumes\n2039315 - Helm release list page should only fetch secrets for deployed charts\n2039321 - SDN controller metrics are not being consumed by prometheus\n2039330 - Create NMState button doesn\u0027t work in OperatorHub web console\n2039339 - cluster-ingress-operator should report Unupgradeable if user has modified the aws resources annotations\n2039345 - CNO does not verify the minimum MTU value for IPv6/dual-stack clusters. 
\n2039359 - `oc adm prune deployments` can\u0027t prune the RS where the associated Deployment no longer exists\n2039382 - gather_metallb_logs does not have execution permission\n2039406 - logout from rest session after vsphere operator sync is finished\n2039408 - Add GCP region northamerica-northeast2 to allowed regions\n2039414 - Cannot see the weights increased for NodeAffinity, InterPodAffinity, TaintandToleration\n2039425 - No need to set KlusterletAddonConfig CR applicationManager-\u003eenabled: true in RAN ztp deployment\n2039491 - oc - git:// protocol used in unit tests\n2039516 - Bump OVN to ovn21.12-21.12.0-25\n2039529 - Project Dashboard Resource Quotas Card empty state test flaking at a high rate\n2039534 - Diagnose and fix Project Dashboard Resource Quotas Card test that was previously disabled\n2039541 - Resolv-prepender script duplicating entries\n2039586 - [e2e] update centos8 to centos stream8\n2039618 - VM created from SAP HANA template leads to 404 page if leave one network parameter empty\n2039619 - [AWS] In tree provisioner storageclass aws disk type should contain \u0027gp3\u0027 and csi provisioner storageclass default aws disk type should be \u0027gp3\u0027\n2039670 - Create PDBs for control plane components\n2039678 - Page goes blank when create image pull secret\n2039689 - [IPI on Alibabacloud] Pay-by-specification NAT is no longer supported\n2039743 - React missing key warning when open operator hub detail page (and maybe others as well)\n2039756 - React missing key warning when open KnativeServing details\n2039770 - Observe dashboard doesn\u0027t react on time-range changes after browser reload when perspective is changed in another tab\n2039776 - Observe dashboard shows nothing if the URL links to an non existing dashboard\n2039781 - [GSS] OBC is not visible by admin of a Project on Console\n2039798 - Contextual binding with Operator backed service creates visual connector instead of Service binding connector\n2039868 - Insights Advisor 
widget is not in the disabled state when the Insights Operator is disabled\n2039880 - Log level too low for control plane metrics\n2039919 - Add E2E test for router compression feature\n2039981 - ZTP for standard clusters installs stalld on master nodes\n2040132 - Flag --port has been deprecated, This flag has no effect now and will be removed in v1.24. You can use --secure-port instead\n2040136 - external-dns-operator pod keeps restarting and reports error: timed out waiting for cache to be synced\n2040143 - [IPI on Alibabacloud] suggest to remove region \"cn-nanjing\" or provide better error message\n2040150 - Update ConfigMap keys for IBM HPCS\n2040160 - [IPI on Alibabacloud] installation fails when region does not support pay-by-bandwidth\n2040285 - Bump build-machinery-go for console-operator to pickup change in yaml-patch repository\n2040357 - bump OVN to ovn-2021-21.12.0-11.el8fdp\n2040376 - \"unknown instance type\" error for supported m6i.xlarge instance\n2040394 - Controller: enqueue the failed configmap till services update\n2040467 - Cannot build ztp-site-generator container image\n2040504 - Change AWS EBS GP3 IOPS in MachineSet doesn\u0027t take affect in OpenShift 4\n2040521 - RouterCertsDegraded certificate could not validate route hostname v4-0-config-system-custom-router-certs.apps\n2040535 - Auto-update boot source is not available in customize wizard\n2040540 - ovs hardware offload: ovsargs format error when adding vf netdev name\n2040603 - rhel worker scaleup playbook failed because missing some dependency of podman\n2040616 - rolebindings page doesn\u0027t load for normal users\n2040620 - [MAPO] Error pulling MAPO image on installation\n2040653 - Topology sidebar warns that another component is updated while rendering\n2040655 - User settings update fails when selecting application in topology sidebar\n2040661 - Different react warnings about updating state on unmounted components when leaving topology\n2040670 - Permafailing CI job: 
periodic-ci-openshift-release-master-nightly-4.10-e2e-gcp-libvirt-cert-rotation\n2040671 - [Feature:IPv6DualStack] most tests are failing in dualstack ipi\n2040694 - Three upstream HTTPClientConfig struct fields missing in the operator\n2040705 - Du policy for standard cluster runs the PTP daemon on masters and workers\n2040710 - cluster-baremetal-operator cannot update BMC subscription CR\n2040741 - Add CI test(s) to ensure that metal3 components are deployed in vSphere, OpenStack and None platforms\n2040782 - Import YAML page blocks input with more then one generateName attribute\n2040783 - The Import from YAML summary page doesn\u0027t show the resource name if created via generateName attribute\n2040791 - Default PGT policies must be \u0027inform\u0027 to integrate with the Lifecycle Operator\n2040793 - Fix snapshot e2e failures\n2040880 - do not block upgrades if we can\u0027t connect to vcenter\n2041087 - MetalLB: MetalLB CR is not upgraded automatically from 4.9 to 4.10\n2041093 - autounattend.xml missing\n2041204 - link to templates in virtualization-cluster-overview inventory card is to all templates\n2041319 - [IPI on Alibabacloud] installation in region \"cn-shanghai\" failed, due to \"Resource alicloud_vswitch CreateVSwitch Failed...InvalidCidrBlock.Overlapped\"\n2041326 - Should bump cluster-kube-descheduler-operator to kubernetes version V1.23\n2041329 - aws and gcp CredentialsRequest manifests missing ServiceAccountNames list for cloud-network-config-controller\n2041361 - [IPI on Alibabacloud] Disable session persistence and removebBandwidth peak of listener\n2041441 - Provision volume with size 3000Gi even if sizeRange: \u0027[10-2000]GiB\u0027 in storageclass on IBM cloud\n2041466 - Kubedescheduler version is missing from the operator logs\n2041475 - React components should have a (mostly) unique name in react dev tools to simplify code analyses\n2041483 - MetallB: quay.io/openshift/origin-kube-rbac-proxy:4.10 deploy Metallb CR is missing 
(controller and speaker pods)\n2041492 - Spacing between resources in inventory card is too small\n2041509 - GCP Cloud provider components should use K8s 1.23 dependencies\n2041510 - cluster-baremetal-operator doesn\u0027t run baremetal-operator\u0027s subscription webhook\n2041541 - audit: ManagedFields are dropped using API not annotation\n2041546 - ovnkube: set election timer at RAFT cluster creation time\n2041554 - use lease for leader election\n2041581 - KubeDescheduler operator log shows \"Use of insecure cipher detected\"\n2041583 - etcd and api server cpu mask interferes with a guaranteed workload\n2041598 - Including CA bundle in Azure Stack cloud config causes MCO failure\n2041605 - Dynamic Plugins: discrepancy in proxy alias documentation/implementation\n2041620 - bundle CSV alm-examples does not parse\n2041641 - Fix inotify leak and kubelet retaining memory\n2041671 - Delete templates leads to 404 page\n2041694 - [IPI on Alibabacloud] installation fails when region does not support the cloud_essd disk category\n2041734 - ovs hwol: VFs are unbind when switchdev mode is enabled\n2041750 - [IPI on Alibabacloud] trying \"create install-config\" with region \"cn-wulanchabu (China (Ulanqab))\" (or \"ap-southeast-6 (Philippines (Manila))\", \"cn-guangzhou (China (Guangzhou))\") failed due to invalid endpoint\n2041763 - The Observe \u003e Alerting pages no longer have their default sort order applied\n2041830 - CI: ovn-kubernetes-master-e2e-aws-ovn-windows is broken\n2041854 - Communities / Local prefs are applied to all the services regardless of the pool, and only one community is applied\n2041882 - cloud-network-config operator can\u0027t work normal on GCP workload identity cluster\n2041888 - Intermittent incorrect build to run correlation, leading to run status updates applied to wrong build, builds stuck in non-terminal phases\n2041926 - [IPI on Alibabacloud] Installer ignores public zone when it does not exist\n2041971 - [vsphere] Reconciliation of 
mutating webhooks didn\u0027t happen\n2041989 - CredentialsRequest manifests being installed for ibm-cloud-managed profile\n2041999 - [PROXY] external dns pod cannot recognize custom proxy CA\n2042001 - unexpectedly found multiple load balancers\n2042029 - kubedescheduler fails to install completely\n2042036 - [IBMCLOUD] \"openshift-install explain installconfig.platform.ibmcloud\" contains not yet supported custom vpc parameters\n2042049 - Seeing warning related to unrecognized feature gate in kubescheduler \u0026 KCM logs\n2042059 - update discovery burst to reflect lots of CRDs on openshift clusters\n2042069 - Revert toolbox to rhcos-toolbox\n2042169 - Can not delete egressnetworkpolicy in Foreground propagation\n2042181 - MetalLB: User should not be allowed add same bgp advertisement twice in BGP address pool\n2042265 - [IBM]\"--scale-down-utilization-threshold\" doesn\u0027t work on IBMCloud\n2042274 - Storage API should be used when creating a PVC\n2042315 - Baremetal IPI deployment with IPv6 control plane and disabled provisioning network fails as the nodes do not pass introspection\n2042366 - Lifecycle hooks should be independently managed\n2042370 - [IPI on Alibabacloud] installer panics when the zone does not have an enhanced NAT gateway\n2042382 - [e2e][automation] CI takes more then 2 hours to run\n2042395 - Add prerequisites for active health checks test\n2042438 - Missing rpms in openstack-installer image\n2042466 - Selection does not happen when switching from Topology Graph to List View\n2042493 - No way to verify if IPs with leading zeros are still valid in the apiserver\n2042567 - insufficient info on CodeReady Containers configuration\n2042600 - Alone, the io.kubernetes.cri-o.Devices option poses a security risk\n2042619 - Overview page of the console is broken for hypershift clusters\n2042655 - [IPI on Alibabacloud] cluster becomes unusable if there is only one kube-apiserver pod running\n2042711 - [IBMCloud] Machine Deletion Hook cannot work on 
IBMCloud\n2042715 - [AliCloud] Machine Deletion Hook cannot work on AliCloud\n2042770 - [IPI on Alibabacloud] with vpcID \u0026 vswitchIDs specified, the installer would still try creating NAT gateway unexpectedly\n2042829 - Topology performance: HPA was fetched for each Deployment (Pod Ring)\n2042851 - Create template from SAP HANA template flow - VM is created instead of a new template\n2042906 - Edit machineset with same machine deletion hook name succeed\n2042960 - azure-file CI fails with \"gid(0) in storageClass and pod fsgroup(1000) are not equal\"\n2043003 - [IPI on Alibabacloud] \u0027destroy cluster\u0027 of a failed installation (bug2041694) stuck after \u0027stage=Nat gateways\u0027\n2043042 - [Serial] [sig-auth][Feature:OAuthServer] [RequestHeaders] [IdP] test RequestHeaders IdP [Suite:openshift/conformance/serial]\n2043043 - Cluster Autoscaler should use K8s 1.23 dependencies\n2043064 - Topology performance: Unnecessary rerenderings in topology nodes (unchanged mobx props)\n2043078 - Favorite system projects not visible in the project selector after toggling \"Show default projects\". \n2043117 - Recommended operators links are erroneously treated as external\n2043130 - Update CSI sidecars to the latest release for 4.10\n2043234 - Missing validation when creating several BGPPeers with the same peerAddress\n2043240 - Sync openshift/descheduler with sigs.k8s.io/descheduler\n2043254 - crio does not bind the security profiles directory\n2043296 - Ignition fails when reusing existing statically-keyed LUKS volume\n2043297 - [4.10] Bootimage bump tracker\n2043316 - RHCOS VM fails to boot on Nutanix AOS\n2043446 - Rebase aws-efs-utils to the latest upstream version. \n2043556 - Add proper ci-operator configuration to ironic and ironic-agent images\n2043577 - DPU network operator\n2043651 - Fix bug with exp. 
backoff working correctly when setting nextCheck in vsphere operator\n2043675 - Too many machines deleted by cluster autoscaler when scaling down\n2043683 - Revert bug 2039344 Ignoring IPv6 addresses against etcd cert validation\n2043709 - Logging flags no longer being bound to command line\n2043721 - Installer bootstrap hosts using outdated kubelet containing bugs\n2043731 - [IBMCloud] terraform outputs missing for ibmcloud bootstrap and worker ips for must-gather\n2043759 - Bump cluster-ingress-operator to k8s.io/api 1.23\n2043780 - Bump router to k8s.io/api 1.23\n2043787 - Bump cluster-dns-operator to k8s.io/api 1.23\n2043801 - Bump CoreDNS to k8s.io/api 1.23\n2043802 - EgressIP stopped working after single egressIP for a netnamespace is switched to the other node of HA pair after the first egress node is shutdown\n2043961 - [OVN-K] If pod creation fails, retry doesn\u0027t work as expected. \n2044201 - Templates golden image parameters names should be supported\n2044244 - Builds are failing after upgrading the cluster with builder image [jboss-webserver-5/jws56-openjdk8-openshift-rhel8]\n2044248 - [IBMCloud][vpc.block.csi.ibm.io]Cluster common user use the storageclass without parameter \u201ccsi.storage.k8s.io/fstype\u201d create pvc,pod successfully but write data to the pod\u0027s volume failed of \"Permission denied\"\n2044303 - [ovn][cloud-network-config-controller] cloudprivateipconfigs ips were left after deleting egressip objects\n2044347 - Bump to kubernetes 1.23.3\n2044481 - collect sharedresource cluster scoped instances with must-gather\n2044496 - Unable to create hardware events subscription - failed to add finalizers\n2044628 - CVE-2022-21673 grafana: Forward OAuth Identity Token can allow users to access some data sources\n2044680 - Additional libovsdb performance and resource consumption fixes\n2044704 - Observe \u003e Alerting pages should not show runbook links in 4.10\n2044717 - [e2e] improve tests for upstream test environment\n2044724 - 
Remove namespace column on VM list page when a project is selected\n2044745 - Upgrading cluster from 4.9 to 4.10 on Azure (ARO) causes the cloud-network-config-controller pod to CrashLoopBackOff\n2044808 - machine-config-daemon-pull.service: use `cp` instead of `cat` when extracting MCD in OKD\n2045024 - CustomNoUpgrade alerts should be ignored\n2045112 - vsphere-problem-detector has missing rbac rules for leases\n2045199 - SnapShot with Disk Hot-plug hangs\n2045561 - Cluster Autoscaler should use the same default Group value as Cluster API\n2045591 - Reconciliation of aws pod identity mutating webhook did not happen\n2045849 - Add Sprint 212 translations\n2045866 - MCO Operator pod spam \"Error creating event\" warning messages in 4.10\n2045878 - Sync upstream 1.16.0 downstream; includes hybrid helm plugin\n2045916 - [IBMCloud] Default machine profile in installer is unreliable\n2045927 - [FJ OCP4.10 Bug]: Podman failed to pull the IPA image due to the loss of proxy environment\n2046025 - [IPI on Alibabacloud] pre-configured alicloud DNS private zone is deleted after destroying cluster, please clarify\n2046137 - oc output for unknown commands is not human readable\n2046296 - When creating multiple consecutive egressIPs on GCP not all of them get assigned to the instance\n2046297 - Bump DB reconnect timeout\n2046517 - In Notification drawer, the \"Recommendations\" header shows when there isn\u0027t any recommendations\n2046597 - Observe \u003e Targets page may show the wrong service monitor is multiple monitors have the same namespace \u0026 label selectors\n2046626 - Allow setting custom metrics for Ansible-based Operators\n2046683 - [AliCloud]\"--scale-down-utilization-threshold\" doesn\u0027t work on AliCloud\n2047025 - Installation fails because of Alibaba CSI driver operator is degraded\n2047190 - Bump Alibaba CSI driver for 4.10\n2047238 - When using communities and localpreferences together, only localpreference gets applied\n2047255 - alibaba: 
resourceGroupID not found\n2047258 - [aws-usgov] fatal error occurred if AMI is not provided for AWS GovCloud regions\n2047317 - Update HELM OWNERS files under Dev Console\n2047455 - [IBM Cloud] Update custom image os type\n2047496 - Add image digest feature\n2047779 - do not degrade cluster if storagepolicy creation fails\n2047927 - \u0027oc get project\u0027 caused \u0027Observed a panic: cannot deep copy core.NamespacePhase\u0027 when AllRequestBodies is used\n2047929 - use lease for leader election\n2047975 - [sig-network][Feature:Router] The HAProxy router should override the route host for overridden domains with a custom value [Skipped:Disconnected] [Suite:openshift/conformance/parallel]\n2048046 - New route annotation to show another URL or hide topology URL decorator doesn\u0027t work for Knative Services\n2048048 - Application tab in User Preferences dropdown menus are too wide. \n2048050 - Topology list view items are not highlighted on keyboard navigation\n2048117 - [IBM]Shouldn\u0027t change status.storage.bucket and status.storage.resourceKeyCRN when update sepc.stroage,ibmcos with invalid value\n2048413 - Bond CNI: Failed to attach Bond NAD to pod\n2048443 - Image registry operator panics when finalizes config deletion\n2048478 - [alicloud] CCM deploys alibaba-cloud-controller-manager from quay.io/openshift/origin-*\n2048484 - SNO: cluster-policy-controller failed to start due to missing serving-cert/tls.crt\n2048598 - Web terminal view is broken\n2048836 - ovs-configure mis-detecting the ipv6 status on IPv4 only cluster causing Deployment failure\n2048891 - Topology page is crashed\n2049003 - 4.10: [IBMCloud] ibm-vpc-block-csi-node does not specify an update strategy, only resource requests, or priority class\n2049043 - Cannot create VM from template\n2049156 - \u0027oc get project\u0027 caused \u0027Observed a panic: cannot deep copy core.NamespacePhase\u0027 when AllRequestBodies is used\n2049886 - Placeholder bug for OCP 4.10.0 metadata 
release\n2049890 - Warning annotation for pods with cpu requests or limits on single-node OpenShift cluster without workload partitioning\n2050189 - [aws-efs-csi-driver] Merge upstream changes since v1.3.2\n2050190 - [aws-ebs-csi-driver] Merge upstream changes since v1.2.0\n2050227 - Installation on PSI fails with: \u0027openstack platform does not have the required standard-attr-tag network extension\u0027\n2050247 - Failing test in periodics: [sig-network] Services should respect internalTrafficPolicy=Local Pod and Node, to Pod (hostNetwork: true) [Feature:ServiceInternalTrafficPolicy] [Skipped:Network/OVNKubernetes] [Suite:openshift/conformance/parallel] [Suite:k8s]\n2050250 - Install fails to bootstrap, complaining about DefragControllerDegraded and sad members\n2050310 - ContainerCreateError when trying to launch large (\u003e500) numbers of pods across nodes\n2050370 - alert data for burn budget needs to be updated to prevent regression\n2050393 - ZTP missing support for local image registry and custom machine config\n2050557 - Can not push images to image-registry when enabling KMS encryption in AlibabaCloud\n2050737 - Remove metrics and events for master port offsets\n2050801 - Vsphere upi tries to access vsphere during manifests generation phase\n2050883 - Logger object in LSO does not log source location accurately\n2051692 - co/image-registry is degrade because ImagePrunerDegraded: Job has reached the specified backoff limit\n2052062 - Whereabouts should implement client-go 1.22+\n2052125 - [4.10] Crio appears to be coredumping in some scenarios\n2052210 - [aws-c2s] kube-apiserver crashloops due to missing cloud config\n2052339 - Failing webhooks will block an upgrade to 4.10 mid-way through the upgrade. 
\n2052458 - [IBM Cloud] ibm-vpc-block-csi-controller does not specify an update strategy, priority class, or only resource requests\n2052598 - kube-scheduler should use configmap lease\n2052599 - kube-controller-manger should use configmap lease\n2052600 - Failed to scaleup RHEL machine against OVN cluster due to jq tool is required by configure-ovs.sh\n2052609 - [vSphere CSI driver Operator] RWX volumes counts metrics `vsphere_rwx_volumes_total` not valid\n2052611 - MetalLB: BGPPeer object does not have ability to set ebgpMultiHop\n2052612 - MetalLB: Webhook Validation: Two BGPPeers instances can have different router ID set. \n2052644 - Infinite OAuth redirect loop post-upgrade to 4.10.0-rc.1\n2052666 - [4.10.z] change gitmodules to rhcos-4.10 branch\n2052756 - [4.10] PVs are not being cleaned up after PVC deletion\n2053175 - oc adm catalog mirror throws \u0027missing signature key\u0027 error when using file://local/index\n2053218 - ImagePull fails with error \"unable to pull manifest from example.com/busy.box:v5 invalid reference format\"\n2053252 - Sidepanel for Connectors/workloads in topology shows invalid tabs\n2053268 - inability to detect static lifecycle failure\n2053314 - requestheader IDP test doesn\u0027t wait for cleanup, causing high failure rates\n2053323 - OpenShift-Ansible BYOH Unit Tests are Broken\n2053339 - Remove dev preview badge from IBM FlashSystem deployment windows\n2053751 - ztp-site-generate container is missing convenience entrypoint\n2053945 - [4.10] Failed to apply sriov policy on intel nics\n2054109 - Missing \"app\" label\n2054154 - RoleBinding in project without subject is causing \"Project access\" page to fail\n2054244 - Latest pipeline run should be listed on the top of the pipeline run list\n2054288 - console-master-e2e-gcp-console is broken\n2054562 - DPU network operator 4.10 branch need to sync with master\n2054897 - Unable to deploy hw-event-proxy operator\n2055193 - e2e-metal-ipi-serial-ovn-ipv6 is failing 
frequently\n2055358 - Summary Interval Hardcoded in PTP Operator if Set in the Global Body Instead of Command Line\n2055371 - Remove Check which enforces summary_interval must match logSyncInterval\n2055689 - [ibm]Operator storage PROGRESSING and DEGRADED is true during fresh install for ocp4.11\n2055894 - CCO mint mode will not work for Azure after sunsetting of Active Directory Graph API\n2056441 - AWS EFS CSI driver should use the trusted CA bundle when cluster proxy is configured\n2056479 - ovirt-csi-driver-node pods are crashing intermittently\n2056572 - reconcilePrecaching error: cannot list resource \"clusterserviceversions\" in API group \"operators.coreos.com\" at the cluster scope\"\n2056629 - [4.10] EFS CSI driver can\u0027t unmount volumes with \"wait: no child processes\"\n2056878 - (dummy bug) ovn-kubernetes ExternalTrafficPolicy still SNATs\n2056928 - Ingresscontroller LB scope change behaviour differs for different values of aws-load-balancer-internal annotation\n2056948 - post 1.23 rebase: regression in service-load balancer reliability\n2057438 - Service Level Agreement (SLA) always show \u0027Unknown\u0027\n2057721 - Fix Proxy support in RHACM 2.4.2\n2057724 - Image creation fails when NMstateConfig CR is empty\n2058641 - [4.10] Pod density test causing problems when using kube-burner\n2059761 - 4.9.23-s390x-machine-os-content manifest invalid when mirroring content for disconnected install\n2060610 - Broken access to public images: Unable to connect to the server: no basic auth credentials\n2060956 - service domain can\u0027t be resolved when networkpolicy is used in OCP 4.10-rc\n\n5. 
References:\n\nhttps://access.redhat.com/security/cve/CVE-2014-3577\nhttps://access.redhat.com/security/cve/CVE-2016-10228\nhttps://access.redhat.com/security/cve/CVE-2017-14502\nhttps://access.redhat.com/security/cve/CVE-2018-20843\nhttps://access.redhat.com/security/cve/CVE-2018-1000858\nhttps://access.redhat.com/security/cve/CVE-2019-8625\nhttps://access.redhat.com/security/cve/CVE-2019-8710\nhttps://access.redhat.com/security/cve/CVE-2019-8720\nhttps://access.redhat.com/security/cve/CVE-2019-8743\nhttps://access.redhat.com/security/cve/CVE-2019-8764\nhttps://access.redhat.com/security/cve/CVE-2019-8766\nhttps://access.redhat.com/security/cve/CVE-2019-8769\nhttps://access.redhat.com/security/cve/CVE-2019-8771\nhttps://access.redhat.com/security/cve/CVE-2019-8782\nhttps://access.redhat.com/security/cve/CVE-2019-8783\nhttps://access.redhat.com/security/cve/CVE-2019-8808\nhttps://access.redhat.com/security/cve/CVE-2019-8811\nhttps://access.redhat.com/security/cve/CVE-2019-8812\nhttps://access.redhat.com/security/cve/CVE-2019-8813\nhttps://access.redhat.com/security/cve/CVE-2019-8814\nhttps://access.redhat.com/security/cve/CVE-2019-8815\nhttps://access.redhat.com/security/cve/CVE-2019-8816\nhttps://access.redhat.com/security/cve/CVE-2019-8819\nhttps://access.redhat.com/security/cve/CVE-2019-8820\nhttps://access.redhat.com/security/cve/CVE-2019-8823\nhttps://access.redhat.com/security/cve/CVE-2019-8835\nhttps://access.redhat.com/security/cve/CVE-2019-8844\nhttps://access.redhat.com/security/cve/CVE-2019-8846\nhttps://access.redhat.com/security/cve/CVE-2019-9169\nhttps://access.redhat.com/security/cve/CVE-2019-13050\nhttps://access.redhat.com/security/cve/CVE-2019-13627\nhttps://access.redhat.com/security/cve/CVE-2019-14889\nhttps://access.redhat.com/security/cve/CVE-2019-15903\nhttps://access.redhat.com/security/cve/CVE-2019-19906\nhttps://access.redhat.com/security/cve/CVE-2019-20454\nhttps://access.redhat.com/security/cve/CVE-2019-20807\nhttps://access.redhat.com/se
curity/cve/CVE-2019-25013\nhttps://access.redhat.com/security/cve/CVE-2020-1730\nhttps://access.redhat.com/security/cve/CVE-2020-3862\nhttps://access.redhat.com/security/cve/CVE-2020-3864\nhttps://access.redhat.com/security/cve/CVE-2020-3865\nhttps://access.redhat.com/security/cve/CVE-2020-3867\nhttps://access.redhat.com/security/cve/CVE-2020-3868\nhttps://access.redhat.com/security/cve/CVE-2020-3885\nhttps://access.redhat.com/security/cve/CVE-2020-3894\nhttps://access.redhat.com/security/cve/CVE-2020-3895\nhttps://access.redhat.com/security/cve/CVE-2020-3897\nhttps://access.redhat.com/security/cve/CVE-2020-3899\nhttps://access.redhat.com/security/cve/CVE-2020-3900\nhttps://access.redhat.com/security/cve/CVE-2020-3901\nhttps://access.redhat.com/security/cve/CVE-2020-3902\nhttps://access.redhat.com/security/cve/CVE-2020-8927\nhttps://access.redhat.com/security/cve/CVE-2020-9802\nhttps://access.redhat.com/security/cve/CVE-2020-9803\nhttps://access.redhat.com/security/cve/CVE-2020-9805\nhttps://access.redhat.com/security/cve/CVE-2020-9806\nhttps://access.redhat.com/security/cve/CVE-2020-9807\nhttps://access.redhat.com/security/cve/CVE-2020-9843\nhttps://access.redhat.com/security/cve/CVE-2020-9850\nhttps://access.redhat.com/security/cve/CVE-2020-9862\nhttps://access.redhat.com/security/cve/CVE-2020-9893\nhttps://access.redhat.com/security/cve/CVE-2020-9894\nhttps://access.redhat.com/security/cve/CVE-2020-9895\nhttps://access.redhat.com/security/cve/CVE-2020-9915\nhttps://access.redhat.com/security/cve/CVE-2020-9925\nhttps://access.redhat.com/security/cve/CVE-2020-9952\nhttps://access.redhat.com/security/cve/CVE-2020-10018\nhttps://access.redhat.com/security/cve/CVE-2020-11793\nhttps://access.redhat.com/security/cve/CVE-2020-13434\nhttps://access.redhat.com/security/cve/CVE-2020-14391\nhttps://access.redhat.com/security/cve/CVE-2020-15358\nhttps://access.redhat.com/security/cve/CVE-2020-15503\nhttps://access.redhat.com/security/cve/CVE-2020-25660\nhttps://access.redhat.
com/security/cve/CVE-2020-25677\nhttps://access.redhat.com/security/cve/CVE-2020-27618\nhttps://access.redhat.com/security/cve/CVE-2020-27781\nhttps://access.redhat.com/security/cve/CVE-2020-29361\nhttps://access.redhat.com/security/cve/CVE-2020-29362\nhttps://access.redhat.com/security/cve/CVE-2020-29363\nhttps://access.redhat.com/security/cve/CVE-2021-3121\nhttps://access.redhat.com/security/cve/CVE-2021-3326\nhttps://access.redhat.com/security/cve/CVE-2021-3449\nhttps://access.redhat.com/security/cve/CVE-2021-3450\nhttps://access.redhat.com/security/cve/CVE-2021-3516\nhttps://access.redhat.com/security/cve/CVE-2021-3517\nhttps://access.redhat.com/security/cve/CVE-2021-3518\nhttps://access.redhat.com/security/cve/CVE-2021-3520\nhttps://access.redhat.com/security/cve/CVE-2021-3521\nhttps://access.redhat.com/security/cve/CVE-2021-3537\nhttps://access.redhat.com/security/cve/CVE-2021-3541\nhttps://access.redhat.com/security/cve/CVE-2021-3733\nhttps://access.redhat.com/security/cve/CVE-2021-3749\nhttps://access.redhat.com/security/cve/CVE-2021-20305\nhttps://access.redhat.com/security/cve/CVE-2021-21684\nhttps://access.redhat.com/security/cve/CVE-2021-22946\nhttps://access.redhat.com/security/cve/CVE-2021-22947\nhttps://access.redhat.com/security/cve/CVE-2021-25215\nhttps://access.redhat.com/security/cve/CVE-2021-27218\nhttps://access.redhat.com/security/cve/CVE-2021-30666\nhttps://access.redhat.com/security/cve/CVE-2021-30761\nhttps://access.redhat.com/security/cve/CVE-2021-30762\nhttps://access.redhat.com/security/cve/CVE-2021-33928\nhttps://access.redhat.com/security/cve/CVE-2021-33929\nhttps://access.redhat.com/security/cve/CVE-2021-33930\nhttps://access.redhat.com/security/cve/CVE-2021-33938\nhttps://access.redhat.com/security/cve/CVE-2021-36222\nhttps://access.redhat.com/security/cve/CVE-2021-37750\nhttps://access.redhat.com/security/cve/CVE-2021-39226\nhttps://access.redhat.com/security/cve/CVE-2021-41190\nhttps://access.redhat.com/security/cve/CVE-2021-43813\n
https://access.redhat.com/security/cve/CVE-2021-44716\nhttps://access.redhat.com/security/cve/CVE-2021-44717\nhttps://access.redhat.com/security/cve/CVE-2022-0532\nhttps://access.redhat.com/security/cve/CVE-2022-21673\nhttps://access.redhat.com/security/cve/CVE-2022-24407\nhttps://access.redhat.com/security/updates/classification/#moderate\n\n6. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYipqONzjgjWX9erEAQjQcBAAgWTjA6Q2NgqfVf63ZpJF1jPurZLPqxDL\n0in/5+/wqWaiQ6yk7wM3YBZgviyKnAMCVdrLsaR7R77BvfJcTE3W/fzogxpp6Rne\neGT1PTgQRecrSIn+WG4gGSteavTULWOIoPvUiNpiy3Y7fFgjFdah+Nyx3Xd+xehM\nCEswylOd6Hr03KZ1tS3XL3kGL2botha48Yls7FzDFbNcy6TBAuycmQZifKu8mHaF\naDAupVJinDnnVgACeS6CnZTAD+Vrx5W7NIisteXv4x5Hy+jBIUHr8Yge3oxYoFnC\nY/XmuOw2KilLZuqFe+KHig45qT+FmNU8E1egcGpNWvmS8hGZfiG1jEQAqDPbZHxp\nsQAQZLQyz3TvXa29vp4QcsUuMxndIOi+QaK75JmqE06MqMIlFDYpr6eQOIgIZvFO\nRDZU/qvBjh56ypInoqInBf8KOQMy6eO+r6nFbMGcAfucXmz0EVcSP1oFHAoA1nWN\nrs1Qz/SO4CvdPERxcr1MLuBLggZ6iqGmHKk5IN0SwcndBHaVJ3j/LBv9m7wBYVry\nbSvojBDYx5ricbTwB5sGzu7oH5yVl813FA9cjkFpEhBiMtTfI+DKC8ssoRYNHd5Z\n7gLW6KWPUIDuCIiiioPZAJMyvJ0IMrNDoQ0lhqPeV7PFdlRhT95M/DagUZOpPVuT\nb5PUYUBIZLc=\n=GUDA\n-----END PGP SIGNATURE-----\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. This software, such as Apache HTTP Server, is\ncommon to multiple JBoss middleware products, and is packaged under Red Hat\nJBoss Core Services to allow for faster distribution of updates, and for a\nmore consistent update experience. \n\nThis release adds the new Apache HTTP Server 2.4.37 Service Pack 7 packages\nthat are part of the JBoss Core Services offering. Refer to the Release Notes for information on the most\nsignificant bug fixes and enhancements included in this release. 
Solution:\n\nBefore applying the update, back up your existing installation, including\nall applications, configuration files, databases and database settings, and\nso on. \n\nThe References section of this erratum contains a download link for the\nupdate. You must be logged in to download the update. Bugs fixed (https://bugzilla.redhat.com/):\n\n1941547 - CVE-2021-3450 openssl: CA certificate check bypass with X509_V_FLAG_X509_STRICT\n1941554 - CVE-2021-3449 openssl: NULL pointer dereference in signature_algorithms processing\n\n5. ==========================================================================\nUbuntu Security Notice USN-5038-1\nAugust 12, 2021\n\npostgresql-10, postgresql-12, postgresql-13 vulnerabilities\n==========================================================================\n\nA security issue affects these releases of Ubuntu and its derivatives:\n\n- Ubuntu 21.04\n- Ubuntu 20.04 LTS\n- Ubuntu 18.04 LTS\n\nSummary:\n\nSeveral security issues were fixed in PostgreSQL. \n\nSoftware Description:\n- postgresql-13: Object-relational SQL database\n- postgresql-12: Object-relational SQL database\n- postgresql-10: Object-relational SQL database\n\nDetails:\n\nIt was discovered that the PostgreSQL planner could create incorrect plans\nin certain circumstances. A remote attacker could use this issue to cause\nPostgreSQL to crash, resulting in a denial of service, or possibly obtain\nsensitive information from memory. (CVE-2021-3677)\n\nIt was discovered that PostgreSQL incorrectly handled certain SSL\nrenegotiation ClientHello messages from clients. A remote attacker could\npossibly use this issue to cause PostgreSQL to crash, resulting in a denial\nof service. 
(CVE-2021-3449)\n\nUpdate instructions:\n\nThe problem can be corrected by updating your system to the following\npackage versions:\n\nUbuntu 21.04:\n postgresql-13 13.4-0ubuntu0.21.04.1\n\nUbuntu 20.04 LTS:\n postgresql-12 12.8-0ubuntu0.20.04.1\n\nUbuntu 18.04 LTS:\n postgresql-10 10.18-0ubuntu0.18.04.1\n\nThis update uses a new upstream release, which includes additional bug\nfixes. After a standard system update you need to restart PostgreSQL to\nmake all the necessary changes. \n\nSecurity Fix(es):\n\n* golang: crypto/tls: certificate of wrong type is causing TLS client to\npanic\n(CVE-2021-34558)\n* golang: net: lookup functions may return invalid host names\n(CVE-2021-33195)\n* golang: net/http/httputil: ReverseProxy forwards connection headers if\nfirst one is empty (CVE-2021-33197)\n* golang: math/big.Rat: may cause a panic or an unrecoverable fatal error\nif passed inputs with very large exponents (CVE-2021-33198)\n* golang: encoding/xml: infinite loop when using xml.NewTokenDecoder with a\ncustom TokenReader (CVE-2021-27918)\n* golang: net/http: panic in ReadRequest and ReadResponse when reading a\nvery large header (CVE-2021-31525)\n* golang: archive/zip: malformed archive may cause panic or memory\nexhaustion (CVE-2021-33196)\n\nIt was found that the CVE-2021-27918, CVE-2021-31525 and CVE-2021-33196\nhave been incorrectly mentioned as fixed in RHSA for Serverless client kn\n1.16.0. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1983596 - CVE-2021-34558 golang: crypto/tls: certificate of wrong type is causing TLS client to panic\n1983651 - Release of OpenShift Serverless Serving 1.17.0\n1983654 - Release of OpenShift Serverless Eventing 1.17.0\n1989564 - CVE-2021-33195 golang: net: lookup functions may return invalid host names\n1989570 - CVE-2021-33197 golang: net/http/httputil: ReverseProxy forwards connection headers if first one is empty\n1989575 - CVE-2021-33198 golang: math/big.Rat: may cause a panic or an unrecoverable fatal error if passed inputs with very large exponents\n1992955 - CVE-2021-3703 serverless: incomplete fix for CVE-2021-27918 / CVE-2021-31525 / CVE-2021-33196\n\n5. OpenSSL Security Advisory [25 March 2021]\n=========================================\n\nCA certificate check bypass with X509_V_FLAG_X509_STRICT (CVE-2021-3450)\n========================================================================\n\nSeverity: High\n\nThe X509_V_FLAG_X509_STRICT flag enables additional security checks of the\ncertificates present in a certificate chain. It is not set by default. \n\nStarting from OpenSSL version 1.1.1h a check to disallow certificates in\nthe chain that have explicitly encoded elliptic curve parameters was added\nas an additional strict check. \n\nAn error in the implementation of this check meant that the result of a\nprevious check to confirm that certificates in the chain are valid CA\ncertificates was overwritten. This effectively bypasses the check\nthat non-CA certificates must not be able to issue other certificates. \n\nIf a \"purpose\" has been configured then there is a subsequent opportunity\nfor checks that the certificate is a valid CA. All of the named \"purpose\"\nvalues implemented in libcrypto perform this check. Therefore, where\na purpose is set the certificate chain will still be rejected even when the\nstrict flag has been used. 
A purpose is set by default in libssl client and\nserver certificate verification routines, but it can be overridden or\nremoved by an application. \n\nIn order to be affected, an application must explicitly set the\nX509_V_FLAG_X509_STRICT verification flag and either not set a purpose\nfor the certificate verification or, in the case of TLS client or server\napplications, override the default purpose. \n\nThis issue was reported to OpenSSL on 18th March 2021 by Benjamin Kaduk\nfrom Akamai and was discovered by Xiang Ding and others at Akamai. The fix was\ndeveloped by Tom\u00e1\u0161 Mr\u00e1z. \n\nThis issue was reported to OpenSSL on 17th March 2021 by Nokia. The fix was\ndeveloped by Peter K\u00e4stle and Samuel Sapalski from Nokia. \n\nNote\n====\n\nOpenSSL 1.0.2 is out of support and no longer receiving public updates. Extended\nsupport is available for premium support customers:\nhttps://www.openssl.org/support/contracts.html\n\nOpenSSL 1.1.0 is out of support and no longer receiving updates of any kind. \n\nReferences\n==========\n\nURL for this Security Advisory:\nhttps://www.openssl.org/news/secadv/20210325.txt\n\nNote: the online version of the advisory may be updated with additional details\nover time. 
\n\nFor details of OpenSSL severity classifications please see:\nhttps://www.openssl.org/policies/secpolicy.html\n", "sources": [ { "db": "NVD", "id": "CVE-2021-3449" }, { "db": "JVNDB", "id": "JVNDB-2021-001383" }, { "db": "VULHUB", "id": "VHN-388130" }, { "db": "VULMON", "id": "CVE-2021-3449" }, { "db": "PACKETSTORM", "id": "163209" }, { "db": "PACKETSTORM", "id": "163257" }, { "db": "PACKETSTORM", "id": "163747" }, { "db": "PACKETSTORM", "id": "162183" }, { "db": "PACKETSTORM", "id": "166279" }, { "db": "PACKETSTORM", "id": "162197" }, { "db": "PACKETSTORM", "id": "163815" }, { "db": "PACKETSTORM", "id": "164192" }, { "db": "PACKETSTORM", "id": "169659" } ], "trust": 2.61 }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "NVD", "id": "CVE-2021-3449", "trust": 2.9 }, { "db": "TENABLE", "id": "TNS-2021-06", "trust": 1.2 }, { "db": "TENABLE", "id": "TNS-2021-09", "trust": 1.2 }, { "db": "TENABLE", "id": "TNS-2021-05", "trust": 1.2 }, { "db": "TENABLE", "id": "TNS-2021-10", "trust": 1.2 }, { "db": "OPENWALL", "id": "OSS-SECURITY/2021/03/28/3", "trust": 1.2 }, { "db": "OPENWALL", "id": "OSS-SECURITY/2021/03/27/2", "trust": 1.2 }, { "db": "OPENWALL", "id": "OSS-SECURITY/2021/03/28/4", "trust": 1.2 }, { "db": "OPENWALL", "id": "OSS-SECURITY/2021/03/27/1", "trust": 1.2 }, { "db": "SIEMENS", "id": "SSA-772220", "trust": 1.2 }, { "db": "SIEMENS", "id": "SSA-389290", "trust": 1.2 }, { "db": "PULSESECURE", "id": "SA44845", "trust": 1.2 }, { "db": "MCAFEE", "id": "SB10356", "trust": 1.2 }, { "db": "JVN", "id": "JVNVU92126369", "trust": 0.8 }, { "db": "JVNDB", "id": "JVNDB-2021-001383", "trust": 0.8 }, { "db": "PACKETSTORM", "id": "162197", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "163257", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "162183", "trust": 0.2 }, { "db": 
"PACKETSTORM", "id": "162114", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "162076", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "162350", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "162041", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "162013", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "162383", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "162699", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "162337", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "162151", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "162189", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "162196", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "162172", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "161984", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "162201", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "162307", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "162200", "trust": 0.1 }, { "db": "SEEBUG", "id": "SSVID-99170", "trust": 0.1 }, { "db": "VULHUB", "id": "VHN-388130", "trust": 0.1 }, { "db": "ICS CERT", "id": "ICSA-22-104-05", "trust": 0.1 }, { "db": "VULMON", "id": "CVE-2021-3449", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "163209", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "163747", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "166279", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "163815", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "164192", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "169659", "trust": 0.1 } ], "sources": [ { "db": "VULHUB", "id": "VHN-388130" }, { "db": "VULMON", "id": "CVE-2021-3449" }, { "db": "JVNDB", "id": "JVNDB-2021-001383" }, { "db": "PACKETSTORM", "id": "163209" }, { "db": "PACKETSTORM", "id": "163257" }, { "db": "PACKETSTORM", "id": "163747" }, { "db": "PACKETSTORM", "id": "162183" }, { "db": "PACKETSTORM", "id": "166279" }, { "db": "PACKETSTORM", "id": "162197" }, { "db": "PACKETSTORM", "id": "163815" }, { "db": "PACKETSTORM", "id": "164192" }, { "db": "PACKETSTORM", "id": "169659" }, { "db": "NVD", "id": "CVE-2021-3449" } ] }, "id": 
"VAR-202103-1464", "iot": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": true, "sources": [ { "db": "VULHUB", "id": "VHN-388130" } ], "trust": 0.6431162642424243 }, "last_update_date": "2024-07-23T21:43:25.615000Z", "patch": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/patch#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "title": "hitachi-sec-2021-119 Software product security information", "trust": 0.8, "url": "https://www.debian.org/security/2021/dsa-4875" }, { "title": "Debian Security Advisories: DSA-4875-1 openssl -- security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=debian_security_advisories\u0026qid=b5207bd1e788bc6e8d94f410cf4801bc" }, { "title": "Red Hat: CVE-2021-3449", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_cve_database\u0026qid=cve-2021-3449" }, { "title": "Amazon Linux 2: ALAS2-2021-1622", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=alas2-2021-1622" }, { "title": "Arch Linux Issues: ", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_issues\u0026qid=cve-2021-3449 log" }, { "title": "Cisco: Multiple Vulnerabilities in OpenSSL Affecting Cisco Products: March 2021", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=cisco_security_advisories_and_alerts_ciscoproducts\u0026qid=cisco-sa-openssl-2021-ghy28djd" }, { "title": "Hitachi Security Advisories: Vulnerability in JP1/Base and JP1/ File Transmission Server/FTP", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=hitachi_security_advisories\u0026qid=hitachi-sec-2021-130" }, { "title": "Tenable Security Advisories: [R1] Tenable.sc 5.18.0 Fixes One Third-party Vulnerability", "trust": 0.1, "url": 
"https://vulmon.com/vendoradvisory?qidtp=tenable_security_advisories\u0026qid=tns-2021-06" }, { "title": "Tenable Security Advisories: [R1] Nessus 8.13.2 Fixes Multiple Third-party Vulnerabilities", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=tenable_security_advisories\u0026qid=tns-2021-05" }, { "title": "Hitachi Security Advisories: Multiple Vulnerabilities in Hitachi Ops Center Common Services", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=hitachi_security_advisories\u0026qid=hitachi-sec-2021-117" }, { "title": "Hitachi Security Advisories: Multiple Vulnerabilities in Hitachi Ops Center Analyzer viewpoint", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=hitachi_security_advisories\u0026qid=hitachi-sec-2021-119" }, { "title": "Tenable Security Advisories: [R1] Nessus Network Monitor 5.13.1 Fixes Multiple Third-party Vulnerabilities", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=tenable_security_advisories\u0026qid=tns-2021-09" }, { "title": "Tenable Security Advisories: [R1] LCE 6.0.9 Fixes Multiple Third-party Vulnerabilities", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=tenable_security_advisories\u0026qid=tns-2021-10" }, { "title": "Red Hat: Moderate: OpenShift Container Platform 4.10.3 security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20220056 - security advisory" }, { "title": "CVE-2021-3449 OpenSSL \u003c1.1.1k DoS exploit", "trust": 0.1, "url": "https://github.com/terorie/cve-2021-3449 " }, { "title": "CVE-2021-3449 OpenSSL \u003c1.1.1k DoS exploit", "trust": 0.1, "url": "https://github.com/gitchangye/cve " }, { "title": "NSAPool-PenTest", "trust": 0.1, "url": "https://github.com/alicemongodin/nsapool-pentest " }, { "title": "Analysis of attack vectors for embedded Linux", "trust": 0.1, "url": "https://github.com/fefi7/attacking_embedded_linux " }, { "title": "openssl-cve", "trust": 0.1, "url": 
"https://github.com/yonhan3/openssl-cve " }, { "title": "CVE-Check", "trust": 0.1, "url": "https://github.com/falk-werner/cve-check " }, { "title": "SEEKER_dataset", "trust": 0.1, "url": "https://github.com/sf4bin/seeker_dataset " }, { "title": "Year of the Jellyfish (YotJF)", "trust": 0.1, "url": "https://github.com/rnbochsr/yr_of_the_jellyfish " }, { "title": "https://github.com/tianocore-docs/ThirdPartySecurityAdvisories", "trust": 0.1, "url": "https://github.com/tianocore-docs/thirdpartysecurityadvisories " }, { "title": "TASSL-1.1.1k", "trust": 0.1, "url": "https://github.com/jntass/tassl-1.1.1k " }, { "title": "Trivy by Aqua security\nRefer this official repository for explore Trivy Action", "trust": 0.1, "url": "https://github.com/scholarnishu/trivy-by-aquasecurity " }, { "title": "Trivy by Aqua security\nRefer this official repository for explore Trivy Action", "trust": 0.1, "url": "https://github.com/thecyberbaby/trivy-by-aquasecurity " }, { "title": "\ud83d\udc31 Catlin Vulnerability Scanner \ud83d\udc31", "trust": 0.1, "url": "https://github.com/vinamra28/tekton-image-scan-trivy " }, { "title": "DEVOPS + ACR + TRIVY", "trust": 0.1, "url": "https://github.com/arindam0310018/04-apr-2022-devops__scan-images-in-acr-using-trivy " }, { "title": "Trivy Demo", "trust": 0.1, "url": "https://github.com/fredrkl/trivy-demo " }, { "title": "GitHub Actions CI App Pipeline", "trust": 0.1, "url": "https://github.com/isgo-golgo13/gokit-gorillakit-enginesvc " }, { "title": "Awesome Stars", "trust": 0.1, "url": "https://github.com/taielab/awesome-hacking-lists " }, { "title": "podcast-dl-gael", "trust": 0.1, "url": "https://github.com/githubforsnap/podcast-dl-gael " }, { "title": "sec-tools", "trust": 0.1, "url": "https://github.com/matengfei000/sec-tools " }, { "title": "sec-tools", "trust": 0.1, "url": "https://github.com/anquanscan/sec-tools " }, { "title": "Updated on 2023-11-27
08:36:01\nSecurity\nDevelopment\nUncategorized\nMiscellaneous", "trust": 0.1, "url": "https://github.com/20142995/sectool " }, { "title": "Vulnerability", "trust": 0.1, "url": "https://github.com/tzwlhack/vulnerability " }, { "title": "OpenSSL-CVE-lib", "trust": 0.1, "url": "https://github.com/chnzzh/openssl-cve-lib " }, { "title": "PoC in GitHub", "trust": 0.1, "url": "https://github.com/soosmile/poc " }, { "title": "PoC in GitHub", "trust": 0.1, "url": "https://github.com/manas3c/cve-poc " }, { "title": "The Register", "trust": 0.1, "url": "https://www.theregister.co.uk/2021/03/25/openssl_bug_fix/" } ], "sources": [ { "db": "VULMON", "id": "CVE-2021-3449" }, { "db": "JVNDB", "id": "JVNDB-2021-001383" } ] }, "problemtype_data": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "problemtype": "CWE-476", "trust": 1.1 }, { "problemtype": "NULL Pointer dereference (CWE-476) [NVD Evaluation ]", "trust": 0.8 } ], "sources": [ { "db": "VULHUB", "id": "VHN-388130" }, { "db": "JVNDB", "id": "JVNDB-2021-001383" }, { "db": "NVD", "id": "CVE-2021-3449" } ] }, "references": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/references#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 1.3, "url": "https://tools.cisco.com/security/center/content/ciscosecurityadvisory/cisco-sa-openssl-2021-ghy28djd" }, { "trust": 1.3, "url": "https://www.openssl.org/news/secadv/20210325.txt" }, { "trust": 1.3, "url": "https://www.debian.org/security/2021/dsa-4875" }, { "trust": 1.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3449" }, { "trust": 1.2, "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-389290.pdf" }, { "trust": 1.2, "url": 
"https://cert-portal.siemens.com/productcert/pdf/ssa-772220.pdf" }, { "trust": 1.2, "url": "https://kb.pulsesecure.net/articles/pulse_security_advisories/sa44845" }, { "trust": 1.2, "url": "https://psirt.global.sonicwall.com/vuln-detail/snwlid-2021-0013" }, { "trust": 1.2, "url": "https://security.netapp.com/advisory/ntap-20210326-0006/" }, { "trust": 1.2, "url": "https://security.netapp.com/advisory/ntap-20210513-0002/" }, { "trust": 1.2, "url": "https://www.tenable.com/security/tns-2021-05" }, { "trust": 1.2, "url": "https://www.tenable.com/security/tns-2021-06" }, { "trust": 1.2, "url": "https://www.tenable.com/security/tns-2021-09" }, { "trust": 1.2, "url": "https://www.tenable.com/security/tns-2021-10" }, { "trust": 1.2, "url": "https://security.gentoo.org/glsa/202103-03" }, { "trust": 1.2, "url": "https://security.freebsd.org/advisories/freebsd-sa-21:07.openssl.asc" }, { "trust": 1.2, "url": "https://www.oracle.com//security-alerts/cpujul2021.html" }, { "trust": 1.2, "url": "https://www.oracle.com/security-alerts/cpuapr2021.html" }, { "trust": 1.2, "url": "https://www.oracle.com/security-alerts/cpuapr2022.html" }, { "trust": 1.2, "url": "https://www.oracle.com/security-alerts/cpujul2022.html" }, { "trust": 1.2, "url": "https://www.oracle.com/security-alerts/cpuoct2021.html" }, { "trust": 1.2, "url": "https://lists.debian.org/debian-lts-announce/2021/08/msg00029.html" }, { "trust": 1.2, "url": "http://www.openwall.com/lists/oss-security/2021/03/27/1" }, { "trust": 1.2, "url": "http://www.openwall.com/lists/oss-security/2021/03/27/2" }, { "trust": 1.2, "url": "http://www.openwall.com/lists/oss-security/2021/03/28/3" }, { "trust": 1.2, "url": "http://www.openwall.com/lists/oss-security/2021/03/28/4" }, { "trust": 1.1, "url": "https://kc.mcafee.com/corporate/index?page=content\u0026id=sb10356" }, { "trust": 1.1, "url": "https://git.openssl.org/gitweb/?p=openssl.git%3ba=commitdiff%3bh=fb9fa6b51defd48157eeb207f52181f735d96148" }, { "trust": 1.1, "url": 
"https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/ccbfllvqvilivgzmbjl3ixzgkwqisynp/" }, { "trust": 1.0, "url": "https://security.netapp.com/advisory/ntap-20240621-0006/" }, { "trust": 0.8, "url": "https://jvn.jp/vu/jvnvu92126369/" }, { "trust": 0.7, "url": "https://access.redhat.com/security/cve/cve-2021-3449" }, { "trust": 0.7, "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce" }, { "trust": 0.7, "url": "https://access.redhat.com/security/cve/cve-2021-3450" }, { "trust": 0.7, "url": "https://bugzilla.redhat.com/):" }, { "trust": 0.7, "url": "https://access.redhat.com/security/team/contact/" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2020-15358" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2021-20305" }, { "trust": 0.5, "url": "https://nvd.nist.gov/vuln/detail/cve-2017-14502" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2020-13434" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2020-29362" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2017-14502" }, { "trust": 0.5, "url": "https://nvd.nist.gov/vuln/detail/cve-2016-10228" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2019-9169" }, { "trust": 0.5, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-25013" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2020-29361" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2021-3326" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2019-25013" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2020-8927" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2020-29363" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2016-10228" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2020-27618" }, { "trust": 0.4, "url": 
"https://access.redhat.com/security/cve/cve-2020-8286" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2020-28196" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15358" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-27618" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2020-8231" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13434" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2020-8285" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28196" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-9169" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-29362" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2019-2708" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-2708" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2020-8284" }, { "trust": 0.4, "url": "https://access.redhat.com/security/updates/classification/#moderate" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-29361" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3450" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-8231" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-29363" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3520" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3537" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3518" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3516" }, { "trust": 0.3, "url": "https://access.redhat.com/security/updates/classification/#important" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3517" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3541" }, { "trust": 0.2, "url": 
"https://nvd.nist.gov/vuln/detail/cve-2020-13776" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-3842" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-13776" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-24977" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-24977" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-3842" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-8284" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20305" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-8285" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-8286" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-8927" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3326" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-27219" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-20454" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-13050" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-15903" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2018-20843" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19906" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13050" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2018-1000858" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-14889" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-1730" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-13627" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-1000858" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-20271" }, { "trust": 0.2, "url": 
"https://nvd.nist.gov/vuln/detail/cve-2019-20454" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-19906" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-20843" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-15903" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13627" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-14889" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-27218" }, { "trust": 0.1, "url": "https://git.openssl.org/gitweb/?p=openssl.git;a=commitdiff;h=fb9fa6b51defd48157eeb207f52181f735d96148" }, { "trust": 0.1, "url": "https://kc.mcafee.com/corporate/index?page=content\u0026amp;id=sb10356" }, { "trust": 0.1, "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/ccbfllvqvilivgzmbjl3ixzgkwqisynp/" }, { "trust": 0.1, "url": "https://cwe.mitre.org/data/definitions/476.html" }, { "trust": 0.1, "url": "https://github.com/terorie/cve-2021-3449" }, { "trust": 0.1, "url": "https://nvd.nist.gov" }, { "trust": 0.1, "url": "https://www.cisa.gov/uscert/ics/advisories/icsa-22-104-05" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-26116" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:2479" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23240" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3139" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-13543" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-26137" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9951" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23239" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-36242" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-27619" }, { "trust": 
0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9948" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-13012" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-25659" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-14866" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-26116" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-14866" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-13584" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-26137" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13543" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36242" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13584" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-27783" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-25659" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-27783" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-27619" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9983" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3177" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3528" }, { "trust": 0.1, "url": "https://access.redhat.com/articles/11258" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-25678" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23336" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-25678" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13012" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-25736" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:2130" }, { "trust": 0.1, "url": 
"https://nvd.nist.gov/vuln/detail/cve-2021-27219" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.7/windows_containers/window" }, { "trust": 0.1, "url": "https://issues.jboss.org/):" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-25736" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-28469" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-28500" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20934" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-29418" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28852" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33034" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-28092" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28851" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-1730" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33909" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-29482" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23337" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-32399" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-27358" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23369" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21321" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23368" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-11668" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_mana" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23362" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2021-23364" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23343" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21309" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33502" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23841" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23383" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-28918" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-28851" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3560" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-28852" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23840" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33033" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-20934" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-25217" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28469" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:3016" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3377" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28500" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21272" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-29477" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-27292" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23346" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-29478" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-11668" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23839" 
}, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33623" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21322" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23382" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33910" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:1196" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9925" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9802" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8771" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-30762" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33938" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8783" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9895" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8625" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-44716" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8812" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8812" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3899" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8819" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-43813" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3867" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8720" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9893" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33930" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8782" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2019-8808" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3902" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-24407" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-25215" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3900" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-30761" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33928" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8743" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9805" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8820" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9807" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8769" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8710" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-37750" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8813" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9850" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8710" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-27781" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8811" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8769" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:0055" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-22947" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9803" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8764" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9862" }, { "trust": 0.1, 
"url": "https://access.redhat.com/security/cve/cve-2014-3577" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2014-3577" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3749" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3885" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-15503" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20807" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-41190" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-10018" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-25660" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8835" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8764" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3733" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8844" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3865" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3864" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21684" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-14391" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3862" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:0056" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8811" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3901" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-39226" }, { "trust": 
0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8823" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8808" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3895" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-44717" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-11793" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0532" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8720" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9894" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8816" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9843" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8771" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3897" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9806" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8814" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8743" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33929" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3121" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9915" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-36222" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8815" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8813" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8625" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8766" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8783" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-20807" }, { "trust": 
0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9952" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-22946" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-21673" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8766" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3868" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8846" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3894" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-25677" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-30666" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8782" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3521" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:1200" }, { "trust": 0.1, "url": "https://access.redhat.com/jbossnetwork/restricted/listsoftware.html?product=core.service.apachehttp\u0026downloadtype=securitypatches\u0026version=2.4.37" }, { "trust": 0.1, "url": "https://ubuntu.com/security/notices/usn-5038-1" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3677" }, { "trust": 0.1, "url": "https://launchpad.net/ubuntu/+source/postgresql-10/10.18-0ubuntu0.18.04.1" }, { "trust": 0.1, "url": "https://launchpad.net/ubuntu/+source/postgresql-12/12.8-0ubuntu0.20.04.1" }, { "trust": 0.1, "url": "https://launchpad.net/ubuntu/+source/postgresql-13/13.4-0ubuntu0.21.04.1" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-27918" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-33196" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-33195" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2021-27918" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-27218" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/4.8/html/serverless/index" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33196" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-33197" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/4.6/html/serverless/index" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33195" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-33198" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33198" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-31525" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-34558" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:3556" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33197" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20271" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3421" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-31525" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3703" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/4.7/html/serverless/index" }, { "trust": 0.1, "url": "https://www.openssl.org/support/contracts.html" }, { "trust": 0.1, "url": "https://www.openssl.org/policies/secpolicy.html" } ], "sources": [ { "db": "VULHUB", "id": "VHN-388130" }, { "db": "VULMON", "id": "CVE-2021-3449" }, { "db": "JVNDB", "id": "JVNDB-2021-001383" }, { "db": "PACKETSTORM", "id": "163209" }, { "db": "PACKETSTORM", "id": "163257" }, { "db": "PACKETSTORM", "id": "163747" }, { "db": "PACKETSTORM", 
"id": "162183" }, { "db": "PACKETSTORM", "id": "166279" }, { "db": "PACKETSTORM", "id": "162197" }, { "db": "PACKETSTORM", "id": "163815" }, { "db": "PACKETSTORM", "id": "164192" }, { "db": "PACKETSTORM", "id": "169659" }, { "db": "NVD", "id": "CVE-2021-3449" } ] }, "sources": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "VULHUB", "id": "VHN-388130" }, { "db": "VULMON", "id": "CVE-2021-3449" }, { "db": "JVNDB", "id": "JVNDB-2021-001383" }, { "db": "PACKETSTORM", "id": "163209" }, { "db": "PACKETSTORM", "id": "163257" }, { "db": "PACKETSTORM", "id": "163747" }, { "db": "PACKETSTORM", "id": "162183" }, { "db": "PACKETSTORM", "id": "166279" }, { "db": "PACKETSTORM", "id": "162197" }, { "db": "PACKETSTORM", "id": "163815" }, { "db": "PACKETSTORM", "id": "164192" }, { "db": "PACKETSTORM", "id": "169659" }, { "db": "NVD", "id": "CVE-2021-3449" } ] }, "sources_release_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2021-03-25T00:00:00", "db": "VULHUB", "id": "VHN-388130" }, { "date": "2021-03-25T00:00:00", "db": "VULMON", "id": "CVE-2021-3449" }, { "date": "2021-05-06T00:00:00", "db": "JVNDB", "id": "JVNDB-2021-001383" }, { "date": "2021-06-17T18:34:10", "db": "PACKETSTORM", "id": "163209" }, { "date": "2021-06-23T15:44:15", "db": "PACKETSTORM", "id": "163257" }, { "date": "2021-08-06T14:02:37", "db": "PACKETSTORM", "id": "163747" }, { "date": "2021-04-14T16:40:32", "db": "PACKETSTORM", "id": "162183" }, { "date": "2022-03-11T16:38:38", "db": "PACKETSTORM", "id": "166279" }, { "date": "2021-04-15T13:50:04", "db": "PACKETSTORM", "id": "162197" }, { "date": "2021-08-13T14:20:11", "db": "PACKETSTORM", "id": "163815" }, { "date": "2021-09-17T16:04:56", "db": "PACKETSTORM", "id": "164192" }, { "date": "2021-03-25T12:12:12", "db": "PACKETSTORM", "id": "169659" }, { "date": "2021-03-25T15:15:13.450000", 
"db": "NVD", "id": "CVE-2021-3449" } ] }, "sources_update_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2022-08-29T00:00:00", "db": "VULHUB", "id": "VHN-388130" }, { "date": "2023-11-07T00:00:00", "db": "VULMON", "id": "CVE-2021-3449" }, { "date": "2021-09-13T07:43:00", "db": "JVNDB", "id": "JVNDB-2021-001383" }, { "date": "2024-06-21T19:15:19.710000", "db": "NVD", "id": "CVE-2021-3449" } ] }, "threat_type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/threat_type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "remote", "sources": [ { "db": "PACKETSTORM", "id": "163815" } ], "trust": 0.1 }, "title": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/title#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "OpenSSL\u00a0 In \u00a0NULL\u00a0 Pointer dereference vulnerability", "sources": [ { "db": "JVNDB", "id": "JVNDB-2021-001383" } ], "trust": 0.8 }, "type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "xss", "sources": [ { "db": "PACKETSTORM", "id": "163209" } ], "trust": 0.1 } }
var-202101-0566
Vulnerability from variot
There's a flaw in binutils bfd/pef.c. An attacker who is able to submit a crafted input file to be processed by the objdump program could cause a NULL pointer dereference. The greatest threat from this flaw is to application availability; it affects binutils versions prior to 2.34. In short, binutils has a NULL pointer dereference vulnerability that can put it into a denial-of-service (DoS) state. GNU Binutils (GNU Binary Utilities, or binutils) is a set of programming language tool programs developed by the GNU community. It is primarily designed to handle object files in various formats and provides linkers, assemblers, and other tools for working with object files and archives. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - Gentoo Linux Security Advisory GLSA 202107-24
https://security.gentoo.org/
Severity: Normal
Title: Binutils: Multiple vulnerabilities
Date: July 10, 2021
Bugs: #678806, #761957, #764170
ID: 202107-24
Synopsis
Multiple vulnerabilities have been found in Binutils, the worst of which could result in a Denial of Service condition.
Background
The GNU Binutils are a collection of tools to create, modify and analyse binary files. Many of the files use BFD, the Binary File Descriptor library, to do low-level manipulation.
Affected packages
-------------------------------------------------------------------
Package / Vulnerable / Unaffected
-------------------------------------------------------------------
1 sys-devel/binutils < 2.35.2 >= 2.35.2
Description
Multiple vulnerabilities have been discovered in Binutils. Please review the CVE identifiers referenced below for details.
Impact
Please review the referenced CVE identifiers for details.
Workaround
There is no known workaround at this time.
Resolution
All Binutils users should upgrade to the latest version:
# emerge --sync
# emerge --ask --oneshot --verbose ">=sys-devel/binutils-2.35.2"
References
[ 1 ] CVE-2019-9070 https://nvd.nist.gov/vuln/detail/CVE-2019-9070
[ 2 ] CVE-2019-9071 https://nvd.nist.gov/vuln/detail/CVE-2019-9071
[ 3 ] CVE-2019-9072 https://nvd.nist.gov/vuln/detail/CVE-2019-9072
[ 4 ] CVE-2019-9073 https://nvd.nist.gov/vuln/detail/CVE-2019-9073
[ 5 ] CVE-2019-9074 https://nvd.nist.gov/vuln/detail/CVE-2019-9074
[ 6 ] CVE-2019-9075 https://nvd.nist.gov/vuln/detail/CVE-2019-9075
[ 7 ] CVE-2019-9076 https://nvd.nist.gov/vuln/detail/CVE-2019-9076
[ 8 ] CVE-2019-9077 https://nvd.nist.gov/vuln/detail/CVE-2019-9077
[ 9 ] CVE-2020-19599 https://nvd.nist.gov/vuln/detail/CVE-2020-19599
[ 10 ] CVE-2020-35448 https://nvd.nist.gov/vuln/detail/CVE-2020-35448
[ 11 ] CVE-2020-35493 https://nvd.nist.gov/vuln/detail/CVE-2020-35493
[ 12 ] CVE-2020-35494 https://nvd.nist.gov/vuln/detail/CVE-2020-35494
[ 13 ] CVE-2020-35495 https://nvd.nist.gov/vuln/detail/CVE-2020-35495
[ 14 ] CVE-2020-35496 https://nvd.nist.gov/vuln/detail/CVE-2020-35496
[ 15 ] CVE-2020-35507 https://nvd.nist.gov/vuln/detail/CVE-2020-35507
Availability
This GLSA and any updates to it are available for viewing at the Gentoo Security Website:
https://security.gentoo.org/glsa/202107-24
Concerns?
Security is a primary focus of Gentoo Linux and ensuring the confidentiality and security of our users' machines is of utmost importance to us. Any security concerns should be addressed to security@gentoo.org or alternatively, you may file a bug at https://bugs.gentoo.org.
License
Copyright 2021 Gentoo Foundation, Inc; referenced text belongs to its owner(s).
The contents of this document are licensed under the Creative Commons - Attribution / Share Alike license.
https://creativecommons.org/licenses/by-sa/2.5
Show details on source website{ "@context": { "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#", "affected_products": { "@id": "https://www.variotdbs.pl/ref/affected_products" }, "configurations": { "@id": "https://www.variotdbs.pl/ref/configurations" }, "credits": { "@id": "https://www.variotdbs.pl/ref/credits" }, "cvss": { "@id": "https://www.variotdbs.pl/ref/cvss/" }, "description": { "@id": "https://www.variotdbs.pl/ref/description/" }, "exploit_availability": { "@id": "https://www.variotdbs.pl/ref/exploit_availability/" }, "external_ids": { "@id": "https://www.variotdbs.pl/ref/external_ids/" }, "iot": { "@id": "https://www.variotdbs.pl/ref/iot/" }, "iot_taxonomy": { "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/" }, "patch": { "@id": "https://www.variotdbs.pl/ref/patch/" }, "problemtype_data": { "@id": "https://www.variotdbs.pl/ref/problemtype_data/" }, "references": { "@id": "https://www.variotdbs.pl/ref/references/" }, "sources": { "@id": "https://www.variotdbs.pl/ref/sources/" }, "sources_release_date": { "@id": "https://www.variotdbs.pl/ref/sources_release_date/" }, "sources_update_date": { "@id": "https://www.variotdbs.pl/ref/sources_update_date/" }, "threat_type": { "@id": "https://www.variotdbs.pl/ref/threat_type/" }, "title": { "@id": "https://www.variotdbs.pl/ref/title/" }, "type": { "@id": "https://www.variotdbs.pl/ref/type/" } }, "@id": "https://www.variotdbs.pl/vuln/VAR-202101-0566", "affected_products": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/affected_products#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "model": "fedora", "scope": "eq", "trust": 1.0, "vendor": "fedoraproject", "version": "32" }, { "model": "ontap select deploy administration utility", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "solidfire \\\u0026 hci management 
node", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "brocade fabric operating system", "scope": "eq", "trust": 1.0, "vendor": "broadcom", "version": null }, { "model": "solidfire\\, enterprise sds \\\u0026 hci storage node", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "binutils", "scope": "lt", "trust": 1.0, "vendor": "gnu", "version": "2.34" }, { "model": "hci compute node", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "cloud backup", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "ontap select deploy utility", "scope": null, "trust": 0.8, "vendor": "netapp", "version": null }, { "model": "solidfire", "scope": null, "trust": 0.8, "vendor": "netapp", "version": null }, { "model": "hci management node", "scope": null, "trust": 0.8, "vendor": "netapp", "version": null }, { "model": "binutils", "scope": null, "trust": 0.8, "vendor": "gnu", "version": null }, { "model": "fedora", "scope": null, "trust": 0.8, "vendor": "fedora", "version": null }, { "model": "hci compute node", "scope": null, "trust": 0.8, "vendor": "netapp", "version": null } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2020-015127" }, { "db": "NVD", "id": "CVE-2020-35495" } ] }, "configurations": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/configurations#", "children": { "@container": "@list" }, "cpe_match": { "@container": "@list" }, "data": { "@container": "@list" }, "nodes": { "@container": "@list" } }, "data": [ { "CVE_data_version": "4.0", "nodes": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:gnu:binutils:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "2.34", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:32:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": 
"cpe:2.3:a:netapp:cloud_backup:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:ontap_select_deploy_administration_utility:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:solidfire_\\\u0026_hci_management_node:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:solidfire\\,_enterprise_sds_\\\u0026_hci_storage_node:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:broadcom:brocade_fabric_operating_system_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:hci_compute_node_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:hci_compute_node:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" } ] } ], "sources": [ { "db": "NVD", "id": "CVE-2020-35495" } ] }, "credits": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/credits#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Gentoo", "sources": [ { "db": "PACKETSTORM", "id": "163455" } ], "trust": 0.1 }, "cve": "CVE-2020-35495", "cvss": { "@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2" }, "cvssV3": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, "severity": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": "https://www.variotdbs.pl/ref/cvss/severity" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, 
"@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "cvssV2": [ { "acInsufInfo": false, "accessComplexity": "MEDIUM", "accessVector": "NETWORK", "authentication": "NONE", "author": "NVD", "availabilityImpact": "PARTIAL", "baseScore": 4.3, "confidentialityImpact": "NONE", "exploitabilityScore": 8.6, "impactScore": 2.9, "integrityImpact": "NONE", "obtainAllPrivilege": false, "obtainOtherPrivilege": false, "obtainUserPrivilege": false, "severity": "MEDIUM", "trust": 1.0, "userInteractionRequired": true, "vectorString": "AV:N/AC:M/Au:N/C:N/I:N/A:P", "version": "2.0" }, { "acInsufInfo": null, "accessComplexity": "Medium", "accessVector": "Network", "authentication": "None", "author": "NVD", "availabilityImpact": "Partial", "baseScore": 4.3, "confidentialityImpact": "None", "exploitabilityScore": null, "id": "CVE-2020-35495", "impactScore": null, "integrityImpact": "None", "obtainAllPrivilege": null, "obtainOtherPrivilege": null, "obtainUserPrivilege": null, "severity": "Medium", "trust": 0.9, "userInteractionRequired": null, "vectorString": "AV:N/AC:M/Au:N/C:N/I:N/A:P", "version": "2.0" }, { "accessComplexity": "MEDIUM", "accessVector": "NETWORK", "authentication": "NONE", "author": "VULHUB", "availabilityImpact": "PARTIAL", "baseScore": 4.3, "confidentialityImpact": "NONE", "exploitabilityScore": 8.6, "id": "VHN-377691", "impactScore": 2.9, "integrityImpact": "NONE", "severity": "MEDIUM", "trust": 0.1, "vectorString": "AV:N/AC:M/AU:N/C:N/I:N/A:P", "version": "2.0" } ], "cvssV3": [ { "attackComplexity": "LOW", "attackVector": "LOCAL", "author": "NVD", "availabilityImpact": "HIGH", "baseScore": 5.5, "baseSeverity": "MEDIUM", "confidentialityImpact": "NONE", "exploitabilityScore": 1.8, "impactScore": 3.6, "integrityImpact": "NONE", "privilegesRequired": "NONE", "scope": "UNCHANGED", "trust": 1.0, "userInteraction": "REQUIRED", "vectorString": "CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:U/C:N/I:N/A:H", "version": "3.1" }, { "attackComplexity": "Low", "attackVector": 
"Local", "author": "NVD", "availabilityImpact": "High", "baseScore": 5.5, "baseSeverity": "Medium", "confidentialityImpact": "None", "exploitabilityScore": null, "id": "CVE-2020-35495", "impactScore": null, "integrityImpact": "None", "privilegesRequired": "None", "scope": "Unchanged", "trust": 0.8, "userInteraction": "Required", "vectorString": "CVSS:3.0/AV:L/AC:L/PR:N/UI:R/S:U/C:N/I:N/A:H", "version": "3.0" } ], "severity": [ { "author": "NVD", "id": "CVE-2020-35495", "trust": 1.8, "value": "MEDIUM" }, { "author": "CNNVD", "id": "CNNVD-202101-085", "trust": 0.6, "value": "MEDIUM" }, { "author": "VULHUB", "id": "VHN-377691", "trust": 0.1, "value": "MEDIUM" }, { "author": "VULMON", "id": "CVE-2020-35495", "trust": 0.1, "value": "MEDIUM" } ] } ], "sources": [ { "db": "VULHUB", "id": "VHN-377691" }, { "db": "VULMON", "id": "CVE-2020-35495" }, { "db": "JVNDB", "id": "JVNDB-2020-015127" }, { "db": "NVD", "id": "CVE-2020-35495" }, { "db": "CNNVD", "id": "CNNVD-202101-085" } ] }, "description": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "There\u0027s a flaw in binutils /bfd/pef.c. An attacker who is able to submit a crafted input file to be processed by the objdump program could cause a null pointer dereference. The greatest threat from this flaw is to application availability. This flaw affects binutils versions prior to 2.34. binutils Has NULL A pointer dereference vulnerability exists.Denial of service (DoS) It may be put into a state. GNU Binutils (GNU Binary Utilities or binutils) is a set of programming language tool programs developed by the GNU community. The program is primarily designed to handle object files in various formats and provides linkers, assemblers, and other tools for object files and archives. 
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\nGentoo Linux Security Advisory GLSA 202107-24\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n https://security.gentoo.org/\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\n Severity: Normal\n Title: Binutils: Multiple vulnerabilities\n Date: July 10, 2021\n Bugs: #678806, #761957, #764170\n ID: 202107-24\n\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\nSynopsis\n========\n\nMultiple vulnerabilities have been found in Binutils, the worst of\nwhich could result in a Denial of Service condition. \n\nBackground\n==========\n\nThe GNU Binutils are a collection of tools to create, modify and\nanalyse binary files. Many of the files use BFD, the Binary File\nDescriptor library, to do low-level manipulation. \n\nAffected packages\n=================\n\n -------------------------------------------------------------------\n Package / Vulnerable / Unaffected\n -------------------------------------------------------------------\n 1 sys-devel/binutils \u003c 2.35.2 \u003e= 2.35.2 \n\nDescription\n===========\n\nMultiple vulnerabilities have been discovered in Binutils. Please\nreview the CVE identifiers referenced below for details. \n\nImpact\n======\n\nPlease review the referenced CVE identifiers for details. \n\nWorkaround\n==========\n\nThere is no known workaround at this time. 
\n\nResolution\n==========\n\nAll Binutils users should upgrade to the latest version:\n\n # emerge --sync\n # emerge --ask --oneshot --verbose \"\u003e=sys-devel/binutils-2.35.2\"\n\nReferences\n==========\n\n[ 1 ] CVE-2019-9070\n https://nvd.nist.gov/vuln/detail/CVE-2019-9070\n[ 2 ] CVE-2019-9071\n https://nvd.nist.gov/vuln/detail/CVE-2019-9071\n[ 3 ] CVE-2019-9072\n https://nvd.nist.gov/vuln/detail/CVE-2019-9072\n[ 4 ] CVE-2019-9073\n https://nvd.nist.gov/vuln/detail/CVE-2019-9073\n[ 5 ] CVE-2019-9074\n https://nvd.nist.gov/vuln/detail/CVE-2019-9074\n[ 6 ] CVE-2019-9075\n https://nvd.nist.gov/vuln/detail/CVE-2019-9075\n[ 7 ] CVE-2019-9076\n https://nvd.nist.gov/vuln/detail/CVE-2019-9076\n[ 8 ] CVE-2019-9077\n https://nvd.nist.gov/vuln/detail/CVE-2019-9077\n[ 9 ] CVE-2020-19599\n https://nvd.nist.gov/vuln/detail/CVE-2020-19599\n[ 10 ] CVE-2020-35448\n https://nvd.nist.gov/vuln/detail/CVE-2020-35448\n[ 11 ] CVE-2020-35493\n https://nvd.nist.gov/vuln/detail/CVE-2020-35493\n[ 12 ] CVE-2020-35494\n https://nvd.nist.gov/vuln/detail/CVE-2020-35494\n[ 13 ] CVE-2020-35495\n https://nvd.nist.gov/vuln/detail/CVE-2020-35495\n[ 14 ] CVE-2020-35496\n https://nvd.nist.gov/vuln/detail/CVE-2020-35496\n[ 15 ] CVE-2020-35507\n https://nvd.nist.gov/vuln/detail/CVE-2020-35507\n\nAvailability\n============\n\nThis GLSA and any updates to it are available for viewing at\nthe Gentoo Security Website:\n\n https://security.gentoo.org/glsa/202107-24\n\nConcerns?\n=========\n\nSecurity is a primary focus of Gentoo Linux and ensuring the\nconfidentiality and security of our users\u0027 machines is of utmost\nimportance to us. Any security concerns should be addressed to\nsecurity@gentoo.org or alternatively, you may file a bug at\nhttps://bugs.gentoo.org. \n\nLicense\n=======\n\nCopyright 2021 Gentoo Foundation, Inc; referenced text\nbelongs to its owner(s). \n\nThe contents of this document are licensed under the\nCreative Commons - Attribution / Share Alike license. 
\n\nhttps://creativecommons.org/licenses/by-sa/2.5\n\n", "sources": [ { "db": "NVD", "id": "CVE-2020-35495" }, { "db": "JVNDB", "id": "JVNDB-2020-015127" }, { "db": "VULHUB", "id": "VHN-377691" }, { "db": "VULMON", "id": "CVE-2020-35495" }, { "db": "PACKETSTORM", "id": "163455" } ], "trust": 1.89 }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "NVD", "id": "CVE-2020-35495", "trust": 2.7 }, { "db": "PACKETSTORM", "id": "163455", "trust": 0.8 }, { "db": "JVNDB", "id": "JVNDB-2020-015127", "trust": 0.8 }, { "db": "CNNVD", "id": "CNNVD-202101-085", "trust": 0.7 }, { "db": "VULHUB", "id": "VHN-377691", "trust": 0.1 }, { "db": "VULMON", "id": "CVE-2020-35495", "trust": 0.1 } ], "sources": [ { "db": "VULHUB", "id": "VHN-377691" }, { "db": "VULMON", "id": "CVE-2020-35495" }, { "db": "JVNDB", "id": "JVNDB-2020-015127" }, { "db": "PACKETSTORM", "id": "163455" }, { "db": "NVD", "id": "CVE-2020-35495" }, { "db": "CNNVD", "id": "CNNVD-202101-085" } ] }, "id": "VAR-202101-0566", "iot": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": true, "sources": [ { "db": "VULHUB", "id": "VHN-377691" } ], "trust": 0.01 }, "last_update_date": "2023-12-18T11:18:55.647000Z", "patch": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/patch#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "title": "Bug\u00a025306 NetAppNetApp\u00a0Advisory", "trust": 0.8, "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/4kok3qwsvoujwj54hvgifwnlwq5zy4s6/" }, { "title": "GNU binutils Security vulnerabilities", "trust": 0.6, 
"url": "http://www.cnnvd.org.cn/web/xxk/bdxqbyid.tag?id=138344" }, { "title": "", "trust": 0.1, "url": "https://github.com/vincent-deng/veracode-container-security-finding-parser " } ], "sources": [ { "db": "VULMON", "id": "CVE-2020-35495" }, { "db": "JVNDB", "id": "JVNDB-2020-015127" }, { "db": "CNNVD", "id": "CNNVD-202101-085" } ] }, "problemtype_data": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "problemtype": "CWE-476", "trust": 1.1 }, { "problemtype": "NULL Pointer dereference (CWE-476) [ Other ]", "trust": 0.8 } ], "sources": [ { "db": "VULHUB", "id": "VHN-377691" }, { "db": "JVNDB", "id": "JVNDB-2020-015127" }, { "db": "NVD", "id": "CVE-2020-35495" } ] }, "references": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/references#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 2.6, "url": "https://bugzilla.redhat.com/show_bug.cgi?id=1911441" }, { "trust": 1.9, "url": "https://security.gentoo.org/glsa/202107-24" }, { "trust": 1.8, "url": "https://security.netapp.com/advisory/ntap-20210212-0007/" }, { "trust": 1.5, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35495" }, { "trust": 1.0, "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/4kok3qwsvoujwj54hvgifwnlwq5zy4s6/" }, { "trust": 0.8, "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/4kok3qwsvoujwj54hvgifwnlwq5zy4s6/" }, { "trust": 0.6, "url": "https://www.ibm.com/blogs/psirt/security-bulletin-multiple-vulnerabilities-in-gnu-binutils-affect-ibm-netezza-analytics/" }, { "trust": 0.6, "url": "https://vigilance.fr/vulnerability/binutils-null-pointer-dereference-via-bfd-pef-parse-symbols-34254" }, { "trust": 0.6, "url": 
"https://www.ibm.com/blogs/psirt/security-bulletin-multiple-vulnerabilities-in-gnu-binutils-affect-ibm-netezza-analytics-for-nps/" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/163455/gentoo-linux-security-advisory-202107-24.html" }, { "trust": 0.6, "url": "https://www.ibm.com/blogs/psirt/security-bulletin-multiple-vulnerabilities-in-gnu-binutils-affect-ibm-netezza-performance-server/" }, { "trust": 0.1, "url": "https://cwe.mitre.org/data/definitions/476.html" }, { "trust": 0.1, "url": "https://github.com/live-hack-cve/cve-2020-35495" }, { "trust": 0.1, "url": "https://nvd.nist.gov" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-19599" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-9071" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-9077" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35493" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-9073" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-9072" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35448" }, { "trust": 0.1, "url": "https://security.gentoo.org/" }, { "trust": 0.1, "url": "https://creativecommons.org/licenses/by-sa/2.5" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-9074" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35507" }, { "trust": 0.1, "url": "https://bugs.gentoo.org." 
}, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-9070" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35496" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-9076" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-9075" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35494" } ], "sources": [ { "db": "VULHUB", "id": "VHN-377691" }, { "db": "VULMON", "id": "CVE-2020-35495" }, { "db": "JVNDB", "id": "JVNDB-2020-015127" }, { "db": "PACKETSTORM", "id": "163455" }, { "db": "NVD", "id": "CVE-2020-35495" }, { "db": "CNNVD", "id": "CNNVD-202101-085" } ] }, "sources": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "VULHUB", "id": "VHN-377691" }, { "db": "VULMON", "id": "CVE-2020-35495" }, { "db": "JVNDB", "id": "JVNDB-2020-015127" }, { "db": "PACKETSTORM", "id": "163455" }, { "db": "NVD", "id": "CVE-2020-35495" }, { "db": "CNNVD", "id": "CNNVD-202101-085" } ] }, "sources_release_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2021-01-04T00:00:00", "db": "VULHUB", "id": "VHN-377691" }, { "date": "2021-01-04T00:00:00", "db": "VULMON", "id": "CVE-2020-35495" }, { "date": "2021-09-10T00:00:00", "db": "JVNDB", "id": "JVNDB-2020-015127" }, { "date": "2021-07-11T12:01:11", "db": "PACKETSTORM", "id": "163455" }, { "date": "2021-01-04T15:15:13.667000", "db": "NVD", "id": "CVE-2020-35495" }, { "date": "2021-01-04T00:00:00", "db": "CNNVD", "id": "CNNVD-202101-085" } ] }, "sources_update_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2022-09-02T00:00:00", "db": "VULHUB", "id": "VHN-377691" }, { "date": "2022-09-02T00:00:00", "db": "VULMON", "id": "CVE-2020-35495" }, { "date": "2021-09-10T07:56:00", "db": "JVNDB", "id": 
"JVNDB-2020-015127" }, { "date": "2023-11-07T03:21:55.620000", "db": "NVD", "id": "CVE-2020-35495" }, { "date": "2022-09-05T00:00:00", "db": "CNNVD", "id": "CNNVD-202101-085" } ] }, "threat_type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/threat_type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "local", "sources": [ { "db": "CNNVD", "id": "CNNVD-202101-085" } ], "trust": 0.6 }, "title": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/title#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "binutils\u00a0 In \u00a0NULL\u00a0 Pointer dereference vulnerability", "sources": [ { "db": "JVNDB", "id": "JVNDB-2020-015127" } ], "trust": 0.8 }, "type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "code problem", "sources": [ { "db": "CNNVD", "id": "CNNVD-202101-085" } ], "trust": 0.6 } }
var-202105-1459
Vulnerability from variot
A flaw was found in libwebp in versions before 1.0.1. An out-of-bounds read was found in the function ChunkAssignData. The highest threat from this vulnerability is to data confidentiality and to service availability. In short, libwebp is vulnerable to an out-of-bounds read through which information may be disclosed and a denial-of-service (DoS) state may be triggered. libwebp is an encoding and decoding library for the WebP image format. Bugs fixed (https://bugzilla.redhat.com/):
1944888 - CVE-2021-21409 netty: Request smuggling via content-length header
2004133 - CVE-2021-37136 netty-codec: Bzip2Decoder doesn't allow setting size restrictions for decompressed data
2004135 - CVE-2021-37137 netty-codec: SnappyFrameDecoder doesn't restrict chunk length and may buffer skippable chunks in an unnecessary way
2030932 - CVE-2021-44228 log4j-core: Remote code execution in Log4j 2.x when logs contain an attacker-controlled string value
- JIRA issues fixed (https://issues.jboss.org/):
LOG-1971 - Applying cluster state is causing elasticsearch to hit an issue and become unusable
- Summary:
The Migration Toolkit for Containers (MTC) 1.6.3 is now available. Description:
The Migration Toolkit for Containers (MTC) enables you to migrate Kubernetes resources, persistent volume data, and internal container images between OpenShift Container Platform clusters, using the MTC web console or the Kubernetes API. Solution:
For details on how to install and use MTC, refer to:
https://docs.openshift.com/container-platform/latest/migration_toolkit_for_containers/installing-mtc.html
- Bugs fixed (https://bugzilla.redhat.com/):
2019088 - "MigrationController" CR displays syntax error when unquiescing applications 2021666 - Route name longer than 63 characters causes direct volume migration to fail 2021668 - "MigrationController" CR ignores the "cluster_subdomain" value for direct volume migration routes 2022017 - CVE-2021-3948 mig-controller: incorrect namespaces handling may lead to not authorized usage of Migration Toolkit for Containers (MTC) 2024966 - Manifests not used by Operator Lifecycle Manager must be removed from the MTC 1.6 Operator image 2027196 - "migration-controller" pod goes into "CrashLoopBackoff" state if an invalid registry route is entered on the "Clusters" page of the web console 2027382 - "Copy oc describe/oc logs" window does not close automatically after timeout 2028841 - "rsync-client" container fails during direct volume migration with "Address family not supported by protocol" error 2031793 - "migration-controller" pod goes into "CrashLoopBackOff" state if "MigPlan" CR contains an invalid "includedResources" resource 2039852 - "migration-controller" pod goes into "CrashLoopBackOff" state if "MigPlan" CR contains an invalid "destMigClusterRef" or "srcMigClusterRef"
- Summary:
An update is now available for OpenShift Logging 5.3. Description:
Openshift Logging Bug Fix Release (5.3.0)
Security Fix(es):
- golang: x/net/html: infinite loop in ParseFragment (CVE-2021-33194)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section. Bugs fixed (https://bugzilla.redhat.com/):
1963232 - CVE-2021-33194 golang: x/net/html: infinite loop in ParseFragment
- JIRA issues fixed (https://issues.jboss.org/):
LOG-1168 - Disable hostname verification in syslog TLS settings
LOG-1235 - Using HTTPS without a secret does not translate into the correct 'scheme' value in Fluentd
LOG-1375 - ssl_ca_cert should be optional
LOG-1378 - CLO should support sasl_plaintext(Password over http)
LOG-1392 - In fluentd config, flush_interval can't be set with flush_mode=immediate
LOG-1494 - Syslog output is serializing json incorrectly
LOG-1555 - Fluentd logs emit transaction failed: error_class=NoMethodError while forwarding to external syslog server
LOG-1575 - Rejected by Elasticsearch and unexpected json-parsing
LOG-1735 - Regression introducing flush_at_shutdown
LOG-1774 - The collector logs should be excluded in fluent.conf
LOG-1776 - fluentd total_limit_size sets value beyond available space
LOG-1822 - OpenShift Alerting Rules Style-Guide Compliance
LOG-1859 - CLO Should not error and exit early on missing ca-bundle when cluster wide proxy is not enabled
LOG-1862 - Unsupported kafka parameters when enabled Kafka SASL
LOG-1903 - Fix the Display of ClusterLogging type in OLM
LOG-1911 - CLF API changes to Opt-in to multiline error detection
LOG-1918 - Alert FluentdNodeDown always firing
LOG-1939 - Opt-in multiline detection breaks cloudwatch forwarding
- -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512
Debian Security Advisory DSA-4930-1 security@debian.org https://www.debian.org/security/ Moritz Muehlenhoff June 10, 2021 https://www.debian.org/security/faq
Package : libwebp CVE ID : CVE-2018-25009 CVE-2018-25010 CVE-2018-25011 CVE-2018-25013 CVE-2018-25014 CVE-2020-36328 CVE-2020-36329 CVE-2020-36330 CVE-2020-36331 CVE-2020-36332
Multiple vulnerabilities were discovered in libwebp, the implementation of the WebP image format, which could result in denial of service, memory disclosure or potentially the execution of arbitrary code if malformed images are processed.
For the stable distribution (buster), these problems have been fixed in version 0.6.1-2+deb10u1.
We recommend that you upgrade your libwebp packages.
For the detailed security status of libwebp please refer to its security tracker page at: https://security-tracker.debian.org/tracker/libwebp
Further information about Debian Security Advisories, how to apply these updates to your system and frequently asked questions can be found at: https://www.debian.org/security/
Mailing list: debian-security-announce@lists.debian.org -----BEGIN PGP SIGNATURE-----
-----END PGP SIGNATURE----- . -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
====================================================================
Red Hat Security Advisory
Synopsis: Important: OpenShift Container Platform 4.11.0 bug fix and security update Advisory ID: RHSA-2022:5069-01 Product: Red Hat OpenShift Enterprise Advisory URL: https://access.redhat.com/errata/RHSA-2022:5069 Issue date: 2022-08-10 CVE Names: CVE-2018-25009 CVE-2018-25010 CVE-2018-25012 CVE-2018-25013 CVE-2018-25014 CVE-2018-25032 CVE-2019-5827 CVE-2019-13750 CVE-2019-13751 CVE-2019-17594 CVE-2019-17595 CVE-2019-18218 CVE-2019-19603 CVE-2019-20838 CVE-2020-13435 CVE-2020-14155 CVE-2020-17541 CVE-2020-19131 CVE-2020-24370 CVE-2020-28493 CVE-2020-35492 CVE-2020-36330 CVE-2020-36331 CVE-2020-36332 CVE-2021-3481 CVE-2021-3580 CVE-2021-3634 CVE-2021-3672 CVE-2021-3695 CVE-2021-3696 CVE-2021-3697 CVE-2021-3737 CVE-2021-4115 CVE-2021-4156 CVE-2021-4189 CVE-2021-20095 CVE-2021-20231 CVE-2021-20232 CVE-2021-23177 CVE-2021-23566 CVE-2021-23648 CVE-2021-25219 CVE-2021-31535 CVE-2021-31566 CVE-2021-36084 CVE-2021-36085 CVE-2021-36086 CVE-2021-36087 CVE-2021-38185 CVE-2021-38593 CVE-2021-40528 CVE-2021-41190 CVE-2021-41617 CVE-2021-42771 CVE-2021-43527 CVE-2021-43818 CVE-2021-44225 CVE-2021-44906 CVE-2022-0235 CVE-2022-0778 CVE-2022-1012 CVE-2022-1215 CVE-2022-1271 CVE-2022-1292 CVE-2022-1586 CVE-2022-1621 CVE-2022-1629 CVE-2022-1706 CVE-2022-1729 CVE-2022-2068 CVE-2022-2097 CVE-2022-21698 CVE-2022-22576 CVE-2022-23772 CVE-2022-23773 CVE-2022-23806 CVE-2022-24407 CVE-2022-24675 CVE-2022-24903 CVE-2022-24921 CVE-2022-25313 CVE-2022-25314 CVE-2022-26691 CVE-2022-26945 CVE-2022-27191 CVE-2022-27774 CVE-2022-27776 CVE-2022-27782 CVE-2022-28327 CVE-2022-28733 CVE-2022-28734 CVE-2022-28735 CVE-2022-28736 CVE-2022-28737 CVE-2022-29162 CVE-2022-29810 CVE-2022-29824 CVE-2022-30321 CVE-2022-30322 CVE-2022-30323 CVE-2022-32250 ==================================================================== 1. Summary:
Red Hat OpenShift Container Platform release 4.11.0 is now available with updates to packages and images that fix several bugs and add enhancements.
This release includes a security update for Red Hat OpenShift Container Platform 4.11.
Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Description:
Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.
This advisory contains the container images for Red Hat OpenShift Container Platform 4.11.0. See the following advisory for the RPM packages for this release:
https://access.redhat.com/errata/RHSA-2022:5068
Space precludes documenting all of the container images in this advisory. See the following Release Notes documentation, which will be updated shortly for this release, for details about these changes:
https://docs.openshift.com/container-platform/4.11/release_notes/ocp-4-11-release-notes.html
Security Fix(es):
- go-getter: command injection vulnerability (CVE-2022-26945)
- go-getter: unsafe download (issue 1 of 3) (CVE-2022-30321)
- go-getter: unsafe download (issue 2 of 3) (CVE-2022-30322)
- go-getter: unsafe download (issue 3 of 3) (CVE-2022-30323)
- nanoid: Information disclosure via valueOf() function (CVE-2021-23566)
- sanitize-url: XSS (CVE-2021-23648)
- minimist: prototype pollution (CVE-2021-44906)
- node-fetch: exposure of sensitive information to an unauthorized actor (CVE-2022-0235)
- prometheus/client_golang: Denial of service using InstrumentHandlerCounter (CVE-2022-21698)
- golang: crash in a golang.org/x/crypto/ssh server (CVE-2022-27191)
- go-getter: writes SSH credentials into logfile, exposing sensitive credentials to local uses (CVE-2022-29810)
- opencontainers: OCI manifest and index parsing confusion (CVE-2021-41190)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
You may download the oc tool and use it to inspect release image metadata as follows:
(For x86_64 architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.11.0-x86_64
The image digest is sha256:300bce8246cf880e792e106607925de0a404484637627edf5f517375517d54a4
(For aarch64 architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.11.0-aarch64
The image digest is sha256:29fa8419da2afdb64b5475d2b43dad8cc9205e566db3968c5738e7a91cf96dfe
(For s390x architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.11.0-s390x
The image digest is sha256:015d6180238b4024d11dfef6751143619a0458eccfb589f2058ceb1a6359dd46
(For ppc64le architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.11.0-ppc64le
The image digest is sha256:5052f8d5597c6656ca9b6bfd3de521504c79917aa80feb915d3c8546241f86ca
All OpenShift Container Platform 4.11 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift Console or the CLI oc command. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.11/updating/updating-cluster-cli.html
- Solution:
For OpenShift Container Platform 4.11 see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this asynchronous errata update:
https://docs.openshift.com/container-platform/4.11/release_notes/ocp-4-11-release-notes.html
Details on how to access this content are available at https://docs.openshift.com/container-platform/4.11/updating/updating-cluster-cli.html
- Bugs fixed (https://bugzilla.redhat.com/):
1817075 - MCC & MCO don't free leader leases during shut down -> 10 minutes of leader election timeouts
1822752 - cluster-version operator stops applying manifests when blocked by a precondition check
1823143 - oc adm release extract --command, --tools doesn't pull from localregistry when given a localregistry/image
1858418 - [OCPonRHV] OpenShift installer fails when Blank template is missing in oVirt/RHV
1859153 - [AWS] An IAM error occurred occasionally during the installation phase: Invalid IAM Instance Profile name
1896181 - [ovirt] install fails: due to terraform error "Cannot run VM. VM is being updated" on vm resource
1898265 - [OCP 4.5][AWS] Installation failed: error updating LB Target Group
1902307 - [vSphere] cloud labels management via cloud provider makes nodes not ready
1905850 - oc adm policy who-can failed to check the operatorcondition/status resource
1916279 - [OCPonRHV] Sometimes terraform installation fails on -failed to fetch Cluster(another terraform bug)
1917898 - [ovirt] install fails: due to terraform error "Tag not matched: expect but got " on vm resource
1918005 - [vsphere] If there are multiple port groups with the same name installation fails
1918417 - IPv6 errors after exiting crictl
1918690 - Should update the KCM resource-graph timely with the latest configure
1919980 - oVirt installer fails due to terraform error "Failed to wait for Templte(...) to become ok"
1921182 - InspectFailed: kubelet Failed to inspect image: rpc error: code = DeadlineExceeded desc = context deadline exceeded
1923536 - Image pullthrough does not pass 429 errors back to capable clients
1926975 - [aws-c2s] kube-apiserver crashloops due to missing cloud config
1928932 - deploy/route_crd.yaml in openshift/router uses deprecated v1beta1 CRD API
1932812 - Installer uses the terraform-provider in the Installer's directory if it exists
1934304 - MemoryPressure Top Pod Consumers seems to be 2x expected value
1943937 - CatalogSource incorrect parsing validation
1944264 - [ovn] CNO should gracefully terminate OVN databases
1944851 - List of ingress routes not cleaned up when routers no longer exist - take 2
1945329 - In k8s 1.21 bump conntrack 'should drop INVALID conntrack entries' tests are disabled
1948556 - Cannot read property 'apiGroup' of undefined error viewing operator CSV
1949827 - Kubelet bound to incorrect IPs, referring to incorrect NICs in 4.5.x
1957012 - Deleting the KubeDescheduler CR does not remove the corresponding deployment or configmap
1957668 - oc login does not show link to console
1958198 - authentication operator takes too long to pick up a configuration change
1958512 - No 1.25 shown in REMOVEDINRELEASE for apis audited with k8s.io/removed-release 1.25 and k8s.io/deprecated true
1961233 - Add CI test coverage for DNS availability during upgrades
1961844 - baremetal ClusterOperator installed by CVO does not have relatedObjects
1965468 - [OSP] Delete volume snapshots based on cluster ID in their metadata
1965934 - can not get new result with "Refresh off" if click "Run queries" again
1965969 - [aws] the public hosted zone id is not correct in the destroy log, while destroying a cluster which is using BYO private hosted zone.
1968253 - GCP CSI driver can provision volume with access mode ROX
1969794 - [OSP] Document how to use image registry PVC backend with custom availability zones
1975543 - [OLM] Remove stale cruft installed by CVO in earlier releases
1976111 - [tracker] multipathd.socket is missing start conditions
1976782 - Openshift registry starts to segfault after S3 storage configuration
1977100 - Pod failed to start with message "set CPU load balancing: readdirent /proc/sys/kernel/sched_domain/cpu66/domain0: no such file or directory"
1978303 - KAS pod logs show: [SHOULD NOT HAPPEN] ...failed to convert new object...CertificateSigningRequest) to smd typed: .status.conditions: duplicate entries for key [type=\"Approved\"]
1978798 - [Network Operator] Upgrade: The configuration to enable network policy ACL logging is missing on the cluster upgraded from 4.7->4.8
1979671 - Warning annotation for pods with cpu requests or limits on single-node OpenShift cluster without workload partitioning
1982737 - OLM does not warn on invalid CSV
1983056 - IP conflict while recreating Pod with fixed name
1984785 - LSO CSV does not contain disconnected annotation
1989610 - Unsupported data types should not be rendered on operand details page
1990125 - co/image-registry is degrade because ImagePrunerDegraded: Job has reached the specified backoff limit
1990384 - 502 error on "Observe -> Alerting" UI after disabled local alertmanager
1992553 - all the alert rules' annotations "summary" and "description" should comply with the OpenShift alerting guidelines
1994117 - Some hardcodes are detected at the code level in orphaned code
1994820 - machine controller doesn't send vCPU quota failed messages to cluster install logs
1995953 - Ingresscontroller change the replicas to scaleup first time will be rolling update for all the ingress pods
1996544 - AWS region ap-northeast-3 is missing in installer prompt
1996638 - Helm operator manager container restart when CR is creating&deleting
1997120 - test_recreate_pod_in_namespace fails - Timed out waiting for namespace
1997142 - OperatorHub: Filtering the OperatorHub catalog is extremely slow
1997704 - [osp][octavia lb] given loadBalancerIP is ignored when creating a LoadBalancer type svc
1999325 - FailedMount MountVolume.SetUp failed for volume "kube-api-access" : object "openshift-kube-scheduler"/"kube-root-ca.crt" not registered
1999529 - Must gather fails to gather logs for all the namespace if server doesn't have volumesnapshotclasses resource
1999891 - must-gather collects backup data even when Pods fails to be created
2000653 - Add hypershift namespace to exclude namespaces list in descheduler configmap
2002009 - IPI Baremetal, qemu-convert takes to long to save image into drive on slow/large disks
2002602 - Storageclass creation page goes blank when "Enable encryption" is clicked if there is a syntax error in the configmap
2002868 - Node exporter not able to scrape OVS metrics
2005321 - Web Terminal is not opened on Stage of DevSandbox when terminal instance is not created yet
2005694 - Removing proxy object takes up to 10 minutes for the changes to propagate to the MCO
2006067 - Objects are not valid as a React child
2006201 - ovirt-csi-driver-node pods are crashing intermittently
2007246 - Openshift Container Platform - Ingress Controller does not set allowPrivilegeEscalation in the router deployment
2007340 - Accessibility issues on topology - list view
2007611 - TLS issues with the internal registry and AWS S3 bucket
2007647 - oc adm release info --changes-from does not show changes in repos that squash-merge
2008486 - Double scroll bar shows up on dragging the task quick search to the bottom
2009345 - Overview page does not load from openshift console for some set of users after upgrading to 4.7.19
2009352 - Add image-registry usage metrics to telemeter
2009845 - Respect overrides changes during installation
2010361 - OpenShift Alerting Rules Style-Guide Compliance
2010364 - OpenShift Alerting Rules Style-Guide Compliance
2010393 - [sig-arch][Late] clients should not use APIs that are removed in upcoming releases [Suite:openshift/conformance/parallel]
2011525 - Rate-limit incoming BFD to prevent ovn-controller DoS
2011895 - Details about cloud errors are missing from PV/PVC errors
2012111 - LSO still try to find localvolumeset which is already deleted
2012969 - need to figure out why osupdatedstart to reboot is zero seconds
2013144 - Developer catalog category links could not be open in a new tab (sharing and open a deep link works fine)
2013461 - Import deployment from Git with s2i expose always port 8080 (Service and Pod template, not Route) if another Route port is selected by the user
2013734 - unable to label downloads route in openshift-console namespace
2013822 - ensure that the container-tools content comes from the RHAOS plashets
2014161 - PipelineRun logs are delayed and stuck on a high log volume
2014240 - Image registry uses ICSPs only when source exactly matches image
2014420 - Topology page is crashed
2014640 - Cannot change storage class of boot disk when cloning from template
2015023 - Operator objects are re-created even after deleting it
2015042 - Adding a template from the catalog creates a secret that is not owned by the TemplateInstance
2015356 - Different status shows on VM list page and details page
2015375 - PVC creation for ODF/IBM Flashsystem shows incorrect types
2015459 - [azure][openstack]When image registry configure an invalid proxy, registry pods are CrashLoopBackOff
2015800 - [IBM]Shouldn't change status.storage.bucket and status.storage.resourceKeyCRN when update sepc.stroage,ibmcos with invalid value
2016425 - Adoption controller generating invalid metadata.Labels for an already adopted Subscription resource
2016534 - externalIP does not work when egressIP is also present
2017001 - Topology context menu for Serverless components always open downwards
2018188 - VRRP ID conflict between keepalived-ipfailover and cluster VIPs
2018517 - [sig-arch] events should not repeat pathologically expand_less failures - s390x CI
2019532 - Logger object in LSO does not log source location accurately
2019564 - User settings resources (ConfigMap, Role, RB) should be deleted when a user is deleted
2020483 - Parameter $auto_interval_period is in Period drop-down list
2020622 - e2e-aws-upi and e2e-azure-upi jobs are not working
2021041 - [vsphere] Not found TagCategory when destroying ipi cluster
2021446 - openshift-ingress-canary is not reporting DEGRADED state, even though the canary route is not available and accessible
2022253 - Web terminal view is broken
2022507 - Pods stuck in OutOfpods state after running cluster-density
2022611 - Remove BlockPools(no use case) and Object(redundant with Overview) tab on the storagesystem page for NooBaa only and remove BlockPools tab for External mode deployment
2022745 - Cluster reader is not able to list NodeNetwork objects
2023295 - Must-gather tool gathering data from custom namespaces.
2023691 - ClusterIP internalTrafficPolicy does not work for ovn-kubernetes
2024427 - oc completion zsh doesn't auto complete
2024708 - The form for creating operational CRs is badly rendering filed names ("obsoleteCPUs" -> "Obsolete CP Us" )
2024821 - [Azure-File-CSI] need more clear info when requesting pvc with volumeMode Block
2024938 - CVE-2021-41190 opencontainers: OCI manifest and index parsing confusion
2025624 - Ingress router metrics endpoint serving old certificates after certificate rotation
2026356 - [IPI on Azure] The bootstrap machine type should be same as master
2026461 - Completed pods in Openshift cluster not releasing IP addresses and results in err: range is full unless manually deleted
2027603 - [UI] Dropdown doesn't close on it's own after arbiter zone selection on 'Capacity and nodes' page
2027613 - Users can't silence alerts from the dev console
2028493 - OVN-migration failed - ovnkube-node: error waiting for node readiness: timed out waiting for the condition
2028532 - noobaa-pg-db-0 pod stuck in Init:0/2
2028821 - Misspelled label in ODF management UI - MCG performance view
2029438 - Bootstrap node cannot resolve api-int because NetworkManager replaces resolv.conf
2029470 - Recover from suddenly appearing old operand revision WAS: kube-scheduler-operator test failure: Node's not achieving new revision
2029797 - Uncaught exception: ResizeObserver loop limit exceeded
2029835 - CSI migration for vSphere: Inline-volume tests failing
2030034 - prometheusrules.openshift.io: dial tcp: lookup prometheus-operator.openshift-monitoring.svc on 172.30.0.10:53: no such host
2030530 - VM created via customize wizard has single quotation marks surrounding its password
2030733 - wrong IP selected to connect to the nodes when ExternalCloudProvider enabled
2030776 - e2e-operator always uses quay master images during presubmit tests
2032559 - CNO allows migration to dual-stack in unsupported configurations
2032717 - Unable to download ignition after coreos-installer install --copy-network
2032924 - PVs are not being cleaned up after PVC deletion
2033482 - [vsphere] two variables in tf are undeclared and get warning message during installation
2033575 - monitoring targets are down after the cluster run for more than 1 day
2033711 - IBM VPC operator needs e2e csi tests for ibmcloud
2033862 - MachineSet is not scaling up due to an OpenStack error trying to create multiple ports with the same MAC address
2034147 - OpenShift VMware IPI Installation fails with Resource customization when corespersocket is unset and vCPU count is not a multiple of 4
2034296 - Kubelet and Crio fails to start during upgrde to 4.7.37
2034411 - [Egress Router] No NAT rules for ipv6 source and destination created in ip6tables-save
2034688 - Allow Prometheus/Thanos to return 401 or 403 when the request isn't authenticated
2034958 - [sig-network] Conntrack should be able to preserve UDP traffic when initial unready endpoints get ready
2035005 - MCD is not always removing in progress taint after a successful update
2035334 - [RFE] [OCPonRHV] Provision machines with preallocated disks
2035899 - Operator-sdk run bundle doesn't support arm64 env
2036202 - Bump podman to >= 3.3.0 so that setup of multiple credentials for a single registry which can be distinguished by their path will work
2036594 - [MAPO] Machine goes to failed state due to a momentary error of the cluster etcd
2036948 - SR-IOV Network Device Plugin should handle offloaded VF instead of supporting only PF
2037190 - dns operator status flaps between True/False/False and True/True/(False|True) after updating dnses.operator.openshift.io/default
2037447 - Ingress Operator is not closing TCP connections.
2037513 - I/O metrics from the Kubernetes/Compute Resources/Cluster Dashboard show as no datapoints found
2037542 - Pipeline Builder footer is not sticky and yaml tab doesn't use full height
2037610 - typo for the Terminated message from thanos-querier pod description info
2037620 - Upgrade playbook should quit directly when trying to upgrade RHEL-7 workers to 4.10
2037625 - AppliedClusterResourceQuotas can not be shown on project overview
2037626 - unable to fetch ignition file when scaleup rhel worker nodes on cluster enabled Tang disk encryption
2037628 - Add test id to kms flows for automation
2037721 - PodDisruptionBudgetAtLimit alert fired in SNO cluster
2037762 - Wrong ServiceMonitor definition is causing failure during Prometheus configuration reload and preventing changes from being applied
2037841 - [RFE] use /dev/ptp_hyperv on Azure/AzureStack
2038115 - Namespace and application bar is not sticky anymore
2038244 - Import from git ignore the given servername and could not validate On-Premises GitHub and BitBucket installations
2038405 - openshift-e2e-aws-workers-rhel-workflow in CI step registry broken
2038774 - IBM-Cloud OVN IPsec fails, IKE UDP ports and ESP protocol not in security group
2039135 - the error message is not clear when using "opm index prune" to prune a file-based index image
2039161 - Note about token for encrypted PVCs should be removed when only cluster wide encryption checkbox is selected
2039253 - ovnkube-node crashes on duplicate endpoints
2039256 - Domain validation fails when TLD contains a digit.
2039277 - Topology list view items are not highlighted on keyboard navigation
2039462 - Application tab in User Preferences dropdown menus are too wide.
2039477 - validation icon is missing from Import from git
2039589 - The toolbox command always ignores [command] the first time
2039647 - Some developer perspective links are not deep-linked causes developer to sometimes delete/modify resources in the wrong project
2040180 - Bug when adding a new table panel to a dashboard for OCP UI with only one value column
2040195 - Ignition fails to enable systemd units with backslash-escaped characters in their names
2040277 - ThanosRuleNoEvaluationFor10Intervals alert description is wrong
2040488 - OpenShift-Ansible BYOH Unit Tests are Broken
2040635 - CPU Utilisation is negative number for "Kubernetes / Compute Resources / Cluster" dashboard
2040654 - 'oc adm must-gather -- some_script' should exit with same non-zero code as the failed 'some_script' exits
2040779 - Nodeport svc not accessible when the backend pod is on a window node
2040933 - OCP 4.10 nightly build will fail to install if multiple NICs are defined on KVM nodes
2041133 - 'oc explain route.status.ingress.conditions' shows type 'Currently only Ready' but actually is 'Admitted'
2041454 - Garbage values accepted for --reference-policy in oc import-image without any error
2041616 - Ingress operator tries to manage DNS of additional ingresscontrollers that are not under clusters basedomain, which can't work
2041769 - Pipeline Metrics page not showing data for normal user
2041774 - Failing git detection should not recommend Devfiles as import strategy
2041814 - The KubeletConfigController wrongly process multiple confs for a pool
2041940 - Namespace pre-population not happening till a Pod is created
2042027 - Incorrect feedback for "oc label pods --all"
2042348 - Volume ID is missing in output message when expanding volume which is not mounted.
2042446 - CSIWithOldVSphereHWVersion alert recurring despite upgrade to vmx-15
2042501 - use lease for leader election
2042587 - ocm-operator: Improve reconciliation of CA ConfigMaps
2042652 - Unable to deploy hw-event-proxy operator
2042838 - The status of container is not consistent on Container details and pod details page
2042852 - Topology toolbars are unaligned to other toolbars
2042999 - A pod cannot reach kubernetes.default.svc.cluster.local cluster IP
2043035 - Wrong error code provided when request contains invalid argument
2043068 - available of text disappears in Utilization item if x is 0
2043080 - openshift-installer intermittent failure on AWS with Error: InvalidVpcID.NotFound: The vpc ID 'vpc-123456789' does not exist
2043094 - ovnkube-node not deleting stale conntrack entries when endpoints go away
2043118 - Host should transition through Preparing when HostFirmwareSettings changed
2043132 - Add a metric when vsphere csi storageclass creation fails
2043314 - oc debug node does not meet compliance requirement
2043336 - Creating multi SriovNetworkNodePolicy cause the worker always be draining
2043428 - Address Alibaba CSI driver operator review comments
2043533 - Update ironic, inspector, and ironic-python-agent to latest bugfix release
2043672 - [MAPO] root volumes not working
2044140 - When 'oc adm upgrade --to-image ...' rejects an update as not recommended, it should mention --allow-explicit-upgrade
2044207 - [KMS] The data in the text box does not get cleared on switching the authentication method
2044227 - Test Managed cluster should only include cluster daemonsets that have maxUnavailable update of 10 or 33 percent fails
2044412 - Topology list misses separator lines and hover effect let the list jump 1px
2044421 - Topology list does not allow selecting an application group anymore
2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor
2044803 - Unify button text style on VM tabs
2044824 - Failing test in periodics: [sig-network] Services should respect internalTrafficPolicy=Local Pod and Node, to Pod (hostNetwork: true) [Feature:ServiceInternalTrafficPolicy] [Skipped:Network/OVNKubernetes] [Suite:openshift/conformance/parallel] [Suite:k8s]
2045065 - Scheduled pod has nodeName changed
2045073 - Bump golang and build images for local-storage-operator
2045087 - Failed to apply sriov policy on intel nics
2045551 - Remove enabled FeatureGates from TechPreviewNoUpgrade
2045559 - API_VIP moved when kube-api container on another master node was stopped
2045577 - [ocp 4.9 | ovn-kubernetes] ovsdb_idl|WARN|transaction error: {"details":"cannot delete Datapath_Binding row 29e48972-xxxx because of 2 remaining reference(s)","error":"referential integrity violation
2045872 - SNO: cluster-policy-controller failed to start due to missing serving-cert/tls.crt
2045880 - CVE-2022-21698 prometheus/client_golang: Denial of service using InstrumentHandlerCounter
2046133 - [MAPO]IPI proxy installation failed
2046156 - Network policy: preview of affected pods for non-admin shows empty popup
2046157 - Still uses pod-security.admission.config.k8s.io/v1alpha1 in admission plugin config
2046191 - Operator pod is missing correct qosClass and priorityClass
2046277 - openshift-installer intermittent failure on AWS with "Error: Provider produced inconsistent result after apply" when creating the module.vpc.aws_subnet.private_subnet[0] resource
2046319 - oc debug cronjob command failed with error "unable to extract pod template from type v1.CronJob".
2046435 - Better Devfile Import Strategy support in the 'Import from Git' flow
2046496 - Awkward wrapping of project toolbar on mobile
2046497 - Re-enable TestMetricsEndpoint test case in console operator e2e tests
2046498 - "All Projects" and "all applications" use different casing on topology page
2046591 - Auto-update boot source is not available while create new template from it
2046594 - "Requested template could not be found" while creating VM from user-created template
2046598 - Auto-update boot source size unit is byte on customize wizard
2046601 - Cannot create VM from template
2046618 - Start last run action should contain current user name in the started-by annotation of the PLR
2046662 - Should upgrade the go version to be 1.17 for example go operator memcached-operator
2047197 - Should upgrade the operator_sdk.util version to "0.4.0" for the "osdk_metric" module
2047257 - [CP MIGRATION] Node drain failure during control plane node migration
2047277 - Storage status is missing from status card of virtualization overview
2047308 - Remove metrics and events for master port offsets
2047310 - Running VMs per template card needs empty state when no VMs exist
2047320 - New route annotation to show another URL or hide topology URL decorator doesn't work for Knative Services
2047335 - 'oc get project' caused 'Observed a panic: cannot deep copy core.NamespacePhase' when AllRequestBodies is used
2047362 - Removing prometheus UI access breaks origin test
2047445 - ovs-configure mis-detecting the ipv6 status on IPv4 only cluster causing Deployment failure
2047670 - Installer should pre-check that the hosted zone is not associated with the VPC and throw the error message.
2047702 - Issue described on bug #2013528 reproduced: mapi_current_pending_csr is always set to 1 on OpenShift Container Platform 4.8
2047710 - [OVN] ovn-dbchecker CrashLoopBackOff and sbdb jsonrpc unix socket receive error
2047732 - [IBM]Volume is not deleted after destroy cluster
2047741 - openshift-installer intermittent failure on AWS with "Error: Provider produced inconsistent result after apply" when creating the module.masters.aws_network_interface.master[1] resource
2047790 - [sig-network][Feature:Router] The HAProxy router should override the route host for overridden domains with a custom value [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2047799 - release-openshift-ocp-installer-e2e-aws-upi-4.9
2047870 - Prevent redundant queries of BIOS settings in HostFirmwareController
2047895 - Fix architecture naming in oc adm release mirror for aarch64
2047911 - e2e: Mock CSI tests fail on IBM ROKS clusters
2047913 - [sig-network][Feature:Router] The HAProxy router should override the route host for overridden domains with a custom value [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2047925 - [FJ OCP4.10 Bug]: IRONIC_KERNEL_PARAMS does not contain coreos_kernel_params during iPXE boot
2047935 - [4.11] Bootimage bump tracker
2047998 - [alicloud] CCM deploys alibaba-cloud-controller-manager from quay.io/openshift/origin-
2048059 - Service Level Agreement (SLA) always show 'Unknown'
2048067 - [IPI on Alibabacloud] "Platform Provisioning Check" tells '"ap-southeast-6": enhanced NAT gateway is not supported', which seems false
2048186 - Image registry operator panics when finalizes config deletion
2048214 - Can not push images to image-registry when enabling KMS encryption in AlibabaCloud
2048219 - MetalLB: User should not be allowed add same bgp advertisement twice in BGP address pool
2048221 - Capitalization of titles in the VM details page is inconsistent.
2048222 - [AWS GovCloud] Cluster can not be installed on AWS GovCloud regions via terminal interactive UI.
2048276 - Cypress E2E tests fail due to a typo in test-cypress.sh
2048333 - prometheus-adapter becomes inaccessible during rollout
2048352 - [OVN] node does not recover after NetworkManager restart, NotReady and unreachable
2048442 - [KMS] UI does not have option to specify kube auth path and namespace for cluster wide encryption
2048451 - Custom serviceEndpoints in install-config are reported to be unreachable when environment uses a proxy
2048538 - Network policies are not implemented or updated by OVN-Kubernetes
2048541 - incorrect rbac check for install operator quick starts
2048563 - Leader election conventions for cluster topology
2048575 - IP reconciler cron job failing on single node
2048686 - Check MAC address provided on the install-config.yaml file
2048687 - All bare metal jobs are failing now due to End of Life of centos 8
2048793 - Many Conformance tests are failing in OCP 4.10 with Kuryr
2048803 - CRI-O seccomp profile out of date
2048824 - [IBMCloud] ibm-vpc-block-csi-node does not specify an update strategy, only resource requests, or priority class
2048841 - [ovn] Missing lr-policy-list and snat rules for egressip when new pods are added
2048955 - Alibaba Disk CSI Driver does not have CI
2049073 - AWS EFS CSI driver should use the trusted CA bundle when cluster proxy is configured
2049078 - Bond CNI: Failed to attach Bond NAD to pod
2049108 - openshift-installer intermittent failure on AWS with 'Error: Error waiting for NAT Gateway (nat-xxxxx) to become available'
2049117 - e2e-metal-ipi-serial-ovn-ipv6 is failing frequently
2049133 - oc adm catalog mirror throws 'missing signature key' error when using file://local/index
2049142 - Missing "app" label
2049169 - oVirt CSI driver should use the trusted CA bundle when cluster proxy is configured
2049234 - ImagePull fails with error "unable to pull manifest from example.com/busy.box:v5 invalid reference format"
2049410 - external-dns-operator creates provider section, even when not requested
2049483 - Sidepanel for Connectors/workloads in topology shows invalid tabs
2049613 - MTU migration on SDN IPv4 causes API alerts
2049671 - system:serviceaccount:openshift-cluster-csi-drivers:aws-ebs-csi-driver-operator trying to GET and DELETE /api/v1/namespaces/openshift-cluster-csi-drivers/configmaps/kube-cloud-config which does not exist
2049687 - superfluous apirequestcount entries in audit log
2049775 - cloud-provider-config change not applied when ExternalCloudProvider enabled
2049787 - (dummy bug) ovn-kubernetes ExternalTrafficPolicy still SNATs
2049832 - ContainerCreateError when trying to launch large (>500) numbers of pods across nodes
2049872 - cluster storage operator AWS credentialsrequest lacks KMS privileges
2049889 - oc new-app --search nodejs warns about access to sample content on quay.io
2050005 - Plugin module IDs can clash with console module IDs causing runtime errors
2050011 - Observe > Metrics page: Timespan text input and dropdown do not align
2050120 - Missing metrics in kube-state-metrics
2050146 - Installation on PSI fails with: 'openstack platform does not have the required standard-attr-tag network extension'
2050173 - [aws-ebs-csi-driver] Merge upstream changes since v1.2.0
2050180 - [aws-efs-csi-driver] Merge upstream changes since v1.3.2
2050300 - panic in cluster-storage-operator while updating status
2050332 - Malformed ClusterClaim lifetimes cause the clusterclaims-controller to silently fail to reconcile all clusterclaims
2050335 - azure-disk failed to mount with error special device does not exist
2050345 - alert data for burn budget needs to be updated to prevent regression
2050407 - revert "force cert rotation every couple days for development" in 4.11
2050409 - ip-reconcile job is failing consistently
2050452 - Update osType and hardware version used by RHCOS OVA to indicate it is a RHEL 8 guest
2050466 - machine config update with invalid container runtime config should be more robust
2050637 - Blog Link not re-directing to the intended website in the last modal in the Dev Console Onboarding Tour
2050698 - After upgrading the cluster the console still show 0 of N, 0% progress for worker nodes
2050707 - up test for prometheus pod look to far in the past
2050767 - Vsphere upi tries to access vsphere during manifests generation phase
2050853 - CVE-2021-23566 nanoid: Information disclosure via valueOf() function
2050882 - Crio appears to be coredumping in some scenarios
2050902 - not all resources created during import have common labels
2050946 - Cluster-version operator fails to notice TechPreviewNoUpgrade featureSet change after initialization-lookup error
2051320 - Need to build ose-aws-efs-csi-driver-operator-bundle-container image for 4.11
2051333 - [aws] records in public hosted zone and BYO private hosted zone were not deleted.
2051377 - Unable to switch vfio-pci to netdevice in policy
2051378 - Template wizard is crashed when there are no templates existing
2051423 - migrate loadbalancers from amphora to ovn not working
2051457 - [RFE] PDB for cloud-controller-manager to avoid going too many replicas down
2051470 - prometheus: Add validations for relabel configs
2051558 - RoleBinding in project without subject is causing "Project access" page to fail
2051578 - Sort is broken for the Status and Version columns on the Cluster Settings > ClusterOperators page
2051583 - sriov must-gather image doesn't work
2051593 - Summary Interval Hardcoded in PTP Operator if Set in the Global Body Instead of Command Line
2051611 - Remove Check which enforces summary_interval must match logSyncInterval
2051642 - Remove "Tech-Preview" Label for the Web Terminal GA release
2051657 - Remove 'Tech preview' from minimal deployment Storage System creation
2051718 - MetaLLB: Validation Webhook: BGPPeer hold time is allowed to be set to less than 3s
2051722 - MetalLB: BGPPeer object does not have ability to set ebgpMultiHop
2051881 - [vSphere CSI driver Operator] RWX volumes counts metrics vsphere_rwx_volumes_total not valid
2051954 - Allow changing of policyAuditConfig ratelimit post-deployment
2051969 - Need to build local-storage-operator-metadata-container image for 4.11
2051985 - An APIRequestCount without dots in the name can cause a panic
2052016 - MetalLB: Webhook Validation: Two BGPPeers instances can have different router ID set.
2052034 - Can't start correct debug pod using pod definition yaml in OCP 4.8
2052055 - Whereabouts should implement client-go 1.22+
2052056 - Static pod installer should throttle creating new revisions
2052071 - local storage operator metrics target down after upgrade
2052095 - Infinite OAuth redirect loop post-upgrade to 4.10.0-rc.1
2052270 - FSyncControllerDegraded has "treshold" -> "threshold" typos
2052309 - [IBM Cloud] ibm-vpc-block-csi-controller does not specify an update strategy, priority class, or only resource requests
2052332 - Probe failures and pod restarts during 4.7 to 4.8 upgrade
2052393 - Failed to scaleup RHEL machine against OVN cluster due to jq tool is required by configure-ovs.sh
2052398 - 4.9 to 4.10 upgrade fails for ovnkube-masters
2052415 - Pod density test causing problems when using kube-burner
2052513 - Failing webhooks will block an upgrade to 4.10 mid-way through the upgrade.
2052578 - Create new app from a private git repository using 'oc new app' with basic auth does not work.
2052595 - Remove dev preview badge from IBM FlashSystem deployment windows
2052618 - Node reboot causes duplicate persistent volumes
2052671 - Add Sprint 214 translations
2052674 - Remove extra spaces
2052700 - kube-controller-manger should use configmap lease
2052701 - kube-scheduler should use configmap lease
2052814 - go fmt fails in OSM after migration to go 1.17
2052840 - IMAGE_BUILDER=docker make test-e2e-operator-ocp runs with podman instead of docker
2052953 - Observe dashboard always opens for last viewed workload instead of the selected one
2052956 - Installing virtualization operator duplicates the first action on workloads in topology
2052975 - High cpu load on Juniper Qfx5120 Network switches after upgrade to Openshift 4.8.26
2052986 - Console crashes when Mid cycle hook in Recreate strategy(edit deployment/deploymentConfig) selects Lifecycle strategy as "Tags the current image as an image stream tag if the deployment succeeds"
2053006 - [ibm]Operator storage PROGRESSING and DEGRADED is true during fresh install for ocp4.11
2053104 - [vSphere CSI driver Operator] hw_version_total metric update wrong value after upgrade nodes hardware version from vmx-13 to vmx-15
2053112 - nncp status is unknown when nnce is Progressing
2053118 - nncp Available condition reason should be exposed in oc get
2053168 - Ensure the core dynamic plugin SDK package has correct types and code
2053205 - ci-openshift-cluster-network-operator-master-e2e-agnostic-upgrade is failing most of the time
2053304 - Debug terminal no longer works in admin console
2053312 - requestheader IDP test doesn't wait for cleanup, causing high failure rates
2053334 - rhel worker scaleup playbook failed because missing some dependency of podman
2053343 - Cluster Autoscaler not scaling down nodes which seem to qualify for scale-down
2053491 - nmstate interprets interface names as float64 and subsequently crashes on state update
2053501 - Git import detection does not happen for private repositories
2053582 - inability to detect static lifecycle failure
2053596 - [IBM Cloud] Storage IOPS limitations and lack of IPI ETCD deployment options trigger leader election during cluster initialization
2053609 - LoadBalancer SCTP service leaves stale conntrack entry that causes issues if service is recreated
2053622 - PDB warning alert when CR replica count is set to zero
2053685 - Topology performance: Immutable .toJSON consumes a lot of CPU time when rendering a large topology graph (~100 nodes)
2053721 - When using RootDeviceHint rotational setting the host can fail to provision
2053922 - [OCP 4.8][OVN] pod interface: error while waiting on OVS.Interface.external-ids
2054095 - [release-4.11] Gather images.config.openshift.io cluster resource definition
2054197 - The ProjectHelmChartRepository schema has merged but has not been initialized in the cluster yet
2054200 - Custom created services in openshift-ingress removed even though the services are not of type LoadBalancer
2054238 - console-master-e2e-gcp-console is broken
2054254 - vSphere test failure: [Serial] [sig-auth][Feature:OAuthServer] [RequestHeaders] [IdP] test RequestHeaders IdP [Suite:openshift/conformance/serial]
2054285 - Services other than knative service also shows as KSVC in add subscription/trigger modal
2054319 - must-gather | gather_metallb_logs can't detect metallb pod
2054351 - Restart of ptp4l/phc2sys on change of PTPConfig generates, more than one time, socket error in event framework
2054385 - redhat-operator index image build failed with AMQ brew build - amq-interconnect-operator-metadata-container-1.10.13
2054564 - DPU network operator 4.10 branch need to sync with master
2054630 - cancel create silence from kebab menu of alerts page will navigate to the previous page
2054693 - Error deploying HorizontalPodAutoscaler with oc new-app command in OpenShift 4
2054701 - [MAPO] Events are not created for MAPO machines
2054705 - [tracker] nf_reinject calls nf_queue_entry_free on an already freed entry->state
2054735 - Bad link in CNV console
2054770 - IPI baremetal deployment metal3 pod crashes when using capital letters in hosts bootMACAddress
2054787 - SRO controller goes to CrashLoopBackOff status when the pull-secret does not have the correct permissions
2054950 - A large number is showing on disk size field
2055305 - Thanos Querier high CPU and memory usage till OOM
2055386 - MetalLB changes the shared external IP of a service upon updating the externalTrafficPolicy definition
2055433 - Unable to create br-ex as gateway is not found
2055470 - Ingresscontroller LB scope change behaviour differs for different values of aws-load-balancer-internal annotation
2055492 - The default YAML on vm wizard is not latest
2055601 - installer did not destroy .app dns recored in a IPI on ASH install
2055702 - Enable Serverless tests in CI
2055723 - CCM operator doesn't deploy resources after enabling TechPreviewNoUpgrade feature set.
2055729 - NodePerfCheck fires and stays active on momentary high latency
2055814 - Custom dynamic extension point causes runtime and compile time error
2055861 - cronjob collect-profiles failure leads node to reach OutOfPods status
2055980 - [dynamic SDK][internal] console plugin SDK does not support table actions
2056454 - Implement preallocated disks for oVirt in the cluster API provider
2056460 - Implement preallocated disks for oVirt in the OCP installer
2056496 - If image does not exist for builder image then upload jar form crashes
2056519 - unable to install IPI PRIVATE OpenShift cluster in Azure due to organization policies
2056607 - Running kubernetes-nmstate handler e2e tests stuck on OVN clusters
2056752 - Better to name the oc-mirror version info with more information like the oc version --client
2056802 - "enforcedLabelLimit|enforcedLabelNameLengthLimit|enforcedLabelValueLengthLimit" do not take effect
2056841 - [UI] [DR] Web console update is available pop-up is seen multiple times on Hub cluster where ODF operator is not installed and unnecessarily it pop-up on the Managed cluster as well where ODF operator is installed
2056893 - incorrect warning for --to-image in oc adm upgrade help
2056967 - MetalLB: speaker metrics is not updated when deleting a service
2057025 - Resource requests for the init-config-reloader container of prometheus-k8s- pods are too high
2057054 - SDK: k8s methods resolves into Response instead of the Resource
2057079 - [cluster-csi-snapshot-controller-operator] CI failure: events should not repeat pathologically
2057101 - oc commands working with images print an incorrect and inappropriate warning
2057160 - configure-ovs selects wrong interface on reboot
2057183 - OperatorHub: Missing "valid subscriptions" filter
2057251 - response code for Pod count graph changed from 422 to 200 periodically for about 30 minutes if pod is rescheduled
2057358 - [Secondary Scheduler] - cannot build bundle index image using the secondary scheduler operator bundle
2057387 - [Secondary Scheduler] - olm.skiprange, com.redhat.openshift.versions is incorrect and no minkubeversion
2057403 - CMO logs show forbidden: User "system:serviceaccount:openshift-monitoring:cluster-monitoring-operator" cannot get resource "replicasets" in API group "apps" in the namespace "openshift-monitoring"
2057495 - Alibaba Disk CSI driver does not provision small PVCs
2057558 - Marketplace operator polls too frequently for cluster operator status changes
2057633 - oc rsync reports misleading error when container is not found
2057642 - ClusterOperator status.conditions[].reason "etcd disk metrics exceeded..." should be a CamelCase slug
2057644 - FSyncControllerDegraded latches True, even after fsync latency recovers on all members
2057696 - Removing console still blocks OCP install from completing
2057762 - ingress operator should report Upgradeable False to remind user before upgrade to 4.10 when Non-SAN certs are used
2057832 - expr for record rule: "cluster:telemetry_selected_series:count" is improper
2057967 - KubeJobCompletion does not account for possible job states
2057990 - Add extra debug information to image signature workflow test
2057994 - SRIOV-CNI failed to load netconf: LoadConf(): failed to get VF information
2058030 - On OCP 4.10+ using OVNK8s on BM IPI, nodes register as localhost.localdomain
2058217 - [vsphere-problem-detector-operator] 'vsphere_rwx_volumes_total' metric name is confusing
2058225 - openshift_csi_share_ metrics are not found from telemeter server
2058282 - Websockets stop updating during cluster upgrades
2058291 - CI builds should have correct version of Kube without needing to push tags everytime
2058368 - Openshift OVN-K got restarted multiple times with the error " ovsdb-server/memory-trim-on-compaction on'' failed: exit status 1 and " ovndbchecker.go:118] unable to turn on memory trimming for SB DB, stderr " , cluster unavailable
2058370 - e2e-aws-driver-toolkit CI job is failing
2058421 - 4.9.23-s390x-machine-os-content manifest invalid when mirroring content for disconnected install
2058424 - ConsolePlugin proxy always passes Authorization header even if authorize property is omitted or false
2058623 - Bootstrap server dropdown menu in Create Event Source- KafkaSource form is empty even if it's created
2058626 - Multiple Azure upstream kube fsgroupchangepolicy tests are permafailing expecting gid "1000" but getting "root"
2058671 - whereabouts IPAM CNI ip-reconciler cronjob specification requires hostnetwork, api-int lb usage & proper backoff
2058692 - [Secondary Scheduler] Creating secondaryscheduler instance fails with error "key failed with : secondaryschedulers.operator.openshift.io "secondary-scheduler" not found"
2059187 - [Secondary Scheduler] - key failed with : serviceaccounts "secondary-scheduler" is forbidden
2059212 - [tracker] Backport https://github.com/util-linux/util-linux/commit/eab90ef8d4f66394285e0cff1dfc0a27242c05aa
2059213 - ART cannot build installer images due to missing terraform binaries for some architectures
2059338 - A fully upgraded 4.10 cluster defaults to HW-13 hardware version even if HW-15 is default (and supported)
2059490 - The operator image in CSV file of the ART DPU network operator bundle is incorrect
2059567 - vMedia based IPI installation of OpenShift fails on Nokia servers due to issues with virtual media attachment and boot source override
2059586 - (release-4.11) Insights operator doesn't reconcile clusteroperator status condition messages
2059654 - Dynamic demo plugin proxy example out of date
2059674 - Demo plugin fails to build
2059716 - cloud-controller-manager flaps operator version during 4.9 -> 4.10 update
2059791 - [vSphere CSI driver Operator] didn't update 'vsphere_csi_driver_error' metric value when fixed the error manually
2059840 - [LSO]Could not gather logs for pod diskmaker-discovery and diskmaker-manager
2059943 - MetalLB: Move CI config files to metallb repo from dev-scripts repo
2060037 - Configure logging level of FRR containers
2060083 - CMO doesn't react to changes in clusteroperator console
2060091 - CMO produces invalid alertmanager statefulset if console cluster .status.consoleURL is unset
2060133 - [OVN RHEL upgrade] could not find IP addresses: failed to lookup link br-ex: Link not found
2060147 - RHEL8 Workers Need to Ensure libseccomp is up to date at install time
2060159 - LGW: External->Service of type ETP=Cluster doesn't go to the node
2060329 - Detect unsupported amount of workloads before rendering a lazy or crashing topology
2060334 - Azure VNET lookup fails when the NIC subnet is in a different resource group
2060361 - Unable to enumerate NICs due to missing the 'primary' field due to security restrictions
2060406 - Test 'operators should not create watch channels very often' fails
2060492 - Update PtpConfigSlave source-crs to use network_transport L2 instead of UDPv4
2060509 - Incorrect installation of ibmcloud vpc csi driver in IBM Cloud ROKS 4.10
2060532 - LSO e2e tests are run against default image and namespace
2060534 - openshift-apiserver pod in crashloop due to unable to reach kubernetes svc ip
2060549 - ErrorAddingLogicalPort: duplicate IP found in ECMP Pod route cache!
2060553 - service domain can't be resolved when networkpolicy is used in OCP 4.10-rc
2060583 - Remove Console internal-kubevirt plugin SDK package
2060605 - Broken access to public images: Unable to connect to the server: no basic auth credentials
2060617 - IBMCloud destroy DNS regex not strict enough
2060687 - Azure Ci: SubscriptionDoesNotSupportZone - does not support availability zones at location 'westus'
2060697 - [AWS] partitionNumber cannot work for specifying Partition number
2060714 - [DOCS] Change source_labels to sourceLabels in "Configuring remote write storage" section
2060837 - [oc-mirror] Catalog merging error when two or more bundles does not have a set Replace field
2060894 - Preceding/Trailing Whitespaces In Form Elements on the add page
2060924 - Console white-screens while using debug terminal
2060968 - Installation failing due to ironic-agent.service not starting properly
2060970 - Bump recommended FCOS to 35.20220213.3.0
2061002 - Conntrack entry is not removed for LoadBalancer IP
2061301 - Traffic Splitting Dialog is Confusing With Only One Revision
2061303 - Cachito request failure with vendor directory is out of sync with go.mod/go.sum
2061304 - workload info gatherer - don't serialize empty images map
2061333 - White screen for Pipeline builder page
2061447 - [GSS] local pv's are in terminating state
2061496 - etcd RecentBackup=Unknown ControllerStarted contains no message string
2061527 - [IBMCloud] infrastructure asset missing CloudProviderType
2061544 - AzureStack is hard-coded to use Standard_LRS for the disk type
2061549 - AzureStack install with internal publishing does not create api DNS record
2061611 - [upstream] The marker of KubeBuilder doesn't work if it is close to the code
2061732 - Cinder CSI crashes when API is not available
2061755 - Missing breadcrumb on the resource creation page
2061833 - A single worker can be assigned to multiple baremetal hosts
2061891 - [IPI on IBMCLOUD] missing 'br-sao' region in openshift installer
2061916 - mixed ingress and egress policies can result in half-isolated pods
2061918 - Topology Sidepanel style is broken
2061919 - Egress Ip entry stays on node's primary NIC post deletion from hostsubnet
2062007 - MCC bootstrap command lacks template flag
2062126 - IPfailover pod is crashing during creation showing keepalived_script doesn't exist
2062151 - Add RBAC for 'infrastructures' to operator bundle
2062355 - kubernetes-nmstate resources and logs not included in must-gathers
2062459 - Ingress pods scheduled on the same node
2062524 - [Kamelet Sink] Topology crashes on click of Event sink node if the resource is created source to Uri over ref
2062558 - Egress IP with openshift sdn in not functional on worker node.
2062568 - CVO does not trigger new upgrade again after fail to update to unavailable payload
2062645 - configure-ovs: don't restart networking if not necessary
2062713 - Special Resource Operator(SRO) - No sro_used_nodes metric
2062849 - hw event proxy is not binding on ipv6 local address
2062920 - Project selector is too tall with only a few projects
2062998 - AWS GovCloud regions are recognized as the unknown regions
2063047 - Configuring a full-path query log file in CMO breaks Prometheus with the latest version of the operator
2063115 - ose-aws-efs-csi-driver has invalid dependency in go.mod
2063164 - metal-ipi-ovn-ipv6 Job Permafailing and Blocking OpenShift 4.11 Payloads: insights operator is not available
2063183 - DefragDialTimeout is set to low for large scale OpenShift Container Platform - Cluster
2063194 - cluster-autoscaler-default will fail when automated etcd defrag is running on large scale OpenShift Container Platform 4 - Cluster
2063321 - [OVN]After reboot egress node, lr-policy-list was not correct, some duplicate records or missed internal IPs
2063324 - MCO template output directories created with wrong mode causing render failure in unprivileged container environments
2063375 - ptp operator upgrade from 4.9 to 4.10 stuck at pending due to service account requirements not met
2063414 - on OKD 4.10, when image-registry is enabled, the /etc/hosts entry is missing on some nodes
2063699 - Builds - Builds - Logs: i18n misses.
2063708 - Builds - Builds - Logs: translation correction needed.
2063720 - Metallb EBGP neighbor stuck in active until adding ebgp-multihop (directly connected neighbors)
2063732 - Workloads - StatefulSets : I18n misses
2063747 - When building a bundle, the push command fails because it passes a redundant "IMG=" on the CLI
2063753 - User Preferences - Language - Language selection : Page refresh rquired to change the UI into selected Language.
2063756 - User Preferences - Applications - Insecure traffic : i18n misses
2063795 - Remove go-ovirt-client go.mod replace directive
2063829 - During an IPI install with the 4.10.4 installer on vSphere, getting "Check": platform.vsphere.network: Invalid value: "VLAN_3912": unable to find network provided"
2063831 - etcd quorum pods landing on same node
2063897 - Community tasks not shown in pipeline builder page
2063905 - PrometheusOperatorWatchErrors alert may fire shortly in case of transient errors from the API server
2063938 - Using the hard coded rest-mapper in library-go
2063955 - cannot download operator catalogs due to missing images
2063957 - User Management - Users : While Impersonating user, UI is not switching into user's set language
2064024 - SNO OCP upgrade with DU workload stuck at waiting for kube-apiserver static pod
2064170 - [Azure] Missing punctuation in the installconfig.controlPlane.platform.azure.osDisk explain
2064239 - Virtualization Overview page turns into blank page
2064256 - The Knative traffic distribution doesn't update percentage in sidebar
2064553 - UI should prefer to use the virtio-win configmap than v2v-vmware configmap for windows creation
2064596 - Fix the hubUrl docs link in pipeline quicksearch modal
2064607 - Pipeline builder makes too many (100+) API calls upfront
2064613 - [OCPonRHV]- after few days that cluster is alive we got error in storage operator
2064693 - [IPI][OSP] Openshift-install fails to find the shiftstack cloud defined in clouds.yaml in the current directory
2064702 - CVE-2022-27191 golang: crash in a golang.org/x/crypto/ssh server
2064705 - the alertmanagerconfig validation catches the wrong value for invalid field
2064744 - Errors trying to use the Debug Container feature
2064984 - Update error message for label limits
2065076 - Access monitoring Routes based on monitoring-shared-config creates wrong URL
2065160 - Possible leak of load balancer targets on AWS Machine API Provider
2065224 - Configuration for cloudFront in image-registry operator configuration is ignored & duration is corrupted
2065290 - CVE-2021-23648 sanitize-url: XSS
2065338 - VolumeSnapshot creation date sorting is broken
2065507 - oc adm upgrade should return ReleaseAccepted condition to show upgrade status.
2065510 - [AWS] failed to create cluster on ap-southeast-3
2065513 - Dev Perspective -> Project Dashboard shows Resource Quotas which are a bit misleading, and too many decimal places
2065547 - (release-4.11) Gather kube-controller-manager pod logs with garbage collector errors
2065552 - [AWS] Failed to install cluster on AWS ap-southeast-3 region due to image-registry panic error
2065577 - user with user-workload-monitoring-config-edit role can not create user-workload-monitoring-config configmap
2065597 - Cinder CSI is not configurable
2065682 - Remote write relabel config adds label __tmp_openshift_cluster_id to all metrics
2065689 - Internal Image registry with GCS backend does not redirect client
2065749 - Kubelet slowly leaking memory and pods eventually unable to start
2065785 - ip-reconciler job does not complete, halts node drain
2065804 - Console backend check for Web Terminal Operator incorrectly returns HTTP 204
2065806 - stop considering Mint mode as supported on Azure
2065840 - the cronjob object is created with a wrong api version batch/v1beta1 when created via the openshift console
2065893 - [4.11] Bootimage bump tracker
2066009 - CVE-2021-44906 minimist: prototype pollution
2066232 - e2e-aws-workers-rhel8 is failing on ansible check
2066418 - [4.11] Update channels information link is taking to a 404 error page
2066444 - The "ingress" clusteroperator's relatedObjects field has kind names instead of resource names
2066457 - Prometheus CI failure: 503 Service Unavailable
2066463 - [IBMCloud] failed to list DNS zones: Exactly one of ApiKey or RefreshToken must be specified
2066605 - coredns template block matches cluster API to loose
2066615 - Downstream OSDK still use upstream image for Hybird type operator
2066619 - The GitCommit of the oc-mirror version is not correct
2066665 - [ibm-vpc-block] Unable to change default storage class
2066700 - [node-tuning-operator] - Minimize wildcard/privilege Usage in Cluster and Local Roles
2066754 - Cypress reports for core tests are not captured
2066782 - Attached disk keeps in loading status when add disk to a power off VM by non-privileged user
2066865 - Flaky test: In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
2066886 - openshift-apiserver pods never going NotReady
2066887 - Dependabot alert: Path traversal in github.com/valyala/fasthttp
2066889 - Dependabot alert: Path traversal in github.com/valyala/fasthttp
2066923 - No rule to make target 'docker-push' when building the SRO bundle
2066945 - SRO appends "arm64" instead of "aarch64" to the kernel name and it doesn't match the DTK
2067004 - CMO contains grafana image though grafana is removed
2067005 - Prometheus rule contains grafana though grafana is removed
2067062 - should update prometheus-operator resources version
2067064 - RoleBinding in Developer Console is dropping all subjects when editing
2067155 - Incorrect operator display name shown in pipelines quickstart in devconsole
2067180 - Missing i18n translations
2067298 - Console 4.10 operand form refresh
2067312 - PPT event source is lost when received by the consumer
2067384 - OCP 4.10 should be firing APIRemovedInNextEUSReleaseInUse for APIs removed in 1.25
2067456 - OCP 4.11 should be firing APIRemovedInNextEUSReleaseInUse and APIRemovedInNextReleaseInUse for APIs removed in 1.25
2067995 - Internal registries with a big number of images delay pod creation due to recursive SELinux file context relabeling
2068115 - resource tab extension fails to show up
2068148 - [4.11] /etc/redhat-release symlink is broken
2068180 - OCP UPI on AWS with STS enabled is breaking the Ingress operator
2068181 - Event source powered with kamelet type source doesn't show associated deployment in resources tab
2068490 - OLM descriptors integration test failing
2068538 - Crashloop back-off popover visual spacing defects
2068601 - Potential etcd inconsistent revision and data occurs
2068613 - ClusterRoleUpdated/ClusterRoleBindingUpdated Spamming Event Logs
2068908 - Manual blog link change needed
2069068 - reconciling Prometheus Operator Deployment failed while upgrading from 4.7.46 to 4.8.35
2069075 - [Alibaba 4.11.0-0.nightly] cluster storage component in Progressing state
2069181 - Disabling community tasks is not working
2069198 - Flaky CI test in e2e/pipeline-ci
2069307 - oc mirror hangs when processing the Red Hat 4.10 catalog
2069312 - extend rest mappings with 'job' definition
2069457 - Ingress operator has superfluous finalizer deletion logic for LoadBalancer-type services
2069577 - ConsolePlugin example proxy authorize is wrong
2069612 - Special Resource Operator (SRO) - Crash when nodeSelector does not match any nodes
2069632 - Not able to download previous container logs from console
2069643 - ConfigMaps leftovers while uninstalling SpecialResource with configmap
2069654 - Creating VMs with YAML on Openshift Virtualization UI is missing labels flavor, os and workload
2069685 - UI crashes on load if a pinned resource model does not exist
2069705 - prometheus target "serviceMonitor/openshift-metallb-system/monitor-metallb-controller/0" has a failure with "server returned HTTP status 502 Bad Gateway"
2069740 - On-prem loadbalancer ports conflict with kube node port range
2069760 - In developer perspective divider does not show up in navigation
2069904 - Sync upstream 1.18.1 downstream
2069914 - Application Launcher groupings are not case-sensitive
2069997 - [4.11] should add user containers in /etc/subuid and /etc/subgid to support run pods in user namespaces
2070000 - Add warning alerts for installing standalone k8s-nmstate
2070020 - InContext doesn't work for Event Sources
2070047 - Kuryr: Prometheus when installed on the cluster shouldn't report any alerts in firing state apart from Watchdog and AlertmanagerReceiversNotConfigured
2070160 - Copy-to-clipboard and <pre> elements cause display issues for ACM dynamic plugins
2070172 - SRO uses the chart's name as Helm release, not the SpecialResource's
2070181 - [MAPO] serverGroupName ignored
2070457 - Image vulnerability Popover overflows from the visible area
2070674 - [GCP] Routes get timed out and nonresponsive after creating 2K service routes
2070703 - some ipv6 network policy tests consistently failing
2070720 - [UI] Filter reset doesn't work on Pods/Secrets/etc pages and complete list disappears
2070731 - details switch label is not clickable on add page
2070791 - [GCP]Image registry are crash on cluster with GCP workload identity enabled
2070792 - service "openshift-marketplace/marketplace-operator-metrics" is not annotated with capability
2070805 - ClusterVersion: could not download the update
2070854 - cv.status.capabilities.enabledCapabilities doesn't show the day-2 enabled caps when there are errors on resources update
2070887 - Cv condition ImplicitlyEnabledCapabilities doesn't complain about the disabled capabilities which is previously enabled
2070888 - Cannot bind driver vfio-pci when apply sriovnodenetworkpolicy with type vfio-pci
2070929 - OVN-Kubernetes: EgressIP breaks access from a pod with EgressIP to other host networked pods on different nodes
2071019 - rebase vsphere csi driver 2.5
2071021 - vsphere driver has snapshot support missing
2071033 - conditionally relabel volumes given annotation not working - SELinux context match is wrong
2071139 - Ingress pods scheduled on the same node
2071364 - All image building tests are broken with " error: build error: attempting to convert BUILD_LOGLEVEL env var value "" to integer: strconv.Atoi: parsing "": invalid syntax
2071578 - Monitoring navigation should not be shown if monitoring is not available (CRC)
2071599 - RoleBidings are not getting updated for ClusterRole in OpenShift Web Console
2071614 - Updating EgressNetworkPolicy rejecting with error UnsupportedMediaType
2071617 - remove Kubevirt extensions in favour of dynamic plugin
2071650 - ovn-k ovn_db_cluster metrics are not exposed for SNO
2071691 - OCP Console global PatternFly overrides adds padding to breadcrumbs
2071700 - v1 events show "Generated from" message without the source/reporting component
2071715 - Shows 404 on Environment nav in Developer console
2071719 - OCP Console global PatternFly overrides link button whitespace
2071747 - Link to documentation from the overview page goes to a missing link
2071761 - Translation Keys Are Not Namespaced
2071799 - Multus CNI should exit cleanly on CNI DEL when the API server is unavailable
2071859 - ovn-kube pods spec.dnsPolicy should be Default
2071914 - cloud-network-config-controller 4.10.5: Error building cloud provider client, err: %vfailed to initialize Azure environment: autorest/azure: There is no cloud environment matching the name ""
2071998 - Cluster-version operator should share details of signature verification when it fails in 'Force: true' updates
2072106 - cluster-ingress-operator tests do not build on go 1.18
2072134 - Routes are not accessible within cluster from hostnet pods
2072139 - vsphere driver has permissions to create/update PV objects
2072154 - Secondary Scheduler operator panics
2072171 - Test "[sig-network][Feature:EgressFirewall] EgressFirewall should have no impact outside its namespace [Suite:openshift/conformance/parallel]" fails
2072195 - machine api doesn't issue client cert when AWS DNS suffix missing
2072215 - Whereabouts ip-reconciler should be opt-in and not required
2072389 - CVO exits upgrade immediately rather than waiting for etcd backup
2072439 - openshift-cloud-network-config-controller reports wrong range of IP addresses for Azure worker nodes
2072455 - make bundle overwrites supported-nic-ids_v1_configmap.yaml
2072570 - The namespace titles for operator-install-single-namespace test keep changing
2072710 - Perfscale - pods time out waiting for OVS port binding (ovn-installed)
2072766 - Cluster Network Operator stuck in CrashLoopBackOff when scheduled to same master
2072780 - OVN kube-master does not clear NetworkUnavailableCondition on GCP BYOH Windows node
2072793 - Drop "Used Filesystem" from "Virtualization -> Overview"
2072805 - Observe > Dashboards: $__range variables cause PromQL query errors
2072807 - Observe > Dashboards: Missing panel.styles attribute for table panels causes JS error
2072842 - (release-4.11) Gather namespace names with overlapping UID ranges
2072883 - sometimes monitoring dashboards charts can not be loaded successfully
2072891 - Update gcp-pd-csi-driver to 1.5.1;
2072911 - panic observed in kubedescheduler operator
2072924 - periodic-ci-openshift-release-master-ci-4.11-e2e-azure-techpreview-serial
2072957 - ContainerCreateError loop leads to several thousand empty logfiles in the file system
2072998 - update aws-efs-csi-driver to the latest version
2072999 - Navigate from logs of selected Tekton task instead of last one
2073021 - [vsphere] Failed to update OS on master nodes
2073112 - Prometheus (uwm) externalLabels not showing always in alerts.
2073113 - Warning is logged to the console: W0407 Defaulting of registry auth file to "${HOME}/.docker/config.json" is deprecated.
2073176 - removing data in form does not remove data from yaml editor
2073197 - Error in Spoke/SNO agent: Source image rejected: A signature was required, but no signature exists
2073329 - Pipelines-plugin- Having different title for Pipeline Runs tab, on Pipeline Details page it's "PipelineRuns" and on Repository Details page it's "Pipeline Runs".
2073373 - Update azure-disk-csi-driver to 1.16.0
2073378 - failed egressIP assignment - cloud-network-config-controller does not delete failed cloudprivateipconfig
2073398 - machine-api-provider-openstack does not clean up OSP ports after failed server provisioning
2073436 - Update azure-file-csi-driver to v1.14.0
2073437 - Topology performance: Firehose/useK8sWatchResources cache can return unexpected data format if isList differs on multiple calls
2073452 - [sig-network] pods should successfully create sandboxes by other - failed (add)
2073473 - [OVN SCALE][ovn-northd] Unnecessary SB record no-op changes added to SB transaction.
2073522 - Update ibm-vpc-block-csi-driver to v4.2.0
2073525 - Update vpc-node-label-updater to v4.1.2
2073901 - Installation failed due to etcd operator Err:DefragControllerDegraded: failed to dial endpoint https://10.0.0.7:2379 with maintenance client: context canceled
2073937 - Invalid retention time and invalid retention size should be validated at one place and have error log in one place for UMW
2073938 - APIRemovedInNextEUSReleaseInUse alert for runtimeclasses
2073945 - APIRemovedInNextEUSReleaseInUse alert for podsecuritypolicies
2073972 - Invalid retention time and invalid retention size should be validated at one place and have error log in one place for platform monitoring
2074009 - [OVN] ovn-northd doesn't clean Chassis_Private record after scale down to 0 a machineSet
2074031 - Admins should be able to tune garbage collector aggressiveness (GOGC) for kube-apiserver if necessary
2074062 - Node Tuning Operator(NTO) - Cloud provider profile rollback doesn't work well
2074084 - CMO metrics not visible in the OCP webconsole UI
2074100 - CRD filtering according to name broken
2074210 - asia-south2, australia-southeast2, and southamerica-west1 missing from GCP regions
2074237 - oc new-app --image-stream flag behavior is unclear
2074243 - DefaultPlacement API allow empty enum value and remove default
2074447 - cluster-dashboard: CPU Utilisation iowait and steal
2074465 - PipelineRun fails in import from Git flow if "main" branch is default
2074471 - Cannot delete namespace with a LB type svc and Kuryr when ExternalCloudProvider is enabled
2074475 - [e2e][automation] kubevirt plugin cypress tests fail
2074483 - coreos-installer doesnt work on Dell machines
2074544 - e2e-metal-ipi-ovn-ipv6 failing due to recent CEO changes
2074585 - MCG standalone deployment page goes blank when the KMS option is enabled
2074606 - occm does not have permissions to annotate SVC objects
2074612 - Operator fails to install due to service name lookup failure
2074613 - nodeip-configuration container incorrectly attempts to relabel /etc/systemd/system
2074635 - Unable to start Web Terminal after deleting existing instance
2074659 - AWS installconfig ValidateForProvisioning always provides blank values to validate zone records
2074706 - Custom EC2 endpoint is not considered by AWS EBS CSI driver
2074710 - Transition to go-ovirt-client
2074756 - Namespace column provide wrong data in ClusterRole Details -> Rolebindings tab
2074767 - Metrics page show incorrect values due to metrics level config
2074807 - NodeFilesystemSpaceFillingUp alert fires even before kubelet GC kicks in
2074902 - "oc debug node/nodename -- chroot /host somecommand" should exit with non-zero when the sub-command failed
2075015 - etcd-guard connection refused event repeating pathologically (payload blocking)
2075024 - Metal upgrades permafailing on metal3 containers crash looping
2075050 - oc-mirror fails to calculate between two channels with different prefixes for the same version of OCP
2075091 - Symptom Detection.Undiagnosed panic detected in pod
2075117 - Developer catalog: Order dropdown (A-Z, Z-A) is miss-aligned (in a separate row)
2075149 - Trigger Translations When Extensions Are Updated
2075189 - Imports from dynamic-plugin-sdk lead to failed module resolution errors
2075459 - Set up cluster on aws with rootvolumn io2 failed due to no iops despite it being configured
2075475 - OVN-Kubernetes: egress router pod (redirect mode), access from pod on different worker-node (redirect) doesn't work
2075478 - Bump documentationBaseURL to 4.11
2075491 - nmstate operator cannot be upgraded on SNO
2075575 - Local Dev Env - Prometheus 404 Call errors spam the console
2075584 - improve clarity of build failure messages when using csi shared resources but tech preview is not enabled
2075592 - Regression - Top of the web terminal drawer is missing a stroke/dropshadow
2075621 - Cluster upgrade.[sig-mco] Machine config pools complete upgrade
2075647 - 'oc adm upgrade ...' POSTs ClusterVersion, clobbering any unrecognized spec properties
2075671 - Cluster Ingress Operator K8S API cache contains duplicate objects
2075778 - Fix failing TestGetRegistrySamples test
2075873 - Bump recommended FCOS to 35.20220327.3.0
2076193 - oc patch command for the liveness probe and readiness probe parameters of an OpenShift router deployment doesn't take effect
2076270 - [OCPonRHV] MachineSet scale down operation fails to delete the worker VMs
2076277 - [RFE] [OCPonRHV] Add storage domain ID value to Compute/ControlPlain section in the machine object
2076290 - PTP operator readme missing documentation on BC setup via PTP config
2076297 - Router process ignores shutdown signal while starting up
2076323 - OLM blocks all operator installs if an openshift-marketplace catalogsource is unavailable
2076355 - The KubeletConfigController wrongly process multiple confs for a pool after having kubeletconfig in bootstrap
2076393 - [VSphere] survey fails to list datacenters
2076521 - Nodes in the same zone are not updated in the right order
2076527 - Pipeline Builder: Make unnecessary tekton hub API calls when the user types 'too fast'
2076544 - Whitespace (padding) is missing after an PatternFly update, already in 4.10
2076553 - Project access view replace group ref with user ref when updating their Role
2076614 - Missing Events component from the SDK API
2076637 - Configure metrics for vsphere driver to be reported
2076646 - openshift-install destroy unable to delete PVC disks in GCP if cluster identifier is longer than 22 characters
2076793 - CVO exits upgrade immediately rather than waiting for etcd backup
2076831 - [ocp4.11]Mem/cpu high utilization by apiserver/etcd for cluster stayed 10 hours
2076877 - network operator tracker to switch to use flowcontrol.apiserver.k8s.io/v1beta2 instead v1beta1 to be deprecated in k8s 1.26
2076880 - OKD: add cluster domain to the uploaded vm configs so that 30-local-dns-prepender can use it
2076975 - Metric unset during static route conversion in configure-ovs.sh
2076984 - TestConfigurableRouteNoConsumingUserNoRBAC fails in CI
2077050 - OCP should default to pd-ssd disk type on GCP
2077150 - Breadcrumbs on a few screens don't have correct top margin spacing
2077160 - Update owners for openshift/cluster-etcd-operator
2077357 - [release-4.11] 200ms packet delay with OVN controller turn on
2077373 - Accessibility warning on developer perspective
2077386 - Import page shows untranslated values for the route advanced routing>security options (devconsole~Edge)
2077457 - failure in test case "[sig-network][Feature:Router] The HAProxy router should serve the correct routes when running with the haproxy config manager"
2077497 - Rebase etcd to 3.5.3 or later
2077597 - machine-api-controller is not taking the proxy configuration when it needs to reach the RHV API
2077599 - OCP should alert users if they are on vsphere version <7.0.2
2077662 - AWS Platform Provisioning Check incorrectly identifies record as part of domain of cluster
2077797 - LSO pods don't have any resource requests
2077851 - "make vendor" target is not working
2077943 - If there is a service with multiple ports, and the route uses 8080, when editing the 8080 port isn't replaced, but a random port gets replaced and 8080 still stays
2077994 - Publish RHEL CoreOS AMIs in AWS ap-southeast-3 region
2078013 - drop multipathd.socket workaround
2078375 - When using the wizard with template using data source the resulting vm use pvc source
2078396 - [OVN AWS] EgressIP was not balanced to another egress node after original node was removed egress label
2078431 - [OCPonRHV] - ERROR failed to instantiate provider "openshift/local/ovirt" to obtain schema: ERROR fork/exec
2078526 - Multicast breaks after master node reboot/sync
2078573 - SDN CNI -Fail to create nncp when vxlan is up
2078634 - CRI-O not killing Calico CNI stalled (zombie) processes.
2078698 - search box may not completely remove content
2078769 - Different not translated filter group names (incl. Secret, Pipeline, PIpelineRun)
2078778 - [4.11] oc get ValidatingWebhookConfiguration,MutatingWebhookConfiguration fails and caused "apiserver panic'd...http2: panic serving xxx.xx.xxx.21:49748: cannot deep copy int" when AllRequestBodies audit-profile is used.
2078781 - PreflightValidation does not handle multiarch images
2078866 - [BM][IPI] Installation with bonds fail - DaemonSet "openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress
2078875 - OpenShift Installer fail to remove Neutron ports
2078895 - [OCPonRHV]-"cow" unsupported value in format field in install-config.yaml
2078910 - CNO spitting out ".spec.groups[0].rules[4].runbook_url: field not declared in schema"
2078945 - Ensure only one apiserver-watcher process is active on a node.
2078954 - network-metrics-daemon makes costly global pod list calls scaling per node
2078969 - Avoid update races between old and new NTO operands during cluster upgrades
2079012 - egressIP not migrated to correct workers after deleting machineset it was assigned
2079062 - Test for console demo plugin toast notification needs to be increased for ci testing
2079197 - [RFE] alert when more than one default storage class is detected
2079216 - Partial cluster update reference doc link returns 404
2079292 - containers prometheus-operator/kube-rbac-proxy violate PodSecurity
2079315 - (release-4.11) Gather ODF config data with Insights
2079422 - Deprecated 1.25 API call
2079439 - OVN Pods Assigned Same IP Simultaneously
2079468 - Enhance the waitForIngressControllerCondition for better CI results
2079500 - okd-baremetal-install uses fcos for bootstrap but rhcos for cluster
2079610 - Opeatorhub status shows errors
2079663 - change default image features in RBD storageclass
2079673 - Add flags to disable migrated code
2079685 - Storageclass creation page with "Enable encryption" is not displaying saved KMS connection details when vaulttenantsa details are available in csi-kms-details config
2079724 - cluster-etcd-operator - disable defrag-controller as there is unpredictable impact on large OpenShift Container Platform 4 - Cluster
2079788 - Operator restarts while applying the acm-ice example
2079789 - cluster drops ImplicitlyEnabledCapabilities during upgrade
2079803 - Upgrade-triggered etcd backup will be skip during serial upgrade
2079805 - Secondary scheduler operator should comply to restricted pod security level
2079818 - Developer catalog installation overlay (modal?) shows a duplicated padding
2079837 - [RFE] Hub/Spoke example with daemonset
2079844 - EFS cluster csi driver status stuck in AWSEFSDriverCredentialsRequestControllerProgressing with sts installation
2079845 - The Event Sinks catalog page now has a blank space on the left
2079869 - Builds for multiple kernel versions should be ran in parallel when possible
2079913 - [4.10] APIRemovedInNextEUSReleaseInUse alert for OVN endpointslices
2079961 - The search results accordion has no spacing between it and the side navigation bar.
2079965 - [rebase v1.24] [sig-node] PodOSRejection [NodeConformance] Kubelet should reject pod when the node OS doesn't match pod's OS [Suite:openshift/conformance/parallel] [Suite:k8s] 2080054 - TAGS arg for installer-artifacts images is not propagated to build images 2080153 - aws-load-balancer-operator-controller-manager pod stuck in ContainerCreating status 2080197 - etcd leader changes produce test churn during early stage of test 2080255 - EgressIP broken on AWS with OpenShiftSDN / latest nightly build 2080267 - [Fresh Installation] Openshift-machine-config-operator namespace is flooded with events related to clusterrole, clusterrolebinding 2080279 - CVE-2022-29810 go-getter: writes SSH credentials into logfile, exposing sensitive credentials to local uses 2080379 - Group all e2e tests as parallel or serial 2080387 - Visual connector not appear between the node if a node get created using "move connector" to a different application 2080416 - oc bash-completion problem 2080429 - CVO must ensure non-upgrade related changes are saved when desired payload fails to load 2080446 - Sync ironic images with latest bug fixes packages 2080679 - [rebase v1.24] [sig-cli] test failure 2080681 - [rebase v1.24] [sig-cluster-lifecycle] CSRs from machines that are not recognized by the cloud provider are not approved [Suite:openshift/conformance/parallel] 2080687 - [rebase v1.24] [sig-network][Feature:Router] tests are failing 2080873 - Topology graph crashes after update to 4.11 when Layout 2 (ColaForce) was selected previously 2080964 - Cluster operator special-resource-operator is always in Failing state with reason: "Reconciling simple-kmod" 2080976 - Avoid hooks config maps when hooks are empty 2081012 - [rebase v1.24] [sig-devex][Feature:OpenShiftControllerManager] TestAutomaticCreationOfPullSecrets [Suite:openshift/conformance/parallel] 2081018 - [rebase v1.24] [sig-imageregistry][Feature:Image] oc tag should work when only imagestreams api is available 2081021 - 
[rebase v1.24] [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources 2081062 - Unrevert RHCOS back to 8.6 2081067 - admin dev-console /settings/cluster should point out history may be excerpted 2081069 - [sig-network] pods should successfully create sandboxes by adding pod to network 2081081 - PreflightValidation "odd number of arguments passed as key-value pairs for logging" error 2081084 - [rebase v1.24] [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed 2081087 - [rebase v1.24] [sig-auth] ServiceAccounts should allow opting out of API token automount 2081119 -oc explain
output of default overlaySize is outdated 2081172 - MetallLB: YAML view in webconsole does not show all the available key value pairs of all the objects 2081201 - cloud-init User check for Windows VM refuses to accept capitalized usernames 2081447 - Ingress operator performs spurious updates in response to API's defaulting of router deployment's router container's ports' protocol field 2081562 - lifecycle.posStart hook does not have network connectivity. 2081685 - Typo in NNCE Conditions 2081743 - [e2e] tests failing 2081788 - MetalLB: the crds are not validated until metallb is deployed 2081821 - SpecialResourceModule CRD is not installed after deploying SRO operator using brew bundle image via OLM 2081895 - Use the managed resource (and not the manifest) for resource health checks 2081997 - disconnected insights operator remains degraded after editing pull secret 2082075 - Removing huge amount of ports takes a lot of time. 2082235 - CNO exposes a generic apiserver that apparently does nothing 2082283 - Transition to new oVirt Terraform provider 2082360 - OCP 4.10.4, CNI: SDN; Whereabouts IPAM: Duplicate IP address with bond-cni 2082380 - [4.10.z] customize wizard is crashed 2082403 - [LSO] No new build local-storage-operator-metadata-container created 2082428 - oc patch healthCheckInterval with invalid "5 s" to the ingress-controller successfully 2082441 - [UPI] aws-load-balancer-operator-controller-manager failed to get VPC ID in UPI on AWS 2082492 - [IPI IBM]Can't create image-registry-private-configuration secret with error "specified resource key credentials does not contain HMAC keys" 2082535 - [OCPonRHV]-workers are cloned when "clone: false" is specified in install-config.yaml 2082538 - apirequests limits of Cluster CAPI Operator are too low for GCP platform 2082566 - OCP dashboard fails to load when the query to Prometheus takes more than 30s to return 2082604 - [IBMCloud][x86_64] IBM VPC does not properly support RHCOS Custom Image tagging 2082667 - No 
new machines provisioned while machineset controller drained old nodes for change to machineset 2082687 - [IBM Cloud][x86_64][CCCMO] IBM x86_64 CCM using unsupported --port argument 2082763 - Cluster install stuck on the applying for operatorhub "cluster" 2083149 - "Update blocked" label incorrectly displays on new minor versions in the "Other available paths" modal 2083153 - Unable to use application credentials for Manila PVC creation on OpenStack 2083154 - Dynamic plugin sdk tsdoc generation does not render docs for parameters 2083219 - DPU network operator doesn't deal with c1... inteface names 2083237 - [vsphere-ipi] Machineset scale up process delay 2083299 - SRO does not fetch mirrored DTK images in disconnected clusters 2083445 - [FJ OCP4.11 Bug]: RAID setting during IPI cluster deployment fails if iRMC port number is specified 2083451 - Update external serivces URLs to console.redhat.com 2083459 - Make numvfs > totalvfs error message more verbose 2083466 - Failed to create clusters on AWS C2S/SC2S due to image-registry MissingEndpoint error 2083514 - Operator ignores managementState Removed 2083641 - OpenShift Console Knative Eventing ContainerSource generates wrong api version when pointed to k8s Service 2083756 - Linkify not upgradeable message on ClusterSettings page 2083770 - Release image signature manifest filename extension is yaml 2083919 - openshift4/ose-operator-registry:4.10.0 having security vulnerabilities 2083942 - Learner promotion can temporarily fail with rpc not supported for learner errors 2083964 - Sink resources dropdown is not persisted in form yaml switcher in event source creation form 2083999 - "--prune-over-size-limit" is not working as expected 2084079 - prometheus route is not updated to "path: /api" after upgrade from 4.10 to 4.11 2084081 - nmstate-operator installed cluster on POWER shows issues while adding new dhcp interface 2084124 - The Update cluster modal includes a broken link 2084215 - Resource configmap 
"openshift-machine-api/kube-rbac-proxy" is defined by 2 manifests 2084249 - panic in ovn pod from an e2e-aws-single-node-serial nightly run 2084280 - GCP API Checks Fail if non-required APIs are not enabled 2084288 - "alert/Watchdog must have no gaps or changes" failing after bump 2084292 - Access to dashboard resources is needed in dynamic plugin SDK 2084331 - Resource with multiple capabilities included unless all capabilities are disabled 2084433 - Podsecurity violation error getting logged for ingresscontroller during deployment. 2084438 - Change Ping source spec.jsonData (deprecated) field to spec.data 2084441 - [IPI-Azure]fail to check the vm capabilities in install cluster 2084459 - Topology list view crashes when switching from chart view after moving sink from knative service to uri 2084463 - 5 control plane replica tests fail on ephemeral volumes 2084539 - update azure arm templates to support customer provided vnet 2084545 - [rebase v1.24] cluster-api-operator causes all techpreview tests to fail 2084580 - [4.10] No cluster name sanity validation - cluster name with a dot (".") character 2084615 - Add to navigation option on search page is not properly aligned 2084635 - PipelineRun creation from the GUI for a Pipeline with 2 workspaces hardcode the PVC storageclass 2084732 - A special resource that was created in OCP 4.9 can't be deleted after an upgrade to 4.10 2085187 - installer-artifacts fails to build with go 1.18 2085326 - kube-state-metrics is tripping APIRemovedInNextEUSReleaseInUse 2085336 - [IPI-Azure] Fail to create the worker node which HyperVGenerations is V2 or V1 and vmNetworkingType is Accelerated 2085380 - [IPI-Azure] Incorrect error prompt validate VM image and instance HyperV gen match when install cluster 2085407 - There is no Edit link/icon for labels on Node details page 2085721 - customization controller image name is wrong 2086056 - Missing doc for OVS HW offload 2086086 - Update Cluster Sample Operator dependencies and libraries 
for OCP 4.11 2086092 - update kube to v.24 2086143 - CNO uses too much memory 2086198 - Cluster CAPI Operator creates unnecessary defaulting webhooks 2086301 - kubernetes nmstate pods are not running after creating instance 2086408 - Podsecurity violation error getting logged for externalDNS operand pods during deployment 2086417 - Pipeline created from add flow has GIT Revision as required field 2086437 - EgressQoS CRD not available 2086450 - aws-load-balancer-controller-cluster pod logged Podsecurity violation error during deployment 2086459 - oc adm inspect fails when one of resources not exist 2086461 - CNO probes MTU unnecessarily in Hypershift, making cluster startup take too long 2086465 - External identity providers should log login attempts in the audit trail 2086469 - No data about title 'API Request Duration by Verb - 99th Percentile' display on the dashboard 'API Performance' 2086483 - baremetal-runtimecfg k8s dependencies should be on a par with 1.24 rebase 2086505 - Update oauth-server images to be consistent with ART 2086519 - workloads must comply to restricted security policy 2086521 - Icons of Knative actions are not clearly visible on the context menu in the dark mode 2086542 - Cannot create service binding through drag and drop 2086544 - ovn-k master daemonset on hypershift shouldn't log token 2086546 - Service binding connector is not visible in the dark mode 2086718 - PowerVS destroy code does not work 2086728 - [hypershift] Move drain to controller 2086731 - Vertical pod autoscaler operator needs a 4.11 bump 2086734 - Update csi driver images to be consistent with ART 2086737 - cloud-provider-openstack rebase to kubernetes v1.24 2086754 - Cluster resource override operator needs a 4.11 bump 2086759 - [IPI] OCP-4.11 baremetal - boot partition is not mounted on temporary directory 2086791 - Azure: Validate UltraSSD instances in multi-zone regions 2086851 - pods with multiple external gateways may only be have ECMP routes for one gateway 2086936 
- vsphere ipi should use cores by default instead of sockets 2086958 - flaky e2e in kube-controller-manager-operator TestPodDisruptionBudgetAtLimitAlert 2086959 - flaky e2e in kube-controller-manager-operator TestLogLevel 2086962 - oc-mirror publishes metadata with --dry-run when publishing to mirror 2086964 - oc-mirror fails on differential run when mirroring a package with multiple channels specified 2086972 - oc-mirror does not error invalid metadata is passed to the describe command 2086974 - oc-mirror does not work with headsonly for operator 4.8 2087024 - The oc-mirror result mapping.txt is not correct, can't be used by oc image mirror command 2087026 - DTK's imagestream is missing from OCP 4.11 payload 2087037 - Cluster Autoscaler should use K8s 1.24 dependencies 2087039 - Machine API components should use K8s 1.24 dependencies 2087042 - Cloud providers components should use K8s 1.24 dependencies 2087084 - remove unintentional nic support 2087103 - "Updating to release image" from 'oc' should point out that the cluster-version operator hasn't accepted the update 2087114 - Add simple-procfs-kmod in modprobe example in README.md 2087213 - Spoke BMH stuck "inspecting" when deployed via ZTP in 4.11 OCP hub 2087271 - oc-mirror does not check for existing workspace when performing mirror2mirror synchronization 2087556 - Failed to render DPU ovnk manifests 2087579 - --keep-manifest-list=true
does not work for oc adm release new
, only pick up the linux/amd64 manifest from the manifest list 2087680 - [Descheduler] Sync with sigs.k8s.io/descheduler 2087684 - KCMO should not be able to apply LowUpdateSlowReaction from Default WorkerLatencyProfile 2087685 - KASO should not be able to apply LowUpdateSlowReaction from Default WorkerLatencyProfile 2087687 - MCO does not generate event when user applies Default -> LowUpdateSlowReaction WorkerLatencyProfile 2087764 - Rewrite the registry backend will hit error 2087771 - [tracker] NetworkManager 1.36.0 loses DHCP lease and doesn't try again 2087772 - Bindable badge causes some layout issues with the side panel of bindable operator backed services 2087942 - CNO references images that are divergent from ART 2087944 - KafkaSink Node visualized incorrectly 2087983 - remove etcd_perf before restore 2087993 - PreflightValidation many "msg":"TODO: preflight checks" in the operator log 2088130 - oc-mirror init does not allow for automated testing 2088161 - Match dockerfile image name with the name used in the release repo 2088248 - Create HANA VM does not use values from customized HANA templates 2088304 - ose-console: enable source containers for open source requirements 2088428 - clusteroperator/baremetal stays in progressing: Applying metal3 resources state on a fresh install 2088431 - AvoidBuggyIPs field of addresspool should be removed 2088483 - oc adm catalog mirror returns 0 even if there are errors 2088489 - Topology list does not allow selecting an application group anymore (again) 2088533 - CRDs for openshift.io should have subresource.status failes on sharedconfigmaps.sharedresource and sharedsecrets.sharedresource 2088535 - MetalLB: Enable debug log level for downstream CI 2088541 - Default CatalogSources in openshift-marketplace namespace keeps throwing pod security admission warnings 'would violate PodSecurity "restricted:v1.24"'
2088561 - BMH unable to start inspection: File name too long 2088634 - oc-mirror does not fail when catalog is invalid 2088660 - Nutanix IPI installation inside container failed 2088663 - Better to change the default value of --max-per-registry to 6 2089163 - NMState CRD out of sync with code 2089191 - should remove grafana from cluster-monitoring-config configmap in hypershift cluster 2089224 - openshift-monitoring/cluster-monitoring-config configmap always revert to default setting 2089254 - CAPI operator: Rotate token secret if its older than 30 minutes 2089276 - origin tests for egressIP and azure fail 2089295 - [Nutanix]machine stuck in Deleting phase when delete a machineset whose replicas>=2 and machine is Provisioning phase on Nutanix 2089309 - [OCP 4.11] Ironic inspector image fails to clean disks that are part of a multipath setup if they are passive paths 2089334 - All cloud providers should use service account credentials 2089344 - Failed to deploy simple-kmod 2089350 - Rebase sdn to 1.24 2089387 - LSO not taking mpath. ignoring device 2089392 - 120 node baremetal upgrade from 4.9.29 --> 4.10.13 crashloops on machine-approver 2089396 - oc-mirror does not show pruned image plan 2089405 - New topology package shows gray build icons instead of green/red icons for builds and pipelines 2089419 - do not block 4.10 to 4.11 upgrades if an existing CSI driver is found. Instead, warn about presence of third party CSI driver 2089488 - Special resources are missing the managementState field 2089563 - Update Power VS MAPI to use api's from openshift/api repo 2089574 - UWM prometheus-operator pod can't start up due to no master node in hypershift cluster 2089675 - Could not move Serverless Service without Revision (or while starting?) 2089681 - [Hypershift] EgressIP doesn't work in hypershift guest cluster 2089682 - Installer expects all nutanix subnets to have a cluster reference which is not the case for e.g. 
overlay networks 2089687 - alert message of MCDDrainError needs to be updated for new drain controller 2089696 - CR reconciliation is stuck in daemonset lifecycle 2089716 - [4.11][reliability]one worker node became NotReady on which ovnkube-node pod's memory increased sharply 2089719 - acm-simple-kmod fails to build 2089720 - [Hypershift] ICSP doesn't work for the guest cluster 2089743 - acm-ice fails to deploy: helm chart does not appear to be a gzipped archive 2089773 - Pipeline status filter and status colors doesn't work correctly with non-english languages 2089775 - keepalived can keep ingress VIP on wrong node under certain circumstances 2089805 - Config duration metrics aren't exposed 2089827 - MetalLB CI - backward compatible tests are failing due to the order of delete 2089909 - PTP e2e testing not working on SNO cluster 2089918 - oc-mirror skip-missing still returns 404 errors when images do not exist 2089930 - Bump OVN to 22.06 2089933 - Pods do not post readiness status on termination 2089968 - Multus CNI daemonset should use hostPath mounts with type: directory 2089973 - bump libs to k8s 1.24 for OCP 4.11 2089996 - Unnecessary yarn install runs in e2e tests 2090017 - Enable source containers to meet open source requirements 2090049 - destroying GCP cluster which has a compute node without infra id in name would fail to delete 2 k8s firewall-rules and VPC network 2090092 - Will hit error if specify the channel not the latest 2090151 - [RHEL scale up] increase the wait time so that the node has enough time to get ready 2090178 - VM SSH command generated by UI points at api VIP 2090182 - [Nutanix]Create a machineset with invalid image, machine stuck in "Provisioning" phase 2090236 - Only reconcile annotations and status for clusters 2090266 - oc adm release extract is failing on mutli arch image 2090268 - [AWS EFS] Operator not getting installed successfully on Hypershift Guest cluster 2090336 - Multus logging should be disabled prior to release 2090343 - 
Multus debug logging should be enabled temporarily for debugging podsandbox creation failures. 2090358 - Initiating drain log message is displayed before the drain actually starts 2090359 - Nutanix mapi-controller: misleading error message when the failure is caused by wrong credentials 2090405 - [tracker] weird port mapping with asymmetric traffic [rhel-8.6.0.z] 2090430 - gofmt code 2090436 - It takes 30min-60min to update the machine count in custom MachineConfigPools (MCPs) when a node is removed from the pool 2090437 - Bump CNO to k8s 1.24 2090465 - golang version mismatch 2090487 - Change default SNO Networking Type and disallow OpenShiftSDN a supported networking Type 2090537 - failure in ovndb migration when db is not ready in HA mode 2090549 - dpu-network-operator shall be able to run on amd64 arch platform 2090621 - Metal3 plugin does not work properly with updated NodeMaintenance CRD 2090627 - Git commit and branch are empty in MetalLB log 2090692 - Bump to latest 1.24 k8s release 2090730 - must-gather should include multus logs. 
2090731 - nmstate deploys two instances of webhook on a single-node cluster 2090751 - oc image mirror skip-missing flag does not skip images 2090755 - MetalLB: BGPAdvertisement validation allows duplicate entries for ip pool selector, ip address pools, node selector and bgp peers 2090774 - Add Readme to plugin directory 2090794 - MachineConfigPool cannot apply a configuration after fixing the pods that caused a drain alert 2090809 - gm.ClockClass invalid syntax parse error in linux ptp daemon logs 2090816 - OCP 4.8 Baremetal IPI installation failure: "Bootstrap failed to complete: timed out waiting for the condition" 2090819 - oc-mirror does not catch invalid registry input when a namespace is specified 2090827 - Rebase CoreDNS to 1.9.2 and k8s 1.24 2090829 - Bump OpenShift router to k8s 1.24 2090838 - Flaky test: ignore flapping host interface 'tunbr' 2090843 - addLogicalPort() performance/scale optimizations 2090895 - Dynamic plugin nav extension "startsWith" property does not work 2090929 - [etcd] cluster-backup.sh script has a conflict to use the '/etc/kubernetes/static-pod-certs' folder if a custom API certificate is defined 2090993 - [AI Day2] Worker node overview page crashes in Openshift console with TypeError 2091029 - Cancel rollout action only appears when rollout is completed 2091030 - Some BM may fail booting with default bootMode strategy 2091033 - [Descheduler]: provide ability to override included/excluded namespaces 2091087 - ODC Helm backend Owners file needs updates 2091106 - Dependabot alert: Unhandled exception in gopkg.in/yaml.v3 2091142 - Dependabot alert: Unhandled exception in gopkg.in/yaml.v3 2091167 - IPsec runtime enabling not work in hypershift 2091218 - Update Dev Console Helm backend to use helm 3.9.0 2091433 - Update AWS instance types 2091542 - Error Loading/404 not found page shown after clicking "Current namespace only" 2091547 - Internet connection test with proxy permanently fails 2091567 - oVirt CSI driver should use latest 
go-ovirt-client 2091595 - Alertmanager configuration can't use OpsGenie's entity field when AlertmanagerConfig is enabled 2091599 - PTP Dual Nic | Extend Events 4.11 - Up/Down master interface affects all the other interface in the same NIC accoording the events and metric 2091603 - WebSocket connection restarts when switching tabs in WebTerminal 2091613 - simple-kmod fails to build due to missing KVC 2091634 - OVS 2.15 stops handling traffic once ovs-dpctl(2.17.2) is used against it 2091730 - MCO e2e tests are failing with "No token found in openshift-monitoring secrets" 2091746 - "Oh no! Something went wrong" shown after user creates MCP without 'spec' 2091770 - CVO gets stuck downloading an upgrade, with the version pod complaining about invalid options 2091854 - clusteroperator status filter doesn't match all values in Status column 2091901 - Log stream paused right after updating log lines in Web Console in OCP4.10 2091902 - unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server has received too many requests and has asked us to try again later 2091990 - wrong external-ids for ovn-controller lflow-cache-limit-kb 2092003 - PR 3162 | BZ 2084450 - invalid URL schema for AWS causes tests to perma fail and break the cloud-network-config-controller 2092041 - Bump cluster-dns-operator to k8s 1.24 2092042 - Bump cluster-ingress-operator to k8s 1.24 2092047 - Kube 1.24 rebase for cloud-network-config-controller 2092137 - Search doesn't show all entries when name filter is cleared 2092296 - Change Default MachineCIDR of Power VS Platform from 10.x to 192.168.0.0/16 2092390 - [RDR] [UI] Multiple instances of Object Bucket, Object Bucket Claims and 'Overview' tab is present under Storage section on the Hub cluster when navigated back from the Managed cluster using the Hybrid console dropdown 2092395 - etcdHighNumberOfFailedGRPCRequests alerts with wrong results 2092408 - Wrong icon is used in the virtualization overview permissions card 
2092414 - In virtualization overview "running vm per templates" template list can be improved 2092442 - Minimum time between drain retries is not the expected one 2092464 - marketplace catalog defaults to v4.10 2092473 - libovsdb performance backports 2092495 - ovn: use up to 4 northd threads in non-SNO clusters 2092502 - [azure-file-csi-driver] Stop shipping a NFS StorageClass 2092509 - Invalid memory address error if non existing caBundle is configured in DNS-over-TLS using ForwardPlugins 2092572 - acm-simple-kmod chart should create the namespace on the spoke cluster 2092579 - Don't retry pod deletion if objects are not existing 2092650 - [BM IPI with Provisioning Network] Worker nodes are not provisioned: ironic-agent is stuck before writing into disks 2092703 - Incorrect mount propagation information in container status 2092815 - can't delete the unwanted image from registry by oc-mirror 2092851 - [Descheduler]: allow to customize the LowNodeUtilization strategy thresholds 2092867 - make repository name unique in acm-ice/acm-simple-kmod examples 2092880 - etcdHighNumberOfLeaderChanges returns incorrect number of leadership changes 2092887 - oc-mirror list releases command uses filter-options flag instead of filter-by-os 2092889 - Incorrect updating of EgressACLs using direction "from-lport" 2092918 - CVE-2022-30321 go-getter: unsafe download (issue 1 of 3) 2092923 - CVE-2022-30322 go-getter: unsafe download (issue 2 of 3) 2092925 - CVE-2022-30323 go-getter: unsafe download (issue 3 of 3) 2092928 - CVE-2022-26945 go-getter: command injection vulnerability 2092937 - WebScale: OVN-k8s forwarding to external-gw over the secondary interfaces failing 2092966 - [OCP 4.11] [azure] /etc/udev/rules.d/66-azure-storage.rules missing from initramfs 2093044 - Azure machine-api-provider-azure Availability Set Name Length Limit 2093047 - Dynamic Plugins: Generated API markdown duplicates checkAccess
and useAccessReview doc 2093126 - [4.11] Bootimage bump tracker 2093236 - DNS operator stopped reconciling after 4.10 to 4.11 upgrade | 4.11 nightly to 4.11 nightly upgrade 2093288 - Default catalogs fails liveness/readiness probes 2093357 - Upgrading sno spoke with acm-ice, causes the sno to get unreachable 2093368 - Installer orphans FIPs created for LoadBalancer Services on cluster destroy
2093396 - Remove node-tainting for too-small MTU 2093445 - ManagementState reconciliation breaks SR 2093454 - Router proxy protocol doesn't work with dual-stack (IPv4 and IPv6) clusters 2093462 - Ingress Operator isn't reconciling the ingress cluster operator object 2093586 - Topology: Ctrl+space opens the quick search modal, but doesn't close it again 2093593 - Import from Devfile shows configuration options that shoudn't be there 2093597 - Import: Advanced option sentence is splited into two parts and headlines has no padding 2093600 - Project access tab should apply new permissions before it delete old ones 2093601 - Project access page doesn't allow the user to update the settings twice (without manually reload the content) 2093783 - Should bump cluster-kube-descheduler-operator to kubernetes version V1.24 2093797 - 'oc registry login' with serviceaccount function need update 2093819 - An etcd member for a new machine was never added to the cluster 2093930 - Gather console helm install totals metric 2093957 - Oc-mirror write dup metadata to registry backend 2093986 - Podsecurity violation error getting logged for pod-identity-webhook 2093992 - Cluster version operator acknowledges upgrade failing on periodic-ci-openshift-release-master-nightly-4.11-e2e-metal-ipi-upgrade-ovn-ipv6 2094023 - Add Git Flow - Template Labels for Deployment show as DeploymentConfig 2094024 - bump oauth-apiserver deps to include 1.23.1 k8s that fixes etcd blips 2094039 - egressIP panics with nil pointer dereference 2094055 - Bump coreos-installer for s390x Secure Execution 2094071 - No runbook created for SouthboundStale alert 2094088 - Columns in NBDB may never be updated by OVNK 2094104 - Demo dynamic plugin image tests should be skipped when testing console-operator 2094152 - Alerts in the virtualization overview status card aren't filtered 2094196 - Add default and validating webhooks for Power VS MAPI 2094227 - Topology: Create Service Binding should not be the last option (even 
under delete) 2094239 - custom pool Nodes with 0 nodes are always populated in progress bar 2094303 - If og is configured with sa, operator installation will be failed. 2094335 - [Nutanix] - debug logs are enabled by default in machine-controller 2094342 - apirequests limits of Cluster CAPI Operator are too low for Azure platform 2094438 - Make AWS URL parsing more lenient for GetNodeEgressIPConfiguration 2094525 - Allow automatic upgrades for efs operator 2094532 - ovn-windows CI jobs are broken 2094675 - PTP Dual Nic | Extend Events 4.11 - when kill the phc2sys We have notification for the ptp4l physical master moved to free run 2094694 - [Nutanix] No cluster name sanity validation - cluster name with a dot (".") character 2094704 - Verbose log activated on kube-rbac-proxy in deployment prometheus-k8s 2094801 - Kuryr controller keep restarting when handling IPs with leading zeros 2094806 - Machine API oVrit component should use K8s 1.24 dependencies 2094816 - Kuryr controller restarts when over quota 2094833 - Repository overview page does not show default PipelineRun template for developer user 2094857 - CloudShellTerminal loops indefinitely if DevWorkspace CR goes into failed state 2094864 - Rebase CAPG to latest changes 2094866 - oc-mirror does not always delete all manifests associated with an image during pruning 2094896 - Run 'openshift-install agent create image' has segfault exception if cluster-manifests directory missing 2094902 - Fix installer cross-compiling 2094932 - MGMT-10403 Ingress should enable single-node cluster expansion on upgraded clusters 2095049 - managed-csi StorageClass does not create PVs 2095071 - Backend tests fails after devfile registry update 2095083 - Observe > Dashboards: Graphs may change a lot on automatic refresh 2095110 - [ovn] northd container termination script must use bash 2095113 - [ovnkube] bump to openvswitch2.17-2.17.0-22.el8fdp 2095226 - Added changes to verify cloud connection and dhcpservices quota of a powervs 
instance 2095229 - ingress-operator pod in CrashLoopBackOff in 4.11 after upgrade starting in 4.6 due to go panic 2095231 - Kafka Sink sidebar in topology is empty 2095247 - Event sink form doesn't show channel as sink until app is refreshed 2095248 - [vSphere-CSI-Driver] does not report volume count limits correctly caused pod with multi volumes maybe schedule to not satisfied volume count node 2095256 - Samples Owner needs to be Updated 2095264 - ovs-configuration.service fails with Error: Failed to modify connection 'ovs-if-br-ex': failed to update connection: error writing to file '/etc/NetworkManager/systemConnectionsMerged/ovs-if-br-ex.nmconnection' 2095362 - oVirt CSI driver operator should use latest go-ovirt-client 2095574 - e2e-agnostic CI job fails 2095687 - Debug Container shown for build logs and on click ui breaks 2095703 - machinedeletionhooks doesn't work in vsphere cluster and BM cluster 2095716 - New PSA component for Pod Security Standards enforcement is refusing openshift-operators ns 2095756 - CNO panics with concurrent map read/write 2095772 - Memory requests for ovnkube-master containers are over-sized 2095917 - Nutanix set osDisk with diskSizeGB rather than diskSizeMiB 2095941 - DNS Traffic not kept local to zone or node when Calico SDN utilized 2096053 - Builder Image icons in Git Import flow are hard to see in Dark mode 2096226 - crio fails to bind to tentative IP, causing service failure since RHOCS was rebased on RHEL 8.6 2096315 - NodeClockNotSynchronising alert's severity should be critical 2096350 - Web console doesn't display webhook errors for upgrades 2096352 - Collect whole journal in gather 2096380 - acm-simple-kmod references deprecated KVC example 2096392 - Topology node icons are not properly visible in Dark mode 2096394 - Add page Card items background color does not match with column background color in Dark mode 2096413 - br-ex not created due to default bond interface having a different mac address than expected 2096496 - 
FIPS issue on OCP SNO with RT Kernel via performance profile 2096605 - [vsphere] no validation checking for diskType 2096691 - [Alibaba 4.11] Specifying ResourceGroup id in install-config.yaml, New pv are still getting created to default ResourceGroups 2096855 - oc adm release new
failed with error when use an existing multi-arch release image as input 2096905 - Openshift installer should not use the prism client embedded in nutanix terraform provider 2096908 - Dark theme issue in pipeline builder, Helm rollback form, and Git import 2097000 - KafkaConnections disappear from Topology after creating KafkaSink in Topology 2097043 - No clean way to specify operand issues to KEDA OLM operator 2097047 - MetalLB: matchExpressions used in CR like L2Advertisement, BGPAdvertisement, BGPPeers allow duplicate entries 2097067 - ClusterVersion history pruner does not always retain initial completed update entry 2097153 - poor performance on API call to vCenter ListTags with thousands of tags 2097186 - PSa autolabeling in 4.11 env upgraded from 4.10 does not work due to missing RBAC objects 2097239 - Change Lower CPU limits for Power VS cloud 2097246 - Kuryr: verify and unit jobs failing due to upstream OpenStack dropping py36 support 2097260 - openshift-install create manifests failed for Power VS platform 2097276 - MetalLB CI deploys the operator via manifests and not using the csv 2097282 - chore: update external-provisioner to the latest upstream release 2097283 - chore: update external-snapshotter to the latest upstream release 2097284 - chore: update external-attacher to the latest upstream release 2097286 - chore: update node-driver-registrar to the latest upstream release 2097334 - oc plugin help shows 'kubectl' 2097346 - Monitoring must-gather doesn't seem to be working anymore in 4.11 2097400 - Shared Resource CSI Driver needs additional permissions for validation webhook 2097454 - Placeholder bug for OCP 4.11.0 metadata release 2097503 - chore: rebase against latest external-resizer 2097555 - IngressControllersNotUpgradeable: load balancer service has been modified; changes must be reverted before upgrading 2097607 - Add Power VS support to Webhooks tests in actuator e2e test 2097685 - Ironic-agent can't restart because of existing container 
2097716 - settings under httpConfig is dropped with AlertmanagerConfig v1beta1 2097810 - Required Network tools missing for Testing e2e PTP 2097832 - clean up unused IPv6DualStackNoUpgrade feature gate 2097940 - openshift-install destroy cluster traps if vpcRegion not specified 2097954 - 4.11 installation failed at monitoring and network clusteroperators with error "conmon: option parsing failed: Unknown option --log-global-size-max" making all jobs failing 2098172 - oc-mirror does not validate the registry in the storage config 2098175 - invalid license in python-dataclasses-0.8-2.el8 spec 2098177 - python-pint-0.10.1-2.el8 has unused Patch0 in spec file 2098242 - typo in SRO specialresourcemodule 2098243 - Add error check to Platform create for Power VS 2098392 - [OCP 4.11] Ironic cannot match "wwn" rootDeviceHint for a multipath device 2098508 - Control-plane-machine-set-operator report panic 2098610 - No need to check the push permission with --manifests-only option 2099293 - oVirt cluster API provider should use latest go-ovirt-client 2099330 - Edit application grouping is shown to user with view only access in a cluster 2099340 - CAPI e2e tests for AWS are missing 2099357 - ovn-kubernetes needs explicit RBAC coordination leases for 1.24 bump 2099358 - Dark mode+Topology update: Unexpected selected+hover border and background colors for app groups 2099528 - Layout issue: No spacing in delete modals 2099561 - Prometheus returns HTTP 500 error on /favicon.ico 2099582 - Format and update Repository overview content 2099611 - Failures on etcd-operator watch channels 2099637 - Should print error when use --keep-manifest-list=false for manifestlist image 2099654 - Topology performance: Endless rerender loop when showing a Http EventSink (KameletBinding) 2099668 - KubeControllerManager should degrade when GC stops working 2099695 - Update CAPG after rebase 2099751 - specialresourcemodule stacktrace while looping over build status 2099755 - EgressIP node's mgmtIP
reachability configuration option 2099763 - Update icons for event sources and sinks in topology, Add page, and context menu 2099811 - UDP Packet loss in OpenShift using IPv6 [upcall] 2099821 - exporting a pointer for the loop variable 2099875 - The speaker won't start if there's another component on the host listening on 8080 2099899 - oc-mirror looks for layers in the wrong repository when searching for release images during publishing 2099928 - [FJ OCP4.11 Bug]: Add unit tests to image_customization_test file 2099968 - [Azure-File-CSI] failed to provisioning volume in ARO cluster 2100001 - Sync upstream v1.22.0 downstream 2100007 - Run bundle-upgrade failed from the traditional File-Based Catalog installed operator 2100033 - OCP 4.11 IPI - Some csr remain "Pending" post deployment 2100038 - failure to update special-resource-lifecycle table during update Event 2100079 - SDN needs explicit RBAC coordination leases for 1.24 bump 2100138 - release info --bugs has no differentiator between Jira and Bugzilla 2100155 - kube-apiserver-operator should raise an alert when there is a Pod Security admission violation 2100159 - Dark theme: Build icon for pending status is not inverted in topology sidebar 2100323 - Sqlit-based catsrc cannot be ready due to "Error: open ./db-xxxx: permission denied" 2100347 - KASO retains old config values when switching from Medium/Default to empty worker latency profile 2100356 - Remove Condition tab and create option from console as it is deprecated in OSP-1.8 2100439 - [gce-pd] GCE PD in-tree storage plugin tests not running 2100496 - [OCPonRHV]-oVirt API returns affinity groups without a description field 2100507 - Remove redundant log lines from obj_retry.go 2100536 - Update API to allow EgressIP node reachability check 2100601 - Update CNO to allow EgressIP node reachability check 2100643 - [Migration] [GCP]OVN can not rollback to SDN 2100644 - openshift-ansible FTBFS on RHEL8 2100669 - Telemetry should not log the full path if it 
contains a username 2100749 - [OCP 4.11] multipath support needs multipath modules 2100825 - Update machine-api-powervs go modules to latest version 2100841 - tiny openshift-install usability fix for setting KUBECONFIG 2101460 - An etcd member for a new machine was never added to the cluster 2101498 - Revert Bug 2082599: add upper bound to number of failed attempts 2102086 - The base image is still 4.10 for operator-sdk 1.22 2102302 - Dummy bug for 4.10 backports 2102362 - Valid regions should be allowed in GCP install config 2102500 - Kubernetes NMState pods can not evict due to PDB on an SNO cluster 2102639 - Drain happens before other image-registry pod is ready to service requests, causing disruption 2102782 - topolvm-controller get into CrashLoopBackOff few minutes after install 2102834 - [cloud-credential-operator]container has runAsNonRoot and image will run as root 2102947 - [VPA] recommender is logging errors for pods with init containers 2103053 - [4.11] Backport Prow CI improvements from master 2103075 - Listing secrets in all namespaces with a specific labelSelector does not work properly 2103080 - br-ex not created due to default bond interface having a different mac address than expected 2103177 - disabling ipv6 router advertisements using "all" does not disable it on secondary interfaces 2103728 - Carry HAProxy patch 'BUG/MEDIUM: h2: match absolute-path not path-absolute for :path' 2103749 - MachineConfigPool is not getting updated 2104282 - heterogeneous arch: oc adm extract encodes arch specific release payload pullspec rather than the manifestlisted pullspec 2104432 - [dpu-network-operator] Updating images to be consistent with ART 2104552 - kube-controller-manager operator 4.11.0-rc.0 degraded on disabled monitoring stack 2104561 - 4.10 to 4.11 update: Degraded node: unexpected on-disk state: mode mismatch for file: "/etc/crio/crio.conf.d/01-ctrcfg-pidsLimit"; expected: -rw-r--r--/420/0644; received: ----------/0/0 2104589 - must-gather namespace 
should have "privileged" warn and audit pod security labels besides enforce 2104701 - In CI 4.10 HAProxy must-gather takes longer than 10 minutes 2104717 - NetworkPolicies: ovnkube-master pods crashing due to panic: "invalid memory address or nil pointer dereference" 2104727 - Bootstrap node should honor http proxy 2104906 - Uninstall fails with Observed a panic: runtime.boundsError 2104951 - Web console doesn't display webhook errors for upgrades 2104991 - Completed pods may not be correctly cleaned up 2105101 - NodeIP is used instead of EgressIP if egressPod is recreated within 60 seconds 2105106 - co/node-tuning: Waiting for 15/72 Profiles to be applied 2105146 - Degraded=True noise with: UpgradeBackupControllerDegraded: unable to retrieve cluster version, no completed update was found in cluster version status history 2105167 - BuildConfig throws error when using a label with a / in it 2105334 - vmware-vsphere-csi-driver-controller can't use host port error on e2e-vsphere-serial 2105382 - Add a validation webhook for Nutanix machine provider spec in Machine API Operator 2105468 - The ccoctl does not seem to know how to leverage the VMs service account to talk to GCP APIs. 2105937 - telemeter golangci-lint outdated blocking ART PRs that update to Go1.18 2106051 - Unable to deploy acm-ice using latest SRO 4.11 build 2106058 - vSphere defaults to SecureBoot on; breaks installation of out-of-tree drivers [4.11.0] 2106062 - [4.11] Bootimage bump tracker 2106116 - IngressController spec.tuningOptions.healthCheckInterval validation allows invalid values such as "0abc" 2106163 - Samples ImageStreams vs. registry.redhat.io: unsupported: V2 schema 1 manifest digests are no longer supported for image pulls 2106313 - bond-cni: backport bond-cni GA items to 4.11 2106543 - Typo in must-gather release-4.10 2106594 - crud/other-routes.spec.ts Cypress test failing at a high rate in CI 2106723 - [4.11] Upgrade from 4.11.0-rc0 -> 4.11.0-rc.1 failed.
rpm-ostree status shows No space left on device 2106855 - [4.11.z] externalTrafficPolicy=Local is not working in local gateway mode if ovnkube-node is restarted 2107493 - ReplicaSet prometheus-operator-admission-webhook has timed out progressing 2107501 - metallb greenwave tests failure 2107690 - Driver Container builds fail with "error determining starting point for build: no FROM statement found" 2108175 - etcd backup seems to not be triggered in 4.10.18-->4.10.20 upgrade 2108617 - [oc adm release] extraction of the installer against a manifestlisted payload referenced by tag leads to a bad release image reference 2108686 - rpm-ostreed: start limit hit easily 2110505 - [Upgrade]deployment openshift-machine-api/machine-api-operator has a replica failure FailedCreate 2110715 - openshift-controller-manager(-operator) namespace should clear run-level annotations 2111055 - dummy bug for 4.10.z bz2110938
References:
https://access.redhat.com/security/cve/CVE-2018-25009
https://access.redhat.com/security/cve/CVE-2018-25010
https://access.redhat.com/security/cve/CVE-2018-25012
https://access.redhat.com/security/cve/CVE-2018-25013
https://access.redhat.com/security/cve/CVE-2018-25014
https://access.redhat.com/security/cve/CVE-2018-25032
https://access.redhat.com/security/cve/CVE-2019-5827
https://access.redhat.com/security/cve/CVE-2019-13750
https://access.redhat.com/security/cve/CVE-2019-13751
https://access.redhat.com/security/cve/CVE-2019-17594
https://access.redhat.com/security/cve/CVE-2019-17595
https://access.redhat.com/security/cve/CVE-2019-18218
https://access.redhat.com/security/cve/CVE-2019-19603
https://access.redhat.com/security/cve/CVE-2019-20838
https://access.redhat.com/security/cve/CVE-2020-13435
https://access.redhat.com/security/cve/CVE-2020-14155
https://access.redhat.com/security/cve/CVE-2020-17541
https://access.redhat.com/security/cve/CVE-2020-19131
https://access.redhat.com/security/cve/CVE-2020-24370
https://access.redhat.com/security/cve/CVE-2020-28493
https://access.redhat.com/security/cve/CVE-2020-35492
https://access.redhat.com/security/cve/CVE-2020-36330
https://access.redhat.com/security/cve/CVE-2020-36331
https://access.redhat.com/security/cve/CVE-2020-36332
https://access.redhat.com/security/cve/CVE-2021-3481
https://access.redhat.com/security/cve/CVE-2021-3580
https://access.redhat.com/security/cve/CVE-2021-3634
https://access.redhat.com/security/cve/CVE-2021-3672
https://access.redhat.com/security/cve/CVE-2021-3695
https://access.redhat.com/security/cve/CVE-2021-3696
https://access.redhat.com/security/cve/CVE-2021-3697
https://access.redhat.com/security/cve/CVE-2021-3737
https://access.redhat.com/security/cve/CVE-2021-4115
https://access.redhat.com/security/cve/CVE-2021-4156
https://access.redhat.com/security/cve/CVE-2021-4189
https://access.redhat.com/security/cve/CVE-2021-20095
https://access.redhat.com/security/cve/CVE-2021-20231
https://access.redhat.com/security/cve/CVE-2021-20232
https://access.redhat.com/security/cve/CVE-2021-23177
https://access.redhat.com/security/cve/CVE-2021-23566
https://access.redhat.com/security/cve/CVE-2021-23648
https://access.redhat.com/security/cve/CVE-2021-25219
https://access.redhat.com/security/cve/CVE-2021-31535
https://access.redhat.com/security/cve/CVE-2021-31566
https://access.redhat.com/security/cve/CVE-2021-36084
https://access.redhat.com/security/cve/CVE-2021-36085
https://access.redhat.com/security/cve/CVE-2021-36086
https://access.redhat.com/security/cve/CVE-2021-36087
https://access.redhat.com/security/cve/CVE-2021-38185
https://access.redhat.com/security/cve/CVE-2021-38593
https://access.redhat.com/security/cve/CVE-2021-40528
https://access.redhat.com/security/cve/CVE-2021-41190
https://access.redhat.com/security/cve/CVE-2021-41617
https://access.redhat.com/security/cve/CVE-2021-42771
https://access.redhat.com/security/cve/CVE-2021-43527
https://access.redhat.com/security/cve/CVE-2021-43818
https://access.redhat.com/security/cve/CVE-2021-44225
https://access.redhat.com/security/cve/CVE-2021-44906
https://access.redhat.com/security/cve/CVE-2022-0235
https://access.redhat.com/security/cve/CVE-2022-0778
https://access.redhat.com/security/cve/CVE-2022-1012
https://access.redhat.com/security/cve/CVE-2022-1215
https://access.redhat.com/security/cve/CVE-2022-1271
https://access.redhat.com/security/cve/CVE-2022-1292
https://access.redhat.com/security/cve/CVE-2022-1586
https://access.redhat.com/security/cve/CVE-2022-1621
https://access.redhat.com/security/cve/CVE-2022-1629
https://access.redhat.com/security/cve/CVE-2022-1706
https://access.redhat.com/security/cve/CVE-2022-1729
https://access.redhat.com/security/cve/CVE-2022-2068
https://access.redhat.com/security/cve/CVE-2022-2097
https://access.redhat.com/security/cve/CVE-2022-21698
https://access.redhat.com/security/cve/CVE-2022-22576
https://access.redhat.com/security/cve/CVE-2022-23772
https://access.redhat.com/security/cve/CVE-2022-23773
https://access.redhat.com/security/cve/CVE-2022-23806
https://access.redhat.com/security/cve/CVE-2022-24407
https://access.redhat.com/security/cve/CVE-2022-24675
https://access.redhat.com/security/cve/CVE-2022-24903
https://access.redhat.com/security/cve/CVE-2022-24921
https://access.redhat.com/security/cve/CVE-2022-25313
https://access.redhat.com/security/cve/CVE-2022-25314
https://access.redhat.com/security/cve/CVE-2022-26691
https://access.redhat.com/security/cve/CVE-2022-26945
https://access.redhat.com/security/cve/CVE-2022-27191
https://access.redhat.com/security/cve/CVE-2022-27774
https://access.redhat.com/security/cve/CVE-2022-27776
https://access.redhat.com/security/cve/CVE-2022-27782
https://access.redhat.com/security/cve/CVE-2022-28327
https://access.redhat.com/security/cve/CVE-2022-28733
https://access.redhat.com/security/cve/CVE-2022-28734
https://access.redhat.com/security/cve/CVE-2022-28735
https://access.redhat.com/security/cve/CVE-2022-28736
https://access.redhat.com/security/cve/CVE-2022-28737
https://access.redhat.com/security/cve/CVE-2022-29162
https://access.redhat.com/security/cve/CVE-2022-29810
https://access.redhat.com/security/cve/CVE-2022-29824
https://access.redhat.com/security/cve/CVE-2022-30321
https://access.redhat.com/security/cve/CVE-2022-30322
https://access.redhat.com/security/cve/CVE-2022-30323
https://access.redhat.com/security/cve/CVE-2022-32250
https://access.redhat.com/security/updates/classification/#important
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2022 Red Hat, Inc.
--
RHSA-announce mailing list
RHSA-announce@redhat.com
https://listman.redhat.com/mailman/listinfo/rhsa-announce
{ "@context": { "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#", "affected_products": { "@id": "https://www.variotdbs.pl/ref/affected_products" }, "configurations": { "@id": "https://www.variotdbs.pl/ref/configurations" }, "credits": { "@id": "https://www.variotdbs.pl/ref/credits" }, "cvss": { "@id": "https://www.variotdbs.pl/ref/cvss/" }, "description": { "@id": "https://www.variotdbs.pl/ref/description/" }, "exploit_availability": { "@id": "https://www.variotdbs.pl/ref/exploit_availability/" }, "external_ids": { "@id": "https://www.variotdbs.pl/ref/external_ids/" }, "iot": { "@id": "https://www.variotdbs.pl/ref/iot/" }, "iot_taxonomy": { "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/" }, "patch": { "@id": "https://www.variotdbs.pl/ref/patch/" }, "problemtype_data": { "@id": "https://www.variotdbs.pl/ref/problemtype_data/" }, "references": { "@id": "https://www.variotdbs.pl/ref/references/" }, "sources": { "@id": "https://www.variotdbs.pl/ref/sources/" }, "sources_release_date": { "@id": "https://www.variotdbs.pl/ref/sources_release_date/" }, "sources_update_date": { "@id": "https://www.variotdbs.pl/ref/sources_update_date/" }, "threat_type": { "@id": "https://www.variotdbs.pl/ref/threat_type/" }, "title": { "@id": "https://www.variotdbs.pl/ref/title/" }, "type": { "@id": "https://www.variotdbs.pl/ref/type/" } }, "@id": "https://www.variotdbs.pl/vuln/VAR-202105-1459", "affected_products": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/affected_products#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "model": "enterprise linux", "scope": "eq", "trust": 1.0, "vendor": "redhat", "version": "8.0" }, { "model": "ipados", "scope": "lt", "trust": 1.0, "vendor": "apple", "version": "14.7" }, { "model": "iphone os", "scope": "lt", "trust": 1.0, "vendor": "apple", "version": "14.7" }, { "model": "ontap 
select deploy administration utility", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "linux", "scope": "eq", "trust": 1.0, "vendor": "debian", "version": "10.0" }, { "model": "linux", "scope": "eq", "trust": 1.0, "vendor": "debian", "version": "9.0" }, { "model": "libwebp", "scope": "lt", "trust": 1.0, "vendor": "webmproject", "version": "1.0.1" }, { "model": "libwebp", "scope": null, "trust": 0.8, "vendor": "the webm", "version": null }, { "model": "gnu/linux", "scope": null, "trust": 0.8, "vendor": "debian", "version": null }, { "model": "ontap select deploy administration utility", "scope": null, "trust": 0.8, "vendor": "netapp", "version": null }, { "model": "red hat enterprise linux", "scope": null, "trust": 0.8, "vendor": "\u30ec\u30c3\u30c9\u30cf\u30c3\u30c8", "version": null }, { "model": "ipados", "scope": null, "trust": 0.8, "vendor": "\u30a2\u30c3\u30d7\u30eb", "version": null } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2018-016579" }, { "db": "NVD", "id": "CVE-2020-36331" } ] }, "configurations": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/configurations#", "children": { "@container": "@list" }, "cpe_match": { "@container": "@list" }, "data": { "@container": "@list" }, "nodes": { "@container": "@list" } }, "data": [ { "CVE_data_version": "4.0", "nodes": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:webmproject:libwebp:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "1.0.1", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux:8.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:netapp:ontap_select_deploy_administration_utility:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:debian:debian_linux:9.0:*:*:*:*:*:*:*", "cpe_name": [], 
"vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:debian:debian_linux:10.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:apple:iphone_os:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "14.7", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:apple:ipados:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "14.7", "vulnerable": true } ], "operator": "OR" } ] } ], "sources": [ { "db": "NVD", "id": "CVE-2020-36331" } ] }, "credits": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/credits#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Red Hat", "sources": [ { "db": "PACKETSTORM", "id": "165286" }, { "db": "PACKETSTORM", "id": "165288" }, { "db": "PACKETSTORM", "id": "165296" }, { "db": "PACKETSTORM", "id": "165631" }, { "db": "PACKETSTORM", "id": "164967" }, { "db": "PACKETSTORM", "id": "168042" } ], "trust": 0.6 }, "cve": "CVE-2020-36331", "cvss": { "@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2" }, "cvssV3": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, "severity": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": "https://www.variotdbs.pl/ref/cvss/severity" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "cvssV2": [ { "acInsufInfo": false, "accessComplexity": "LOW", "accessVector": "NETWORK", "authentication": "NONE", "author": "NVD", "availabilityImpact": "PARTIAL", "baseScore": 6.4, "confidentialityImpact": "PARTIAL", "exploitabilityScore": 10.0, "impactScore": 4.9, "integrityImpact": "NONE", 
"obtainAllPrivilege": false, "obtainOtherPrivilege": false, "obtainUserPrivilege": false, "severity": "MEDIUM", "trust": 1.0, "userInteractionRequired": false, "vectorString": "AV:N/AC:L/Au:N/C:P/I:N/A:P", "version": "2.0" }, { "acInsufInfo": null, "accessComplexity": "Low", "accessVector": "Network", "authentication": "None", "author": "NVD", "availabilityImpact": "Partial", "baseScore": 6.4, "confidentialityImpact": "Partial", "exploitabilityScore": null, "id": "CVE-2020-36331", "impactScore": null, "integrityImpact": "None", "obtainAllPrivilege": null, "obtainOtherPrivilege": null, "obtainUserPrivilege": null, "severity": "Medium", "trust": 0.9, "userInteractionRequired": null, "vectorString": "AV:N/AC:L/Au:N/C:P/I:N/A:P", "version": "2.0" }, { "accessComplexity": "LOW", "accessVector": "NETWORK", "authentication": "NONE", "author": "VULHUB", "availabilityImpact": "PARTIAL", "baseScore": 6.4, "confidentialityImpact": "PARTIAL", "exploitabilityScore": 10.0, "id": "VHN-391910", "impactScore": 4.9, "integrityImpact": "NONE", "severity": "MEDIUM", "trust": 0.1, "vectorString": "AV:N/AC:L/AU:N/C:P/I:N/A:P", "version": "2.0" } ], "cvssV3": [ { "attackComplexity": "LOW", "attackVector": "NETWORK", "author": "NVD", "availabilityImpact": "HIGH", "baseScore": 9.1, "baseSeverity": "CRITICAL", "confidentialityImpact": "HIGH", "exploitabilityScore": 3.9, "impactScore": 5.2, "integrityImpact": "NONE", "privilegesRequired": "NONE", "scope": "UNCHANGED", "trust": 1.0, "userInteraction": "NONE", "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:H", "version": "3.1" }, { "attackComplexity": "Low", "attackVector": "Network", "author": "NVD", "availabilityImpact": "High", "baseScore": 9.1, "baseSeverity": "Critical", "confidentialityImpact": "High", "exploitabilityScore": null, "id": "CVE-2020-36331", "impactScore": null, "integrityImpact": "None", "privilegesRequired": "None", "scope": "Unchanged", "trust": 0.8, "userInteraction": "None", "vectorString": 
"CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:H", "version": "3.0" } ], "severity": [ { "author": "NVD", "id": "CVE-2020-36331", "trust": 1.8, "value": "CRITICAL" }, { "author": "CNNVD", "id": "CNNVD-202105-1382", "trust": 0.6, "value": "CRITICAL" }, { "author": "VULHUB", "id": "VHN-391910", "trust": 0.1, "value": "MEDIUM" }, { "author": "VULMON", "id": "CVE-2020-36331", "trust": 0.1, "value": "MEDIUM" } ] } ], "sources": [ { "db": "VULHUB", "id": "VHN-391910" }, { "db": "VULMON", "id": "CVE-2020-36331" }, { "db": "JVNDB", "id": "JVNDB-2018-016579" }, { "db": "CNNVD", "id": "CNNVD-202105-1382" }, { "db": "NVD", "id": "CVE-2020-36331" } ] }, "description": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "A flaw was found in libwebp in versions before 1.0.1. An out-of-bounds read was found in function ChunkAssignData. The highest threat from this vulnerability is to data confidentiality and to the service availability. libwebp Is vulnerable to an out-of-bounds read.Information is obtained and denial of service (DoS) It may be put into a state. libwebp is an encoding and decoding library for the WebP image format. Bugs fixed (https://bugzilla.redhat.com/):\n\n1944888 - CVE-2021-21409 netty: Request smuggling via content-length header\n2004133 - CVE-2021-37136 netty-codec: Bzip2Decoder doesn\u0027t allow setting size restrictions for decompressed data\n2004135 - CVE-2021-37137 netty-codec: SnappyFrameDecoder doesn\u0027t restrict chunk length and may buffer skippable chunks in an unnecessary way\n2030932 - CVE-2021-44228 log4j-core: Remote code execution in Log4j 2.x when logs contain an attacker-controlled string value\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nLOG-1971 - Applying cluster state is causing elasticsearch to hit an issue and become unusable\n\n6. 
Summary:\n\nThe Migration Toolkit for Containers (MTC) 1.6.3 is now available. Description:\n\nThe Migration Toolkit for Containers (MTC) enables you to migrate\nKubernetes resources, persistent volume data, and internal container images\nbetween OpenShift Container Platform clusters, using the MTC web console or\nthe Kubernetes API. Solution:\n\nFor details on how to install and use MTC, refer to:\n\nhttps://docs.openshift.com/container-platform/latest/migration_toolkit_for_containers/installing-mtc.html\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n2019088 - \"MigrationController\" CR displays syntax error when unquiescing applications\n2021666 - Route name longer than 63 characters causes direct volume migration to fail\n2021668 - \"MigrationController\" CR ignores the \"cluster_subdomain\" value for direct volume migration routes\n2022017 - CVE-2021-3948 mig-controller: incorrect namespaces handling may lead to not authorized usage of Migration Toolkit for Containers (MTC)\n2024966 - Manifests not used by Operator Lifecycle Manager must be removed from the MTC 1.6 Operator image\n2027196 - \"migration-controller\" pod goes into \"CrashLoopBackoff\" state if an invalid registry route is entered on the \"Clusters\" page of the web console\n2027382 - \"Copy oc describe/oc logs\" window does not close automatically after timeout\n2028841 - \"rsync-client\" container fails during direct volume migration with \"Address family not supported by protocol\" error\n2031793 - \"migration-controller\" pod goes into \"CrashLoopBackOff\" state if \"MigPlan\" CR contains an invalid \"includedResources\" resource\n2039852 - \"migration-controller\" pod goes into \"CrashLoopBackOff\" state if \"MigPlan\" CR contains an invalid \"destMigClusterRef\" or \"srcMigClusterRef\"\n\n5. Summary:\n\nAn update is now available for OpenShift Logging 5.3. 
Description:\n\nOpenshift Logging Bug Fix Release (5.3.0)\n\nSecurity Fix(es):\n\n* golang: x/net/html: infinite loop in ParseFragment (CVE-2021-33194)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. Bugs fixed (https://bugzilla.redhat.com/):\n\n1963232 - CVE-2021-33194 golang: x/net/html: infinite loop in ParseFragment\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nLOG-1168 - Disable hostname verification in syslog TLS settings\nLOG-1235 - Using HTTPS without a secret does not translate into the correct \u0027scheme\u0027 value in Fluentd\nLOG-1375 - ssl_ca_cert should be optional\nLOG-1378 - CLO should support sasl_plaintext(Password over http)\nLOG-1392 - In fluentd config, flush_interval can\u0027t be set with flush_mode=immediate\nLOG-1494 - Syslog output is serializing json incorrectly\nLOG-1555 - Fluentd logs emit transaction failed: error_class=NoMethodError while forwarding to external syslog server\nLOG-1575 - Rejected by Elasticsearch and unexpected json-parsing\nLOG-1735 - Regression introducing flush_at_shutdown \nLOG-1774 - The collector logs should be excluded in fluent.conf\nLOG-1776 - fluentd total_limit_size sets value beyond available space\nLOG-1822 - OpenShift Alerting Rules Style-Guide Compliance\nLOG-1859 - CLO Should not error and exit early on missing ca-bundle when cluster wide proxy is not enabled\nLOG-1862 - Unsupported kafka parameters when enabled Kafka SASL\nLOG-1903 - Fix the Display of ClusterLogging type in OLM\nLOG-1911 - CLF API changes to Opt-in to multiline error detection\nLOG-1918 - Alert `FluentdNodeDown` always firing \nLOG-1939 - Opt-in multiline detection breaks cloudwatch forwarding\n\n6. 
-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA512\n\n- -------------------------------------------------------------------------\nDebian Security Advisory DSA-4930-1 security@debian.org\nhttps://www.debian.org/security/ Moritz Muehlenhoff\nJune 10, 2021 https://www.debian.org/security/faq\n- -------------------------------------------------------------------------\n\nPackage : libwebp\nCVE ID : CVE-2018-25009 CVE-2018-25010 CVE-2018-25011 CVE-2018-25013 \n CVE-2018-25014 CVE-2020-36328 CVE-2020-36329 CVE-2020-36330 \n CVE-2020-36331 CVE-2020-36332\n\nMultiple vulnerabilities were discovered in libwebp, the implementation\nof the WebP image format, which could result in denial of service, memory\ndisclosure or potentially the execution of arbitrary code if malformed\nimages are processed. \n\nFor the stable distribution (buster), these problems have been fixed in\nversion 0.6.1-2+deb10u1. \n\nWe recommend that you upgrade your libwebp packages. \n\nFor the detailed security status of libwebp please refer to\nits security tracker page at:\nhttps://security-tracker.debian.org/tracker/libwebp\n\nFurther information about Debian Security Advisories, how to apply\nthese updates to your system and frequently asked questions can be\nfound at: https://www.debian.org/security/\n\nMailing list: debian-security-announce@lists.debian.org\n-----BEGIN PGP 
SIGNATURE-----\n\niQIzBAEBCgAdFiEEtuYvPRKsOElcDakFEMKTtsN8TjYFAmDCfg0ACgkQEMKTtsN8\nTjaaKBAAqMJfe5aH4Gh14SpB7h2S5JJUK+eo/aPo1tXn7BoLiF4O5g05+McyUOdE\nHI9ibolUfv+HoZlCDC93MBJvopWgd1/oqReHML5n2GXPBESYXpRstL04qwaRqu9g\nAvofhX88EwHefTXmljVTL4W1KgMJuhhPxVLdimxoqd0/hjagZtA7B7R05khigC5k\nnHMFoRogSPjI9H4vI2raYaOqC26zmrZNbk/CRVhuUbtDOG9qy9okjc+6KM9RcbXC\nha++EhrGXPjCg5SwrQAZ50nW3Jwif2WpSeULfTrqHr2E8nHGUCHDMMtdDwegFH/X\nFK0dVaNPgrayw1Dji+fhBQz3qR7pl/1DK+gsLtREafxY0+AxZ57kCi51CykT/dLs\neC4bOPaoho91KuLFrT+X/AyAASS/00VuroFJB4sWQUvEpBCnWPUW1m3NvjsyoYuj\n0wmQMVM5Bb/aYuWAM+/V9MeoklmtIn+OPAXqsVvLxdbB0GScwJV86/NvsN6Nde6c\ntwImfMCK1V75FPrIsxx37M52AYWvALgXbWoVi4aQPyPeDerQdgUPL1FzTGzem0NQ\nPnXhuE27H/pJz79DosW8md0RFr+tfPgZ8CeTirXSUUXFiqhcXR/w1lqN2vlmfm8V\ndmwgzvu9A7ZhG++JRqbbMx2D+NS4coGgRdA7XPuRrdNKniRIDhQ=\n=pN/j\n-----END PGP SIGNATURE-----\n. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n==================================================================== \nRed Hat Security Advisory\n\nSynopsis: Important: OpenShift Container Platform 4.11.0 bug fix and security update\nAdvisory ID: RHSA-2022:5069-01\nProduct: Red Hat OpenShift Enterprise\nAdvisory URL: https://access.redhat.com/errata/RHSA-2022:5069\nIssue date: 2022-08-10\nCVE Names: CVE-2018-25009 CVE-2018-25010 CVE-2018-25012\n CVE-2018-25013 CVE-2018-25014 CVE-2018-25032\n CVE-2019-5827 CVE-2019-13750 CVE-2019-13751\n CVE-2019-17594 CVE-2019-17595 CVE-2019-18218\n CVE-2019-19603 CVE-2019-20838 CVE-2020-13435\n CVE-2020-14155 CVE-2020-17541 CVE-2020-19131\n CVE-2020-24370 CVE-2020-28493 CVE-2020-35492\n CVE-2020-36330 CVE-2020-36331 CVE-2020-36332\n CVE-2021-3481 CVE-2021-3580 CVE-2021-3634\n CVE-2021-3672 CVE-2021-3695 CVE-2021-3696\n CVE-2021-3697 CVE-2021-3737 CVE-2021-4115\n CVE-2021-4156 CVE-2021-4189 CVE-2021-20095\n CVE-2021-20231 CVE-2021-20232 CVE-2021-23177\n CVE-2021-23566 CVE-2021-23648 CVE-2021-25219\n CVE-2021-31535 CVE-2021-31566 CVE-2021-36084\n CVE-2021-36085 CVE-2021-36086 CVE-2021-36087\n CVE-2021-38185 
CVE-2021-38593 CVE-2021-40528\n CVE-2021-41190 CVE-2021-41617 CVE-2021-42771\n CVE-2021-43527 CVE-2021-43818 CVE-2021-44225\n CVE-2021-44906 CVE-2022-0235 CVE-2022-0778\n CVE-2022-1012 CVE-2022-1215 CVE-2022-1271\n CVE-2022-1292 CVE-2022-1586 CVE-2022-1621\n CVE-2022-1629 CVE-2022-1706 CVE-2022-1729\n CVE-2022-2068 CVE-2022-2097 CVE-2022-21698\n CVE-2022-22576 CVE-2022-23772 CVE-2022-23773\n CVE-2022-23806 CVE-2022-24407 CVE-2022-24675\n CVE-2022-24903 CVE-2022-24921 CVE-2022-25313\n CVE-2022-25314 CVE-2022-26691 CVE-2022-26945\n CVE-2022-27191 CVE-2022-27774 CVE-2022-27776\n CVE-2022-27782 CVE-2022-28327 CVE-2022-28733\n CVE-2022-28734 CVE-2022-28735 CVE-2022-28736\n CVE-2022-28737 CVE-2022-29162 CVE-2022-29810\n CVE-2022-29824 CVE-2022-30321 CVE-2022-30322\n CVE-2022-30323 CVE-2022-32250\n====================================================================\n1. Summary:\n\nRed Hat OpenShift Container Platform release 4.11.0 is now available with\nupdates to packages and images that fix several bugs and add enhancements. \n\nThis release includes a security update for Red Hat OpenShift Container\nPlatform 4.11. \n\nRed Hat Product Security has rated this update as having a security impact\nof Important. A Common Vulnerability Scoring System (CVSS) base score,\nwhich gives a detailed severity rating, is available for each vulnerability\nfrom the CVE link(s) in the References section. \n\n2. Description:\n\nRed Hat OpenShift Container Platform is Red Hat\u0027s cloud computing\nKubernetes application platform solution designed for on-premise or private\ncloud deployments. \n\nThis advisory contains the container images for Red Hat OpenShift Container\nPlatform 4.11.0. See the following advisory for the RPM packages for this\nrelease:\n\nhttps://access.redhat.com/errata/RHSA-2022:5068\n\nSpace precludes documenting all of the container images in this advisory. 
\nSee the following Release Notes documentation, which will be updated\nshortly for this release, for details about these changes:\n\nhttps://docs.openshift.com/container-platform/4.11/release_notes/ocp-4-11-release-notes.html\n\nSecurity Fix(es):\n\n* go-getter: command injection vulnerability (CVE-2022-26945)\n* go-getter: unsafe download (issue 1 of 3) (CVE-2022-30321)\n* go-getter: unsafe download (issue 2 of 3) (CVE-2022-30322)\n* go-getter: unsafe download (issue 3 of 3) (CVE-2022-30323)\n* nanoid: Information disclosure via valueOf() function (CVE-2021-23566)\n* sanitize-url: XSS (CVE-2021-23648)\n* minimist: prototype pollution (CVE-2021-44906)\n* node-fetch: exposure of sensitive information to an unauthorized actor\n(CVE-2022-0235)\n* prometheus/client_golang: Denial of service using\nInstrumentHandlerCounter (CVE-2022-21698)\n* golang: crash in a golang.org/x/crypto/ssh server (CVE-2022-27191)\n* go-getter: writes SSH credentials into logfile, exposing sensitive\ncredentials to local uses (CVE-2022-29810)\n* opencontainers: OCI manifest and index parsing confusion (CVE-2021-41190)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. 
\n\nYou may download the oc tool and use it to inspect release image metadata\nas follows:\n\n(For x86_64 architecture)\n\n$ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.11.0-x86_64\n\nThe image digest is\nsha256:300bce8246cf880e792e106607925de0a404484637627edf5f517375517d54a4\n\n(For aarch64 architecture)\n\n$ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.11.0-aarch64\n\nThe image digest is\nsha256:29fa8419da2afdb64b5475d2b43dad8cc9205e566db3968c5738e7a91cf96dfe\n\n(For s390x architecture)\n\n$ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.11.0-s390x\n\nThe image digest is\nsha256:015d6180238b4024d11dfef6751143619a0458eccfb589f2058ceb1a6359dd46\n\n(For ppc64le architecture)\n\n$ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.11.0-ppc64le\n\nThe image digest is\nsha256:5052f8d5597c6656ca9b6bfd3de521504c79917aa80feb915d3c8546241f86ca\n\nAll OpenShift Container Platform 4.11 users are advised to upgrade to these\nupdated packages and images when they are available in the appropriate\nrelease channel. To check for available updates, use the OpenShift Console\nor the CLI oc command. Instructions for upgrading a cluster are available\nat\nhttps://docs.openshift.com/container-platform/4.11/updating/updating-cluster-cli.html\n\n3. Solution:\n\nFor OpenShift Container Platform 4.11 see the following documentation,\nwhich will be updated shortly for this release, for important instructions\non how to upgrade your cluster and fully apply this asynchronous errata\nupdate:\n\nhttps://docs.openshift.com/container-platform/4.11/release_notes/ocp-4-11-release-notes.html\n\nDetails on how to access this content are available at\nhttps://docs.openshift.com/container-platform/4.11/updating/updating-cluster-cli.html\n\n4. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1817075 - MCC \u0026 MCO don\u0027t free leader leases during shut down -\u003e 10 minutes of leader election timeouts\n1822752 - cluster-version operator stops applying manifests when blocked by a precondition check\n1823143 - oc adm release extract --command, --tools doesn\u0027t pull from localregistry when given a localregistry/image\n1858418 - [OCPonRHV] OpenShift installer fails when Blank template is missing in oVirt/RHV\n1859153 - [AWS] An IAM error occurred occasionally during the installation phase: Invalid IAM Instance Profile name\n1896181 - [ovirt] install fails: due to terraform error \"Cannot run VM. VM is being updated\" on vm resource\n1898265 - [OCP 4.5][AWS] Installation failed: error updating LB Target Group\n1902307 - [vSphere] cloud labels management via cloud provider makes nodes not ready\n1905850 - `oc adm policy who-can` failed to check the `operatorcondition/status` resource\n1916279 - [OCPonRHV] Sometimes terraform installation fails on -failed to fetch Cluster(another terraform bug)\n1917898 - [ovirt] install fails: due to terraform error \"Tag not matched: expect \u003cfault\u003e but got \u003chtml\u003e\" on vm resource\n1918005 - [vsphere] If there are multiple port groups with the same name installation fails\n1918417 - IPv6 errors after exiting crictl\n1918690 - Should update the KCM resource-graph timely with the latest configure\n1919980 - oVirt installer fails due to terraform error \"Failed to wait for Templte(...) 
to become ok\"\n1921182 - InspectFailed: kubelet Failed to inspect image: rpc error: code = DeadlineExceeded desc = context deadline exceeded\n1923536 - Image pullthrough does not pass 429 errors back to capable clients\n1926975 - [aws-c2s] kube-apiserver crashloops due to missing cloud config\n1928932 - deploy/route_crd.yaml in openshift/router uses deprecated v1beta1 CRD API\n1932812 - Installer uses the terraform-provider in the Installer\u0027s directory if it exists\n1934304 - MemoryPressure Top Pod Consumers seems to be 2x expected value\n1943937 - CatalogSource incorrect parsing validation\n1944264 - [ovn] CNO should gracefully terminate OVN databases\n1944851 - List of ingress routes not cleaned up when routers no longer exist - take 2\n1945329 - In k8s 1.21 bump conntrack \u0027should drop INVALID conntrack entries\u0027 tests are disabled\n1948556 - Cannot read property \u0027apiGroup\u0027 of undefined error viewing operator CSV\n1949827 - Kubelet bound to incorrect IPs, referring to incorrect NICs in 4.5.x\n1957012 - Deleting the KubeDescheduler CR does not remove the corresponding deployment or configmap\n1957668 - oc login does not show link to console\n1958198 - authentication operator takes too long to pick up a configuration change\n1958512 - No 1.25 shown in REMOVEDINRELEASE for apis audited with k8s.io/removed-release 1.25 and k8s.io/deprecated true\n1961233 - Add CI test coverage for DNS availability during upgrades\n1961844 - baremetal ClusterOperator installed by CVO does not have relatedObjects\n1965468 - [OSP] Delete volume snapshots based on cluster ID in their metadata\n1965934 - can not get new result with \"Refresh off\" if click \"Run queries\" again\n1965969 - [aws] the public hosted zone id is not correct in the destroy log, while destroying a cluster which is using BYO private hosted zone. 
1968253 - GCP CSI driver can provision volume with access mode ROX
1969794 - [OSP] Document how to use image registry PVC backend with custom availability zones
1975543 - [OLM] Remove stale cruft installed by CVO in earlier releases
1976111 - [tracker] multipathd.socket is missing start conditions
1976782 - Openshift registry starts to segfault after S3 storage configuration
1977100 - Pod failed to start with message "set CPU load balancing: readdirent /proc/sys/kernel/sched_domain/cpu66/domain0: no such file or directory"
1978303 - KAS pod logs show: [SHOULD NOT HAPPEN] ...failed to convert new object...CertificateSigningRequest) to smd typed: .status.conditions: duplicate entries for key [type=\"Approved\"]
1978798 - [Network Operator] Upgrade: The configuration to enable network policy ACL logging is missing on the cluster upgraded from 4.7->4.8
1979671 - Warning annotation for pods with cpu requests or limits on single-node OpenShift cluster without workload partitioning
1982737 - OLM does not warn on invalid CSV
1983056 - IP conflict while recreating Pod with fixed name
1984785 - LSO CSV does not contain disconnected annotation
1989610 - Unsupported data types should not be rendered on operand details page
1990125 - co/image-registry is degrade because ImagePrunerDegraded: Job has reached the specified backoff limit
1990384 - 502 error on "Observe -> Alerting" UI after disabled local alertmanager
1992553 - all the alert rules' annotations "summary" and "description" should comply with the OpenShift alerting guidelines
1994117 - Some hardcodes are detected at the code level in orphaned code
1994820 - machine controller doesn't send vCPU quota failed messages to cluster install logs
1995953 - Ingresscontroller change the replicas to scaleup first time will be rolling update for all the ingress pods
1996544 - AWS region ap-northeast-3 is missing in installer prompt
1996638 - Helm operator manager container restart when CR is creating&deleting
1997120 - test_recreate_pod_in_namespace fails - Timed out waiting for namespace
1997142 - OperatorHub: Filtering the OperatorHub catalog is extremely slow
1997704 - [osp][octavia lb] given loadBalancerIP is ignored when creating a LoadBalancer type svc
1999325 - FailedMount MountVolume.SetUp failed for volume "kube-api-access" : object "openshift-kube-scheduler"/"kube-root-ca.crt" not registered
1999529 - Must gather fails to gather logs for all the namespace if server doesn't have volumesnapshotclasses resource
1999891 - must-gather collects backup data even when Pods fails to be created
2000653 - Add hypershift namespace to exclude namespaces list in descheduler configmap
2002009 - IPI Baremetal, qemu-convert takes to long to save image into drive on slow/large disks
2002602 - Storageclass creation page goes blank when "Enable encryption" is clicked if there is a syntax error in the configmap
2002868 - Node exporter not able to scrape OVS metrics
2005321 - Web Terminal is not opened on Stage of DevSandbox when terminal instance is not created yet
2005694 - Removing proxy object takes up to 10 minutes for the changes to propagate to the MCO
2006067 - Objects are not valid as a React child
2006201 - ovirt-csi-driver-node pods are crashing intermittently
2007246 - Openshift Container Platform - Ingress Controller does not set allowPrivilegeEscalation in the router deployment
2007340 - Accessibility issues on topology - list view
2007611 - TLS issues with the internal registry and AWS S3 bucket
2007647 - oc adm release info --changes-from does not show changes in repos that squash-merge
2008486 - Double scroll bar shows up on dragging the task quick search to the bottom
2009345 - Overview page does not load from openshift console for some set of users after upgrading to 4.7.19
2009352 - Add image-registry usage metrics to telemeter
2009845 - Respect overrides changes during installation
2010361 - OpenShift Alerting Rules Style-Guide Compliance
2010364 - OpenShift Alerting Rules Style-Guide Compliance
2010393 - [sig-arch][Late] clients should not use APIs that are removed in upcoming releases [Suite:openshift/conformance/parallel]
2011525 - Rate-limit incoming BFD to prevent ovn-controller DoS
2011895 - Details about cloud errors are missing from PV/PVC errors
2012111 - LSO still try to find localvolumeset which is already deleted
2012969 - need to figure out why osupdatedstart to reboot is zero seconds
2013144 - Developer catalog category links could not be open in a new tab (sharing and open a deep link works fine)
2013461 - Import deployment from Git with s2i expose always port 8080 (Service and Pod template, not Route) if another Route port is selected by the user
2013734 - unable to label downloads route in openshift-console namespace
2013822 - ensure that the `container-tools` content comes from the RHAOS plashets
2014161 - PipelineRun logs are delayed and stuck on a high log volume
2014240 - Image registry uses ICSPs only when source exactly matches image
2014420 - Topology page is crashed
2014640 - Cannot change storage class of boot disk when cloning from template
2015023 - Operator objects are re-created even after deleting it
2015042 - Adding a template from the catalog creates a secret that is not owned by the TemplateInstance
2015356 - Different status shows on VM list page and details page
2015375 - PVC creation for ODF/IBM Flashsystem shows incorrect types
2015459 - [azure][openstack]When image registry configure an invalid proxy, registry pods are CrashLoopBackOff
2015800 - [IBM]Shouldn't change status.storage.bucket and status.storage.resourceKeyCRN when update sepc.stroage,ibmcos with invalid value
2016425 - Adoption controller generating invalid metadata.Labels for an already adopted Subscription resource
2016534 - externalIP does not work when egressIP is also present
2017001 - Topology context menu for Serverless components always open downwards
2018188 - VRRP ID conflict between keepalived-ipfailover and cluster VIPs
2018517 - [sig-arch] events should not repeat pathologically expand_less failures - s390x CI
2019532 - Logger object in LSO does not log source location accurately
2019564 - User settings resources (ConfigMap, Role, RB) should be deleted when a user is deleted
2020483 - Parameter $__auto_interval_period is in Period drop-down list
2020622 - e2e-aws-upi and e2e-azure-upi jobs are not working
2021041 - [vsphere] Not found TagCategory when destroying ipi cluster
2021446 - openshift-ingress-canary is not reporting DEGRADED state, even though the canary route is not available and accessible
2022253 - Web terminal view is broken
2022507 - Pods stuck in OutOfpods state after running cluster-density
2022611 - Remove BlockPools(no use case) and Object(redundat with Overview) tab on the storagesystem page for NooBaa only and remove BlockPools tab for External mode deployment
2022745 - Cluster reader is not able to list NodeNetwork* objects
2023295 - Must-gather tool gathering data from custom namespaces.
2023691 - ClusterIP internalTrafficPolicy does not work for ovn-kubernetes
2024427 - oc completion zsh doesn't auto complete
2024708 - The form for creating operational CRs is badly rendering filed names ("obsoleteCPUs" -> "Obsolete CP Us" )
2024821 - [Azure-File-CSI] need more clear info when requesting pvc with volumeMode Block
2024938 - CVE-2021-41190 opencontainers: OCI manifest and index parsing confusion
2025624 - Ingress router metrics endpoint serving old certificates after certificate rotation
2026356 - [IPI on Azure] The bootstrap machine type should be same as master
2026461 - Completed pods in Openshift cluster not releasing IP addresses and results in err: range is full unless manually deleted
2027603 - [UI] Dropdown doesn't close on it's own after arbiter zone selection on 'Capacity and nodes' page
2027613 - Users can't silence alerts from the dev console
2028493 - OVN-migration failed - ovnkube-node: error waiting for node readiness: timed out waiting for the condition
2028532 - noobaa-pg-db-0 pod stuck in Init:0/2
2028821 - Misspelled label in ODF management UI - MCG performance view
2029438 - Bootstrap node cannot resolve api-int because NetworkManager replaces resolv.conf
2029470 - Recover from suddenly appearing old operand revision WAS: kube-scheduler-operator test failure: Node's not achieving new revision
2029797 - Uncaught exception: ResizeObserver loop limit exceeded
2029835 - CSI migration for vSphere: Inline-volume tests failing
2030034 - prometheusrules.openshift.io: dial tcp: lookup prometheus-operator.openshift-monitoring.svc on 172.30.0.10:53: no such host
2030530 - VM created via customize wizard has single quotation marks surrounding its password
2030733 - wrong IP selected to connect to the nodes when ExternalCloudProvider enabled
2030776 - e2e-operator always uses quay master images during presubmit tests
2032559 - CNO allows migration to dual-stack in unsupported configurations
2032717 - Unable to download ignition after coreos-installer install --copy-network
2032924 - PVs are not being cleaned up after PVC deletion
2033482 - [vsphere] two variables in tf are undeclared and get warning message during installation
2033575 - monitoring targets are down after the cluster run for more than 1 day
2033711 - IBM VPC operator needs e2e csi tests for ibmcloud
2033862 - MachineSet is not scaling up due to an OpenStack error trying to create multiple ports with the same MAC address
2034147 - OpenShift VMware IPI Installation fails with Resource customization when corespersocket is unset and vCPU count is not a multiple of 4
2034296 - Kubelet and Crio fails to start during upgrde to 4.7.37
2034411 - [Egress Router] No NAT rules for ipv6 source and destination created in ip6tables-save
2034688 - Allow Prometheus/Thanos to return 401 or 403 when the request isn't authenticated
2034958 - [sig-network] Conntrack should be able to preserve UDP traffic when initial unready endpoints get ready
2035005 - MCD is not always removing in progress taint after a successful update
2035334 - [RFE] [OCPonRHV] Provision machines with preallocated disks
2035899 - Operator-sdk run bundle doesn't support arm64 env
2036202 - Bump podman to >= 3.3.0 so that setup of multiple credentials for a single registry which can be distinguished by their path will work
2036594 - [MAPO] Machine goes to failed state due to a momentary error of the cluster etcd
2036948 - SR-IOV Network Device Plugin should handle offloaded VF instead of supporting only PF
2037190 - dns operator status flaps between True/False/False and True/True/(False|True) after updating dnses.operator.openshift.io/default
2037447 - Ingress Operator is not closing TCP connections.
2037513 - I/O metrics from the Kubernetes/Compute Resources/Cluster Dashboard show as no datapoints found
2037542 - Pipeline Builder footer is not sticky and yaml tab doesn't use full height
2037610 - typo for the Terminated message from thanos-querier pod description info
2037620 - Upgrade playbook should quit directly when trying to upgrade RHEL-7 workers to 4.10
2037625 - AppliedClusterResourceQuotas can not be shown on project overview
2037626 - unable to fetch ignition file when scaleup rhel worker nodes on cluster enabled Tang disk encryption
2037628 - Add test id to kms flows for automation
2037721 - PodDisruptionBudgetAtLimit alert fired in SNO cluster
2037762 - Wrong ServiceMonitor definition is causing failure during Prometheus configuration reload and preventing changes from being applied
2037841 - [RFE] use /dev/ptp_hyperv on Azure/AzureStack
2038115 - Namespace and application bar is not sticky anymore
2038244 - Import from git ignore the given servername and could not validate On-Premises GitHub and BitBucket installations
2038405 - openshift-e2e-aws-workers-rhel-workflow in CI step registry broken
2038774 - IBM-Cloud OVN IPsec fails, IKE UDP ports and ESP protocol not in security group
2039135 - the error message is not clear when using "opm index prune" to prune a file-based index image
2039161 - Note about token for encrypted PVCs should be removed when only cluster wide encryption checkbox is selected
2039253 - ovnkube-node crashes on duplicate endpoints
2039256 - Domain validation fails when TLD contains a digit.
2039277 - Topology list view items are not highlighted on keyboard navigation
2039462 - Application tab in User Preferences dropdown menus are too wide.
2039477 - validation icon is missing from Import from git
2039589 - The toolbox command always ignores [command] the first time
2039647 - Some developer perspective links are not deep-linked causes developer to sometimes delete/modify resources in the wrong project
2040180 - Bug when adding a new table panel to a dashboard for OCP UI with only one value column
2040195 - Ignition fails to enable systemd units with backslash-escaped characters in their names
2040277 - ThanosRuleNoEvaluationFor10Intervals alert description is wrong
2040488 - OpenShift-Ansible BYOH Unit Tests are Broken
2040635 - CPU Utilisation is negative number for "Kubernetes / Compute Resources / Cluster" dashboard
2040654 - 'oc adm must-gather -- some_script' should exit with same non-zero code as the failed 'some_script' exits
2040779 - Nodeport svc not accessible when the backend pod is on a window node
2040933 - OCP 4.10 nightly build will fail to install if multiple NICs are defined on KVM nodes
2041133 - 'oc explain route.status.ingress.conditions' shows type 'Currently only Ready' but actually is 'Admitted'
2041454 - Garbage values accepted for `--reference-policy` in `oc import-image` without any error
2041616 - Ingress operator tries to manage DNS of additional ingresscontrollers that are not under clusters basedomain, which can't work
2041769 - Pipeline Metrics page not showing data for normal user
2041774 - Failing git detection should not recommend Devfiles as import strategy
2041814 - The KubeletConfigController wrongly process multiple confs for a pool
2041940 - Namespace pre-population not happening till a Pod is created
2042027 - Incorrect feedback for "oc label pods --all"
2042348 - Volume ID is missing in output message when expanding volume which is not mounted.
2042446 - CSIWithOldVSphereHWVersion alert recurring despite upgrade to vmx-15
2042501 - use lease for leader election
2042587 - ocm-operator: Improve reconciliation of CA ConfigMaps
2042652 - Unable to deploy hw-event-proxy operator
2042838 - The status of container is not consistent on Container details and pod details page
2042852 - Topology toolbars are unaligned to other toolbars
2042999 - A pod cannot reach kubernetes.default.svc.cluster.local cluster IP
2043035 - Wrong error code provided when request contains invalid argument
2043068 - <x> available of <y> text disappears in Utilization item if x is 0
2043080 - openshift-installer intermittent failure on AWS with Error: InvalidVpcID.NotFound: The vpc ID 'vpc-123456789' does not exist
2043094 - ovnkube-node not deleting stale conntrack entries when endpoints go away
2043118 - Host should transition through Preparing when HostFirmwareSettings changed
2043132 - Add a metric when vsphere csi storageclass creation fails
2043314 - `oc debug node` does not meet compliance requirement
2043336 - Creating multi SriovNetworkNodePolicy cause the worker always be draining
2043428 - Address Alibaba CSI driver operator review comments
2043533 - Update ironic, inspector, and ironic-python-agent to latest bugfix release
2043672 - [MAPO] root volumes not working
2044140 - When 'oc adm upgrade --to-image ...' rejects an update as not recommended, it should mention --allow-explicit-upgrade
2044207 - [KMS] The data in the text box does not get cleared on switching the authentication method
2044227 - Test Managed cluster should only include cluster daemonsets that have maxUnavailable update of 10 or 33 percent fails
2044412 - Topology list misses separator lines and hover effect let the list jump 1px
2044421 - Topology list does not allow selecting an application group anymore
2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor
2044803 - Unify button text style on VM tabs
2044824 - Failing test in periodics: [sig-network] Services should respect internalTrafficPolicy=Local Pod and Node, to Pod (hostNetwork: true) [Feature:ServiceInternalTrafficPolicy] [Skipped:Network/OVNKubernetes] [Suite:openshift/conformance/parallel] [Suite:k8s]
2045065 - Scheduled pod has nodeName changed
2045073 - Bump golang and build images for local-storage-operator
2045087 - Failed to apply sriov policy on intel nics
2045551 - Remove enabled FeatureGates from TechPreviewNoUpgrade
2045559 - API_VIP moved when kube-api container on another master node was stopped
2045577 - [ocp 4.9 | ovn-kubernetes] ovsdb_idl|WARN|transaction error: {"details":"cannot delete Datapath_Binding row 29e48972-xxxx because of 2 remaining reference(s)","error":"referential integrity violation
2045872 - SNO: cluster-policy-controller failed to start due to missing serving-cert/tls.crt
2045880 - CVE-2022-21698 prometheus/client_golang: Denial of service using InstrumentHandlerCounter
2046133 - [MAPO]IPI proxy installation failed
2046156 - Network policy: preview of affected pods for non-admin shows empty popup
2046157 - Still uses pod-security.admission.config.k8s.io/v1alpha1 in admission plugin config
2046191 - Opeartor pod is missing correct qosClass and priorityClass
2046277 - openshift-installer intermittent failure on AWS with "Error: Provider produced inconsistent result after apply" when creating the module.vpc.aws_subnet.private_subnet[0] resource
2046319 - oc debug cronjob command failed with error "unable to extract pod template from type *v1.CronJob".
2046435 - Better Devfile Import Strategy support in the 'Import from Git' flow
2046496 - Awkward wrapping of project toolbar on mobile
2046497 - Re-enable TestMetricsEndpoint test case in console operator e2e tests
2046498 - "All Projects" and "all applications" use different casing on topology page
2046591 - Auto-update boot source is not available while create new template from it
2046594 - "Requested template could not be found" while creating VM from user-created template
2046598 - Auto-update boot source size unit is byte on customize wizard
2046601 - Cannot create VM from template
2046618 - Start last run action should contain current user name in the started-by annotation of the PLR
2046662 - Should upgrade the go version to be 1.17 for example go operator memcached-operator
2047197 - Sould upgrade the operator_sdk.util version to "0.4.0" for the "osdk_metric" module
2047257 - [CP MIGRATION] Node drain failure during control plane node migration
2047277 - Storage status is missing from status card of virtualization overview
2047308 - Remove metrics and events for master port offsets
2047310 - Running VMs per template card needs empty state when no VMs exist
2047320 - New route annotation to show another URL or hide topology URL decorator doesn't work for Knative Services
2047335 - 'oc get project' caused 'Observed a panic: cannot deep copy core.NamespacePhase' when AllRequestBodies is used
2047362 - Removing prometheus UI access breaks origin test
2047445 - ovs-configure mis-detecting the ipv6 status on IPv4 only cluster causing Deployment failure
2047670 - Installer should pre-check that the hosted zone is not associated with the VPC and throw the error message.
2047702 - Issue described on bug #2013528 reproduced: mapi_current_pending_csr is always set to 1 on OpenShift Container Platform 4.8
2047710 - [OVN] ovn-dbchecker CrashLoopBackOff and sbdb jsonrpc unix socket receive error
2047732 - [IBM]Volume is not deleted after destroy cluster
2047741 - openshift-installer intermittent failure on AWS with "Error: Provider produced inconsistent result after apply" when creating the module.masters.aws_network_interface.master[1] resource
2047790 - [sig-network][Feature:Router] The HAProxy router should override the route host for overridden domains with a custom value [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2047799 - release-openshift-ocp-installer-e2e-aws-upi-4.9
2047870 - Prevent redundant queries of BIOS settings in HostFirmwareController
2047895 - Fix architecture naming in oc adm release mirror for aarch64
2047911 - e2e: Mock CSI tests fail on IBM ROKS clusters
2047913 - [sig-network][Feature:Router] The HAProxy router should override the route host for overridden domains with a custom value [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2047925 - [FJ OCP4.10 Bug]: IRONIC_KERNEL_PARAMS does not contain coreos_kernel_params during iPXE boot
2047935 - [4.11] Bootimage bump tracker
2047998 - [alicloud] CCM deploys alibaba-cloud-controller-manager from quay.io/openshift/origin-*
2048059 - Service Level Agreement (SLA) always show 'Unknown'
2048067 - [IPI on Alibabacloud] "Platform Provisioning Check" tells '"ap-southeast-6": enhanced NAT gateway is not supported', which seems false
2048186 - Image registry operator panics when finalizes config deletion
2048214 - Can not push images to image-registry when enabling KMS encryption in AlibabaCloud
2048219 - MetalLB: User should not be allowed add same bgp advertisement twice in BGP address pool
2048221 - Capitalization of titles in the VM details page is inconsistent.
2048222 - [AWS GovCloud] Cluster can not be installed on AWS GovCloud regions via terminal interactive UI.
2048276 - Cypress E2E tests fail due to a typo in test-cypress.sh
2048333 - prometheus-adapter becomes inaccessible during rollout
2048352 - [OVN] node does not recover after NetworkManager restart, NotReady and unreachable
2048442 - [KMS] UI does not have option to specify kube auth path and namespace for cluster wide encryption
2048451 - Custom serviceEndpoints in install-config are reported to be unreachable when environment uses a proxy
2048538 - Network policies are not implemented or updated by OVN-Kubernetes
2048541 - incorrect rbac check for install operator quick starts
2048563 - Leader election conventions for cluster topology
2048575 - IP reconciler cron job failing on single node
2048686 - Check MAC address provided on the install-config.yaml file
2048687 - All bare metal jobs are failing now due to End of Life of centos 8
2048793 - Many Conformance tests are failing in OCP 4.10 with Kuryr
2048803 - CRI-O seccomp profile out of date
2048824 - [IBMCloud] ibm-vpc-block-csi-node does not specify an update strategy, only resource requests, or priority class
2048841 - [ovn] Missing lr-policy-list and snat rules for egressip when new pods are added
2048955 - Alibaba Disk CSI Driver does not have CI
2049073 - AWS EFS CSI driver should use the trusted CA bundle when cluster proxy is configured
2049078 - Bond CNI: Failed to attach Bond NAD to pod
2049108 - openshift-installer intermittent failure on AWS with 'Error: Error waiting for NAT Gateway (nat-xxxxx) to become available'
2049117 - e2e-metal-ipi-serial-ovn-ipv6 is failing frequently
2049133 - oc adm catalog mirror throws 'missing signature key' error when using file://local/index
2049142 - Missing "app" label
2049169 - oVirt CSI driver should use the trusted CA bundle when cluster proxy is configured
2049234 - ImagePull fails with error "unable to pull manifest from example.com/busy.box:v5 invalid reference format"
2049410 - external-dns-operator creates provider section, even when not requested
2049483 - Sidepanel for Connectors/workloads in topology shows invalid tabs
2049613 - MTU migration on SDN IPv4 causes API alerts
2049671 - system:serviceaccount:openshift-cluster-csi-drivers:aws-ebs-csi-driver-operator trying to GET and DELETE /api/v1/namespaces/openshift-cluster-csi-drivers/configmaps/kube-cloud-config which does not exist
2049687 - superfluous apirequestcount entries in audit log
2049775 - cloud-provider-config change not applied when ExternalCloudProvider enabled
2049787 - (dummy bug) ovn-kubernetes ExternalTrafficPolicy still SNATs
2049832 - ContainerCreateError when trying to launch large (>500) numbers of pods across nodes
2049872 - cluster storage operator AWS credentialsrequest lacks KMS privileges
2049889 - oc new-app --search nodejs warns about access to sample content on quay.io
2050005 - Plugin module IDs can clash with console module IDs causing runtime errors
2050011 - Observe > Metrics page: Timespan text input and dropdown do not align
2050120 - Missing metrics in kube-state-metrics
2050146 - Installation on PSI fails with: 'openstack platform does not have the required standard-attr-tag network extension'
2050173 - [aws-ebs-csi-driver] Merge upstream changes since v1.2.0
2050180 - [aws-efs-csi-driver] Merge upstream changes since v1.3.2
2050300 - panic in cluster-storage-operator while updating status
2050332 - Malformed ClusterClaim lifetimes cause the clusterclaims-controller to silently fail to reconcile all clusterclaims
2050335 - azure-disk failed to mount with error special device does not exist
2050345 - alert data for burn budget needs to be updated to prevent regression
2050407 - revert "force cert rotation every couple days for development" in 4.11
2050409 - ip-reconcile job is failing consistently
2050452 - Update osType and hardware version used by RHCOS OVA to indicate it is a RHEL 8 guest
2050466 - machine config update with invalid container runtime config should be more robust
2050637 - Blog Link not re-directing to the intented website in the last modal in the Dev Console Onboarding Tour
2050698 - After upgrading the cluster the console still show 0 of N, 0% progress for worker nodes
2050707 - up test for prometheus pod look to far in the past
2050767 - Vsphere upi tries to access vsphere during manifests generation phase
2050853 - CVE-2021-23566 nanoid: Information disclosure via valueOf() function
2050882 - Crio appears to be coredumping in some scenarios
2050902 - not all resources created during import have common labels
2050946 - Cluster-version operator fails to notice TechPreviewNoUpgrade featureSet change after initialization-lookup error
2051320 - Need to build ose-aws-efs-csi-driver-operator-bundle-container image for 4.11
2051333 - [aws] records in public hosted zone and BYO private hosted zone were not deleted.
2051377 - Unable to switch vfio-pci to netdevice in policy
2051378 - Template wizard is crashed when there are no templates existing
2051423 - migrate loadbalancers from amphora to ovn not working
2051457 - [RFE] PDB for cloud-controller-manager to avoid going too many replicas down
2051470 - prometheus: Add validations for relabel configs
2051558 - RoleBinding in project without subject is causing "Project access" page to fail
2051578 - Sort is broken for the Status and Version columns on the Cluster Settings > ClusterOperators page
2051583 - sriov must-gather image doesn't work
2051593 - Summary Interval Hardcoded in PTP Operator if Set in the Global Body Instead of Command Line
2051611 - Remove Check which enforces summary_interval must match logSyncInterval
2051642 - Remove "Tech-Preview" Label for the Web Terminal GA release
2051657 - Remove 'Tech preview' from minnimal deployment Storage System creation
2051718 - MetaLLB: Validation Webhook: BGPPeer hold time is allowed to be set to less than 3s
2051722 - MetalLB: BGPPeer object does not have ability to set ebgpMultiHop
2051881 - [vSphere CSI driver Operator] RWX volumes counts metrics `vsphere_rwx_volumes_total` not valid
2051954 - Allow changing of policyAuditConfig ratelimit post-deployment
2051969 - Need to build local-storage-operator-metadata-container image for 4.11
2051985 - An APIRequestCount without dots in the name can cause a panic
2052016 - MetalLB: Webhook Validation: Two BGPPeers instances can have different router ID set.
2052034 - Can't start correct debug pod using pod definition yaml in OCP 4.8
2052055 - Whereabouts should implement client-go 1.22+
2052056 - Static pod installer should throttle creating new revisions
2052071 - local storage operator metrics target down after upgrade
2052095 - Infinite OAuth redirect loop post-upgrade to 4.10.0-rc.1
2052270 - FSyncControllerDegraded has "treshold" -> "threshold" typos
2052309 - [IBM Cloud] ibm-vpc-block-csi-controller does not specify an update strategy, priority class, or only resource requests
2052332 - Probe failures and pod restarts during 4.7 to 4.8 upgrade
2052393 - Failed to scaleup RHEL machine against OVN cluster due to jq tool is required by configure-ovs.sh
2052398 - 4.9 to 4.10 upgrade fails for ovnkube-masters
2052415 - Pod density test causing problems when using kube-burner
2052513 - Failing webhooks will block an upgrade to 4.10 mid-way through the upgrade.
2052578 - Create new app from a private git repository using 'oc new app' with basic auth does not work.
2052595 - Remove dev preview badge from IBM FlashSystem deployment windows
2052618 - Node reboot causes duplicate persistent volumes
2052671 - Add Sprint 214 translations
2052674 - Remove extra spaces
2052700 - kube-controller-manger should use configmap lease
2052701 - kube-scheduler should use configmap lease
2052814 - go fmt fails in OSM after migration to go 1.17
2052840 - IMAGE_BUILDER=docker make test-e2e-operator-ocp runs with podman instead of docker
2052953 - Observe dashboard always opens for last viewed workload instead of the selected one
2052956 - Installing virtualization operator duplicates the first action on workloads in topology
2052975 - High cpu load on Juniper Qfx5120 Network switches after upgrade to Openshift 4.8.26
2052986 - Console crashes when Mid cycle hook in Recreate strategy(edit deployment/deploymentConfig) selects Lifecycle strategy as "Tags the current image as an image stream tag if the deployment succeeds"
2053006 - [ibm]Operator storage PROGRESSING and DEGRADED is true during fresh install for ocp4.11
2053104 - [vSphere CSI driver Operator] hw_version_total metric update wrong value after upgrade nodes hardware version from `vmx-13` to `vmx-15`
2053112 - nncp status is unknown when nnce is Progressing
2053118 - nncp Available condition reason should be exposed in `oc get`
2053168 - Ensure the core dynamic plugin SDK package has correct types and code
2053205 - ci-openshift-cluster-network-operator-master-e2e-agnostic-upgrade is failing most of the time
2053304 - Debug terminal no longer works in admin console
2053312 - requestheader IDP test doesn't wait for cleanup, causing high failure rates
2053334 - rhel worker scaleup playbook failed because missing some dependency of podman
2053343 - Cluster Autoscaler not scaling down nodes which seem to qualify for scale-down
2053491 - nmstate interprets interface names as float64 and subsequently crashes on state update
2053501 - Git import detection does not happen for private repositories
2053582 - inability to detect static lifecycle failure
2053596 - [IBM Cloud] Storage IOPS limitations and lack of IPI ETCD deployment options trigger leader election during cluster initialization
2053609 - LoadBalancer SCTP service leaves stale conntrack entry that causes issues if service is recreated
2053622 - PDB warning alert when CR replica count is set to zero
2053685 - Topology performance: Immutable .toJSON consumes a lot of CPU time when rendering a large topology graph (~100 nodes)
2053721 - When using RootDeviceHint rotational setting the host can fail to provision
2053922 - [OCP 4.8][OVN] pod interface: error while waiting on OVS.Interface.external-ids
2054095 - [release-4.11] Gather images.conifg.openshift.io cluster resource definiition
2054197 - The ProjectHelmChartRepositrory schema has merged but has not been initialized in the cluster yet
2054200 - Custom created services in openshift-ingress removed even though the services are not of type LoadBalancer
2054238 - console-master-e2e-gcp-console is broken
2054254 - vSphere test failure: [Serial] [sig-auth][Feature:OAuthServer] [RequestHeaders] [IdP] test RequestHeaders IdP [Suite:openshift/conformance/serial]
2054285 - Services other than knative service also shows as KSVC in add subscription/trigger modal
2054319 - must-gather | gather_metallb_logs can't detect metallb pod
2054351 - Rrestart of ptp4l/phc2sys on change of PTPConfig generates more than one times, socket error in event frame work
2054385 - redhat-operatori ndex image build failed with AMQ brew build - amq-interconnect-operator-metadata-container-1.10.13
2054564 - DPU network operator 4.10 branch need to sync with master
2054630 - cancel create silence from kebab menu of alerts page will navigated to the previous page
2054693 - Error deploying HorizontalPodAutoscaler with oc new-app command in OpenShift 4
2054701 - [MAPO] Events are not created for MAPO machines
2054705 - [tracker] nf_reinject calls nf_queue_entry_free on an already freed entry->state
2054735 - Bad link in CNV console
2054770 - IPI baremetal deployment metal3 pod crashes when using capital letters in hosts bootMACAddress
2054787 - SRO controller goes to CrashLoopBackOff status when the pull-secret does not have the correct permissions
2054950 - A large number is showing on disk size field
2055305 - Thanos Querier high CPU and memory usage till OOM
2055386 - MetalLB changes the shared external IP of a service upon updating the externalTrafficPolicy definition
2055433 - Unable to create br-ex as gateway is not found
2055470 - Ingresscontroller LB scope change behaviour differs for different values of aws-load-balancer-internal annotation
2055492 - The default YAML on vm wizard is not latest
2055601 - installer did not destroy *.app dns recored in a IPI on ASH install
2055702 - Enable Serverless tests in CI
2055723 - CCM operator doesn't deploy resources after enabling TechPreviewNoUpgrade feature set.
2055729 - NodePerfCheck fires and stays active on momentary high latency
2055814 - Custom dynamic exntension point causes runtime and compile time error
2055861 - cronjob collect-profiles failed leads node reach to OutOfpods status
2055980 - [dynamic SDK][internal] console plugin SDK does not support table actions
2056454 - Implement preallocated disks for oVirt in the cluster API provider
2056460 - Implement preallocated disks for oVirt in the OCP installer
2056496 - If image does not exists for builder image then upload jar form crashes
2056519 - unable to install IPI PRIVATE OpenShift cluster in Azure due to organization policies
2056607 - Running kubernetes-nmstate handler e2e tests stuck on OVN clusters
2056752 - Better to named the oc-mirror version info with more information like the `oc version --client`
2056802 - "enforcedLabelLimit|enforcedLabelNameLengthLimit|enforcedLabelValueLengthLimit" do not take effect
2056841 - [UI] [DR] Web console update is available pop-up is seen multiple times on Hub cluster where ODF operator is not installed and unnecessarily it pop-up on the Managed cluster as well where ODF operator is installed
2056893 - incorrect warning for --to-image in oc adm upgrade help
2056967 - MetalLB: speaker metrics is not updated when deleting a service
2057025 - Resource requests for the init-config-reloader container of prometheus-k8s-* pods are too high
2057054 - SDK: k8s methods resolves into Response instead of the Resource
2057079 - [cluster-csi-snapshot-controller-operator] CI failure: events should not repeat pathologically
2057101 - oc commands working with images print an incorrect and inappropriate warning
2057160 - configure-ovs selects wrong interface on reboot
2057183 - OperatorHub: Missing "valid subscriptions" filter
2057251 - response code for Pod count graph changed from 422 to 200 periodically for about 30 minutes if pod is rescheduled
2057358 - [Secondary Scheduler] - cannot build bundle index image using the secondary scheduler operator bundle
2057387 - [Secondary Scheduler] - olm.skiprange, com.redhat.openshift.versions is incorrect and no minkubeversion
2057403 - CMO logs show forbidden: User "system:serviceaccount:openshift-monitoring:cluster-monitoring-operator" cannot get resource "replicasets" in API group "apps" in the namespace "openshift-monitoring"
2057495 - Alibaba Disk CSI driver does not provision small PVCs
2057558 - Marketplace operator polls too frequently for cluster operator status changes
2057633 - oc rsync reports misleading error when container is not found
2057642 - ClusterOperator status.conditions[].reason "etcd disk metrics exceeded..." should be a CamelCase slug
2057644 - FSyncControllerDegraded latches True, even after fsync latency recovers on all members
2057696 - Removing console still blocks OCP install from completing
2057762 - ingress operator should report Upgradeable False to remind user before upgrade to 4.10 when Non-SAN certs are used
2057832 - expr for record rule: "cluster:telemetry_selected_series:count" is improper
2057967 - KubeJobCompletion does not account for possible job states
2057990 - Add extra debug information to image signature workflow test
2057994 - SRIOV-CNI failed to load netconf: LoadConf(): failed to get VF information
2058030 - On OCP 4.10+ using OVNK8s on BM IPI, nodes register as localhost.localdomain
2058217 - [vsphere-problem-detector-operator] 'vsphere_rwx_volumes_total' metric name make confused
2058225 - openshift_csi_share_* metrics are not found from telemeter server
2058282 - Websockets stop updating during cluster upgrades
2058291 - CI builds should have correct version of Kube without needing to push tags everytime
2058368 - Openshift OVN-K got restarted mutilple times with the error " ovsdb-server/memory-trim-on-compaction on'' failed: exit status 1 and " ovndbchecker.go:118] unable to turn on memory trimming for SB DB, stderr "
, cluster unavailable\n2058370 - e2e-aws-driver-toolkit CI job is failing\n2058421 - 4.9.23-s390x-machine-os-content manifest invalid when mirroring content for disconnected install\n2058424 - ConsolePlugin proxy always passes Authorization header even if `authorize` property is omitted or false\n2058623 - Bootstrap server dropdown menu in Create Event Source- KafkaSource form is empty even if it\u0027s created\n2058626 - Multiple Azure upstream kube fsgroupchangepolicy tests are permafailing expecting gid \"1000\" but geting \"root\"\n2058671 - whereabouts IPAM CNI ip-reconciler cronjob specification requires hostnetwork, api-int lb usage \u0026 proper backoff\n2058692 - [Secondary Scheduler] Creating secondaryscheduler instance fails with error \"key failed with : secondaryschedulers.operator.openshift.io \"secondary-scheduler\" not found\"\n2059187 - [Secondary Scheduler] - key failed with : serviceaccounts \"secondary-scheduler\" is forbidden\n2059212 - [tracker] Backport https://github.com/util-linux/util-linux/commit/eab90ef8d4f66394285e0cff1dfc0a27242c05aa\n2059213 - ART cannot build installer images due to missing terraform binaries for some architectures\n2059338 - A fully upgraded 4.10 cluster defaults to HW-13 hardware version even if HW-15 is default (and supported)\n2059490 - The operator image in CSV file of the ART DPU network operator bundle is incorrect\n2059567 - vMedia based IPI installation of OpenShift fails on Nokia servers due to issues with virtual media attachment and boot source override\n2059586 - (release-4.11) Insights operator doesn\u0027t reconcile clusteroperator status condition messages\n2059654 - Dynamic demo plugin proxy example out of date\n2059674 - Demo plugin fails to build\n2059716 - cloud-controller-manager flaps operator version during 4.9 -\u003e 4.10 update\n2059791 - [vSphere CSI driver Operator] didn\u0027t update \u0027vsphere_csi_driver_error\u0027 metric value when fixed the error manually\n2059840 - [LSO]Could not 
gather logs for pod diskmaker-discovery and diskmaker-manager\n2059943 - MetalLB: Move CI config files to metallb repo from dev-scripts repo\n2060037 - Configure logging level of FRR containers\n2060083 - CMO doesn\u0027t react to changes in clusteroperator console\n2060091 - CMO produces invalid alertmanager statefulset if console cluster .status.consoleURL is unset\n2060133 - [OVN RHEL upgrade] could not find IP addresses: failed to lookup link br-ex: Link not found\n2060147 - RHEL8 Workers Need to Ensure libseccomp is up to date at install time\n2060159 - LGW: External-\u003eService of type ETP=Cluster doesn\u0027t go to the node\n2060329 - Detect unsupported amount of workloads before rendering a lazy or crashing topology\n2060334 - Azure VNET lookup fails when the NIC subnet is in a different resource group\n2060361 - Unable to enumerate NICs due to missing the \u0027primary\u0027 field due to security restrictions\n2060406 - Test \u0027operators should not create watch channels very often\u0027 fails\n2060492 - Update PtpConfigSlave source-crs to use network_transport L2 instead of UDPv4\n2060509 - Incorrect installation of ibmcloud vpc csi driver in IBM Cloud ROKS 4.10\n2060532 - LSO e2e tests are run against default image and namespace\n2060534 - openshift-apiserver pod in crashloop due to unable to reach kubernetes svc ip\n2060549 - ErrorAddingLogicalPort: duplicate IP found in ECMP Pod route cache!\n2060553 - service domain can\u0027t be resolved when networkpolicy is used in OCP 4.10-rc\n2060583 - Remove Console internal-kubevirt plugin SDK package\n2060605 - Broken access to public images: Unable to connect to the server: no basic auth credentials\n2060617 - IBMCloud destroy DNS regex not strict enough\n2060687 - Azure Ci: SubscriptionDoesNotSupportZone - does not support availability zones at location \u0027westus\u0027\n2060697 - [AWS] partitionNumber cannot work for specifying Partition number\n2060714 - [DOCS] Change source_labels to sourceLabels in 
\"Configuring remote write storage\" section\n2060837 - [oc-mirror] Catalog merging error when two or more bundles does not have a set Replace field\n2060894 - Preceding/Trailing Whitespaces In Form Elements on the add page\n2060924 - Console white-screens while using debug terminal\n2060968 - Installation failing due to ironic-agent.service not starting properly\n2060970 - Bump recommended FCOS to 35.20220213.3.0\n2061002 - Conntrack entry is not removed for LoadBalancer IP\n2061301 - Traffic Splitting Dialog is Confusing With Only One Revision\n2061303 - Cachito request failure with vendor directory is out of sync with go.mod/go.sum\n2061304 - workload info gatherer - don\u0027t serialize empty images map\n2061333 - White screen for Pipeline builder page\n2061447 - [GSS] local pv\u0027s are in terminating state\n2061496 - etcd RecentBackup=Unknown ControllerStarted contains no message string\n2061527 - [IBMCloud] infrastructure asset missing CloudProviderType\n2061544 - AzureStack is hard-coded to use Standard_LRS for the disk type\n2061549 - AzureStack install with internal publishing does not create api DNS record\n2061611 - [upstream] The marker of KubeBuilder doesn\u0027t work if it is close to the code\n2061732 - Cinder CSI crashes when API is not available\n2061755 - Missing breadcrumb on the resource creation page\n2061833 - A single worker can be assigned to multiple baremetal hosts\n2061891 - [IPI on IBMCLOUD] missing ?br-sao? 
region in openshift installer\n2061916 - mixed ingress and egress policies can result in half-isolated pods\n2061918 - Topology Sidepanel style is broken\n2061919 - Egress Ip entry stays on node\u0027s primary NIC post deletion from hostsubnet\n2062007 - MCC bootstrap command lacks template flag\n2062126 - IPfailover pod is crashing during creation showing keepalived_script doesn\u0027t exist\n2062151 - Add RBAC for \u0027infrastructures\u0027 to operator bundle\n2062355 - kubernetes-nmstate resources and logs not included in must-gathers\n2062459 - Ingress pods scheduled on the same node\n2062524 - [Kamelet Sink] Topology crashes on click of Event sink node if the resource is created source to Uri over ref\n2062558 - Egress IP with openshift sdn in not functional on worker node. \n2062568 - CVO does not trigger new upgrade again after fail to update to unavailable payload\n2062645 - configure-ovs: don\u0027t restart networking if not necessary\n2062713 - Special Resource Operator(SRO) - No sro_used_nodes metric\n2062849 - hw event proxy is not binding on ipv6 local address\n2062920 - Project selector is too tall with only a few projects\n2062998 - AWS GovCloud regions are recognized as the unknown regions\n2063047 - Configuring a full-path query log file in CMO breaks Prometheus with the latest version of the operator\n2063115 - ose-aws-efs-csi-driver has invalid dependency in go.mod\n2063164 - metal-ipi-ovn-ipv6 Job Permafailing and Blocking OpenShift 4.11 Payloads: insights operator is not available\n2063183 - DefragDialTimeout is set to low for large scale OpenShift Container Platform - Cluster\n2063194 - cluster-autoscaler-default will fail when automated etcd defrag is running on large scale OpenShift Container Platform 4 - Cluster\n2063321 - [OVN]After reboot egress node, lr-policy-list was not correct, some duplicate records or missed internal IPs\n2063324 - MCO template output directories created with wrong mode causing render failure in unprivileged 
container environments\n2063375 - ptp operator upgrade from 4.9 to 4.10 stuck at pending due to service account requirements not met\n2063414 - on OKD 4.10, when image-registry is enabled, the /etc/hosts entry is missing on some nodes\n2063699 - Builds - Builds - Logs: i18n misses. \n2063708 - Builds - Builds - Logs: translation correction needed. \n2063720 - Metallb EBGP neighbor stuck in active until adding ebgp-multihop (directly connected neighbors)\n2063732 - Workloads - StatefulSets : I18n misses\n2063747 - When building a bundle, the push command fails because is passes a redundant \"IMG=\" on the the CLI\n2063753 - User Preferences - Language - Language selection : Page refresh rquired to change the UI into selected Language. \n2063756 - User Preferences - Applications - Insecure traffic : i18n misses\n2063795 - Remove go-ovirt-client go.mod replace directive\n2063829 - During an IPI install with the 4.10.4 installer on vSphere, getting \"Check\": platform.vsphere.network: Invalid value: \"VLAN_3912\": unable to find network provided\"\n2063831 - etcd quorum pods landing on same node\n2063897 - Community tasks not shown in pipeline builder page\n2063905 - PrometheusOperatorWatchErrors alert may fire shortly in case of transient errors from the API server\n2063938 - sing the hard coded rest-mapper in library-go\n2063955 - cannot download operator catalogs due to missing images\n2063957 - User Management - Users : While Impersonating user, UI is not switching into user\u0027s set language\n2064024 - SNO OCP upgrade with DU workload stuck at waiting for kube-apiserver static pod\n2064170 - [Azure] Missing punctuation in the installconfig.controlPlane.platform.azure.osDisk explain\n2064239 - Virtualization Overview page turns into blank page\n2064256 - The Knative traffic distribution doesn\u0027t update percentage in sidebar\n2064553 - UI should prefer to use the virtio-win configmap than v2v-vmware configmap for windows creation\n2064596 - Fix the hubUrl docs 
link in pipeline quicksearch modal\n2064607 - Pipeline builder makes too many (100+) API calls upfront\n2064613 - [OCPonRHV]- after few days that cluster is alive we got error in storage operator\n2064693 - [IPI][OSP] Openshift-install fails to find the shiftstack cloud defined in clouds.yaml in the current directory\n2064702 - CVE-2022-27191 golang: crash in a golang.org/x/crypto/ssh server\n2064705 - the alertmanagerconfig validation catches the wrong value for invalid field\n2064744 - Errors trying to use the Debug Container feature\n2064984 - Update error message for label limits\n2065076 - Access monitoring Routes based on monitoring-shared-config creates wrong URL\n2065160 - Possible leak of load balancer targets on AWS Machine API Provider\n2065224 - Configuration for cloudFront in image-registry operator configuration is ignored \u0026 duration is corrupted\n2065290 - CVE-2021-23648 sanitize-url: XSS\n2065338 - VolumeSnapshot creation date sorting is broken\n2065507 - `oc adm upgrade` should return ReleaseAccepted condition to show upgrade status. 
\n2065510 - [AWS] failed to create cluster on ap-southeast-3\n2065513 - Dev Perspective -\u003e Project Dashboard shows Resource Quotas which are a bit misleading, and too many decimal places\n2065547 - (release-4.11) Gather kube-controller-manager pod logs with garbage collector errors\n2065552 - [AWS] Failed to install cluster on AWS ap-southeast-3 region due to image-registry panic error\n2065577 - user with user-workload-monitoring-config-edit role can not create user-workload-monitoring-config configmap\n2065597 - Cinder CSI is not configurable\n2065682 - Remote write relabel config adds label __tmp_openshift_cluster_id__ to all metrics\n2065689 - Internal Image registry with GCS backend does not redirect client\n2065749 - Kubelet slowly leaking memory and pods eventually unable to start\n2065785 - ip-reconciler job does not complete, halts node drain\n2065804 - Console backend check for Web Terminal Operator incorrectly returns HTTP 204\n2065806 - stop considering Mint mode as supported on Azure\n2065840 - the cronjob object is created with a wrong api version batch/v1beta1 when created via the openshift console\n2065893 - [4.11] Bootimage bump tracker\n2066009 - CVE-2021-44906 minimist: prototype pollution\n2066232 - e2e-aws-workers-rhel8 is failing on ansible check\n2066418 - [4.11] Update channels information link is taking to a 404 error page\n2066444 - The \"ingress\" clusteroperator\u0027s relatedObjects field has kind names instead of resource names\n2066457 - Prometheus CI failure: 503 Service Unavailable\n2066463 - [IBMCloud] failed to list DNS zones: Exactly one of ApiKey or RefreshToken must be specified\n2066605 - coredns template block matches cluster API to loose\n2066615 - Downstream OSDK still use upstream image for Hybird type operator\n2066619 - The GitCommit of the `oc-mirror version` is not correct\n2066665 - [ibm-vpc-block] Unable to change default storage class\n2066700 - [node-tuning-operator] - Minimize wildcard/privilege Usage in 
Cluster and Local Roles\n2066754 - Cypress reports for core tests are not captured\n2066782 - Attached disk keeps in loading status when add disk to a power off VM by non-privileged user\n2066865 - Flaky test: In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies\n2066886 - openshift-apiserver pods never going NotReady\n2066887 - Dependabot alert: Path traversal in github.com/valyala/fasthttp\n2066889 - Dependabot alert: Path traversal in github.com/valyala/fasthttp\n2066923 - No rule to make target \u0027docker-push\u0027 when building the SRO bundle\n2066945 - SRO appends \"arm64\" instead of \"aarch64\" to the kernel name and it doesn\u0027t match the DTK\n2067004 - CMO contains grafana image though grafana is removed\n2067005 - Prometheus rule contains grafana though grafana is removed\n2067062 - should update prometheus-operator resources version\n2067064 - RoleBinding in Developer Console is dropping all subjects when editing\n2067155 - Incorrect operator display name shown in pipelines quickstart in devconsole\n2067180 - Missing i18n translations\n2067298 - Console 4.10 operand form refresh\n2067312 - PPT event source is lost when received by the consumer\n2067384 - OCP 4.10 should be firing APIRemovedInNextEUSReleaseInUse for APIs removed in 1.25\n2067456 - OCP 4.11 should be firing APIRemovedInNextEUSReleaseInUse and APIRemovedInNextReleaseInUse for APIs removed in 1.25\n2067995 - Internal registries with a big number of images delay pod creation due to recursive SELinux file context relabeling\n2068115 - resource tab extension fails to show up\n2068148 - [4.11] /etc/redhat-release symlink is broken\n2068180 - OCP UPI on AWS with STS enabled is breaking the Ingress operator\n2068181 - Event source powered with kamelet type source doesn\u0027t show associated deployment in resources tab\n2068490 - OLM descriptors integration test failing\n2068538 - 
Crashloop back-off popover visual spacing defects\n2068601 - Potential etcd inconsistent revision and data occurs\n2068613 - ClusterRoleUpdated/ClusterRoleBindingUpdated Spamming Event Logs\n2068908 - Manual blog link change needed\n2069068 - reconciling Prometheus Operator Deployment failed while upgrading from 4.7.46 to 4.8.35\n2069075 - [Alibaba 4.11.0-0.nightly] cluster storage component in Progressing state\n2069181 - Disabling community tasks is not working\n2069198 - Flaky CI test in e2e/pipeline-ci\n2069307 - oc mirror hangs when processing the Red Hat 4.10 catalog\n2069312 - extend rest mappings with \u0027job\u0027 definition\n2069457 - Ingress operator has superfluous finalizer deletion logic for LoadBalancer-type services\n2069577 - ConsolePlugin example proxy authorize is wrong\n2069612 - Special Resource Operator (SRO) - Crash when nodeSelector does not match any nodes\n2069632 - Not able to download previous container logs from console\n2069643 - ConfigMaps leftovers while uninstalling SpecialResource with configmap\n2069654 - Creating VMs with YAML on Openshift Virtualization UI is missing labels `flavor`, `os` and `workload`\n2069685 - UI crashes on load if a pinned resource model does not exist\n2069705 - prometheus target \"serviceMonitor/openshift-metallb-system/monitor-metallb-controller/0\" has a failure with \"server returned HTTP status 502 Bad Gateway\"\n2069740 - On-prem loadbalancer ports conflict with kube node port range\n2069760 - In developer perspective divider does not show up in navigation\n2069904 - Sync upstream 1.18.1 downstream\n2069914 - Application Launcher groupings are not case-sensitive\n2069997 - [4.11] should add user containers in /etc/subuid and /etc/subgid to support run pods in user namespaces\n2070000 - Add warning alerts for installing standalone k8s-nmstate\n2070020 - InContext doesn\u0027t work for Event Sources\n2070047 - Kuryr: Prometheus when installed on the cluster shouldn\u0027t report any alerts in firing 
state apart from Watchdog and AlertmanagerReceiversNotConfigured\n2070160 - Copy-to-clipboard and \u003cpre\u003e elements cause display issues for ACM dynamic plugins\n2070172 - SRO uses the chart\u0027s name as Helm release, not the SpecialResource\u0027s\n2070181 - [MAPO] serverGroupName ignored\n2070457 - Image vulnerability Popover overflows from the visible area\n2070674 - [GCP] Routes get timed out and nonresponsive after creating 2K service routes\n2070703 - some ipv6 network policy tests consistently failing\n2070720 - [UI] Filter reset doesn\u0027t work on Pods/Secrets/etc pages and complete list disappears\n2070731 - details switch label is not clickable on add page\n2070791 - [GCP]Image registry are crash on cluster with GCP workload identity enabled\n2070792 - service \"openshift-marketplace/marketplace-operator-metrics\" is not annotated with capability\n2070805 - ClusterVersion: could not download the update\n2070854 - cv.status.capabilities.enabledCapabilities doesn?t show the day-2 enabled caps when there are errors on resources update\n2070887 - Cv condition ImplicitlyEnabledCapabilities doesn?t complain about the disabled capabilities which is previously enabled\n2070888 - Cannot bind driver vfio-pci when apply sriovnodenetworkpolicy with type vfio-pci\n2070929 - OVN-Kubernetes: EgressIP breaks access from a pod with EgressIP to other host networked pods on different nodes\n2071019 - rebase vsphere csi driver 2.5\n2071021 - vsphere driver has snapshot support missing\n2071033 - conditionally relabel volumes given annotation not working - SELinux context match is wrong\n2071139 - Ingress pods scheduled on the same node\n2071364 - All image building tests are broken with \" error: build error: attempting to convert BUILD_LOGLEVEL env var value \"\" to integer: strconv.Atoi: parsing \"\": invalid syntax\n2071578 - Monitoring navigation should not be shown if monitoring is not available (CRC)\n2071599 - RoleBidings are not getting updated for 
ClusterRole in OpenShift Web Console\n2071614 - Updating EgressNetworkPolicy rejecting with error UnsupportedMediaType\n2071617 - remove Kubevirt extensions in favour of dynamic plugin\n2071650 - ovn-k ovn_db_cluster metrics are not exposed for SNO\n2071691 - OCP Console global PatternFly overrides adds padding to breadcrumbs\n2071700 - v1 events show \"Generated from\" message without the source/reporting component\n2071715 - Shows 404 on Environment nav in Developer console\n2071719 - OCP Console global PatternFly overrides link button whitespace\n2071747 - Link to documentation from the overview page goes to a missing link\n2071761 - Translation Keys Are Not Namespaced\n2071799 - Multus CNI should exit cleanly on CNI DEL when the API server is unavailable\n2071859 - ovn-kube pods spec.dnsPolicy should be Default\n2071914 - cloud-network-config-controller 4.10.5: Error building cloud provider client, err: %vfailed to initialize Azure environment: autorest/azure: There is no cloud environment matching the name \"\"\n2071998 - Cluster-version operator should share details of signature verification when it fails in \u0027Force: true\u0027 updates\n2072106 - cluster-ingress-operator tests do not build on go 1.18\n2072134 - Routes are not accessible within cluster from hostnet pods\n2072139 - vsphere driver has permissions to create/update PV objects\n2072154 - Secondary Scheduler operator panics\n2072171 - Test \"[sig-network][Feature:EgressFirewall] EgressFirewall should have no impact outside its namespace [Suite:openshift/conformance/parallel]\" fails\n2072195 - machine api doesn\u0027t issue client cert when AWS DNS suffix missing\n2072215 - Whereabouts ip-reconciler should be opt-in and not required\n2072389 - CVO exits upgrade immediately rather than waiting for etcd backup\n2072439 - openshift-cloud-network-config-controller reports wrong range of IP addresses for Azure worker nodes\n2072455 - make bundle overwrites supported-nic-ids_v1_configmap.yaml\n2072570 
- The namespace titles for operator-install-single-namespace test keep changing\n2072710 - Perfscale - pods time out waiting for OVS port binding (ovn-installed)\n2072766 - Cluster Network Operator stuck in CrashLoopBackOff when scheduled to same master\n2072780 - OVN kube-master does not clear NetworkUnavailableCondition on GCP BYOH Windows node\n2072793 - Drop \"Used Filesystem\" from \"Virtualization -\u003e Overview\"\n2072805 - Observe \u003e Dashboards: $__range variables cause PromQL query errors\n2072807 - Observe \u003e Dashboards: Missing `panel.styles` attribute for table panels causes JS error\n2072842 - (release-4.11) Gather namespace names with overlapping UID ranges\n2072883 - sometimes monitoring dashboards charts can not be loaded successfully\n2072891 - Update gcp-pd-csi-driver to 1.5.1;\n2072911 - panic observed in kubedescheduler operator\n2072924 - periodic-ci-openshift-release-master-ci-4.11-e2e-azure-techpreview-serial\n2072957 - ContainerCreateError loop leads to several thousand empty logfiles in the file system\n2072998 - update aws-efs-csi-driver to the latest version\n2072999 - Navigate from logs of selected Tekton task instead of last one\n2073021 - [vsphere] Failed to update OS on master nodes\n2073112 - Prometheus (uwm) externalLabels not showing always in alerts. \n2073113 - Warning is logged to the console: W0407 Defaulting of registry auth file to \"${HOME}/.docker/config.json\" is deprecated. \n2073176 - removing data in form does not remove data from yaml editor\n2073197 - Error in Spoke/SNO agent: Source image rejected: A signature was required, but no signature exists\n2073329 - Pipelines-plugin- Having different title for Pipeline Runs tab, on Pipeline Details page it\u0027s \"PipelineRuns\" and on Repository Details page it\u0027s \"Pipeline Runs\". 
\n2073373 - Update azure-disk-csi-driver to 1.16.0\n2073378 - failed egressIP assignment - cloud-network-config-controller does not delete failed cloudprivateipconfig\n2073398 - machine-api-provider-openstack does not clean up OSP ports after failed server provisioning\n2073436 - Update azure-file-csi-driver to v1.14.0\n2073437 - Topology performance: Firehose/useK8sWatchResources cache can return unexpected data format if isList differs on multiple calls\n2073452 - [sig-network] pods should successfully create sandboxes by other - failed (add)\n2073473 - [OVN SCALE][ovn-northd] Unnecessary SB record no-op changes added to SB transaction. \n2073522 - Update ibm-vpc-block-csi-driver to v4.2.0\n2073525 - Update vpc-node-label-updater to v4.1.2\n2073901 - Installation failed due to etcd operator Err:DefragControllerDegraded: failed to dial endpoint https://10.0.0.7:2379 with maintenance client: context canceled\n2073937 - Invalid retention time and invalid retention size should be validated at one place and have error log in one place for UMW\n2073938 - APIRemovedInNextEUSReleaseInUse alert for runtimeclasses\n2073945 - APIRemovedInNextEUSReleaseInUse alert for podsecuritypolicies\n2073972 - Invalid retention time and invalid retention size should be validated at one place and have error log in one place for platform monitoring\n2074009 - [OVN] ovn-northd doesn\u0027t clean Chassis_Private record after scale down to 0 a machineSet\n2074031 - Admins should be able to tune garbage collector aggressiveness (GOGC) for kube-apiserver if necessary\n2074062 - Node Tuning Operator(NTO) - Cloud provider profile rollback doesn\u0027t work well\n2074084 - CMO metrics not visible in the OCP webconsole UI\n2074100 - CRD filtering according to name broken\n2074210 - asia-south2, australia-southeast2, and southamerica-west1Missing from GCP regions\n2074237 - oc new-app --image-stream flag behavior is unclear\n2074243 - DefaultPlacement API allow empty enum value and remove 
default\n2074447 - cluster-dashboard: CPU Utilisation iowait and steal\n2074465 - PipelineRun fails in import from Git flow if \"main\" branch is default\n2074471 - Cannot delete namespace with a LB type svc and Kuryr when ExternalCloudProvider is enabled\n2074475 - [e2e][automation] kubevirt plugin cypress tests fail\n2074483 - coreos-installer doesnt work on Dell machines\n2074544 - e2e-metal-ipi-ovn-ipv6 failing due to recent CEO changes\n2074585 - MCG standalone deployment page goes blank when the KMS option is enabled\n2074606 - occm does not have permissions to annotate SVC objects\n2074612 - Operator fails to install due to service name lookup failure\n2074613 - nodeip-configuration container incorrectly attempts to relabel /etc/systemd/system\n2074635 - Unable to start Web Terminal after deleting existing instance\n2074659 - AWS installconfig ValidateForProvisioning always provides blank values to validate zone records\n2074706 - Custom EC2 endpoint is not considered by AWS EBS CSI driver\n2074710 - Transition to go-ovirt-client\n2074756 - Namespace column provide wrong data in ClusterRole Details -\u003e Rolebindings tab\n2074767 - Metrics page show incorrect values due to metrics level config\n2074807 - NodeFilesystemSpaceFillingUp alert fires even before kubelet GC kicks in\n2074902 - `oc debug node/nodename ? 
chroot /host somecommand` should exit with non-zero when the sub-command failed\n2075015 - etcd-guard connection refused event repeating pathologically (payload blocking)\n2075024 - Metal upgrades permafailing on metal3 containers crash looping\n2075050 - oc-mirror fails to calculate between two channels with different prefixes for the same version of OCP\n2075091 - Symptom Detection.Undiagnosed panic detected in pod\n2075117 - Developer catalog: Order dropdown (A-Z, Z-A) is miss-aligned (in a separate row)\n2075149 - Trigger Translations When Extensions Are Updated\n2075189 - Imports from dynamic-plugin-sdk lead to failed module resolution errors\n2075459 - Set up cluster on aws with rootvolumn io2 failed due to no iops despite it being configured\n2075475 - OVN-Kubernetes: egress router pod (redirect mode), access from pod on different worker-node (redirect) doesn\u0027t work\n2075478 - Bump documentationBaseURL to 4.11\n2075491 - nmstate operator cannot be upgraded on SNO\n2075575 - Local Dev Env - Prometheus 404 Call errors spam the console\n2075584 - improve clarity of build failure messages when using csi shared resources but tech preview is not enabled\n2075592 - Regression - Top of the web terminal drawer is missing a stroke/dropshadow\n2075621 - Cluster upgrade.[sig-mco] Machine config pools complete upgrade\n2075647 - \u0027oc adm upgrade ...\u0027 POSTs ClusterVersion, clobbering any unrecognized spec properties\n2075671 - Cluster Ingress Operator K8S API cache contains duplicate objects\n2075778 - Fix failing TestGetRegistrySamples test\n2075873 - Bump recommended FCOS to 35.20220327.3.0\n2076193 - oc patch command for the liveness probe and readiness probe parameters of an OpenShift router deployment doesn\u0027t take effect\n2076270 - [OCPonRHV] MachineSet scale down operation fails to delete the worker VMs\n2076277 - [RFE] [OCPonRHV] Add storage domain ID valueto Compute/ControlPlain section in the machine object\n2076290 - PTP operator readme 
missing documentation on BC setup via PTP config
2076297 - Router process ignores shutdown signal while starting up
2076323 - OLM blocks all operator installs if an openshift-marketplace catalogsource is unavailable
2076355 - The KubeletConfigController wrongly process multiple confs for a pool after having kubeletconfig in bootstrap
2076393 - [VSphere] survey fails to list datacenters
2076521 - Nodes in the same zone are not updated in the right order
2076527 - Pipeline Builder: Make unnecessary tekton hub API calls when the user types 'too fast'
2076544 - Whitespace (padding) is missing after an PatternFly update, already in 4.10
2076553 - Project access view replace group ref with user ref when updating their Role
2076614 - Missing Events component from the SDK API
2076637 - Configure metrics for vsphere driver to be reported
2076646 - openshift-install destroy unable to delete PVC disks in GCP if cluster identifier is longer than 22 characters
2076793 - CVO exits upgrade immediately rather than waiting for etcd backup
2076831 - [ocp4.11]Mem/cpu high utilization by apiserver/etcd for cluster stayed 10 hours
2076877 - network operator tracker to switch to use flowcontrol.apiserver.k8s.io/v1beta2 instead v1beta1 to be deprecated in k8s 1.26
2076880 - OKD: add cluster domain to the uploaded vm configs so that 30-local-dns-prepender can use it
2076975 - Metric unset during static route conversion in configure-ovs.sh
2076984 - TestConfigurableRouteNoConsumingUserNoRBAC fails in CI
2077050 - OCP should default to pd-ssd disk type on GCP
2077150 - Breadcrumbs on a few screens don't have correct top margin spacing
2077160 - Update owners for openshift/cluster-etcd-operator
2077357 - [release-4.11] 200ms packet delay with OVN controller turn on
2077373 - Accessibility warning on developer perspective
2077386 - Import page shows untranslated values for the route advanced routing>security options (devconsole~Edge)
2077457 - failure in test case "[sig-network][Feature:Router] The HAProxy router should serve the correct routes when running with the haproxy config manager"
2077497 - Rebase etcd to 3.5.3 or later
2077597 - machine-api-controller is not taking the proxy configuration when it needs to reach the RHV API
2077599 - OCP should alert users if they are on vsphere version <7.0.2
2077662 - AWS Platform Provisioning Check incorrectly identifies record as part of domain of cluster
2077797 - LSO pods don't have any resource requests
2077851 - "make vendor" target is not working
2077943 - If there is a service with multiple ports, and the route uses 8080, when editing the 8080 port isn't replaced, but a random port gets replaced and 8080 still stays
2077994 - Publish RHEL CoreOS AMIs in AWS ap-southeast-3 region
2078013 - drop multipathd.socket workaround
2078375 - When using the wizard with template using data source the resulting vm use pvc source
2078396 - [OVN AWS] EgressIP was not balanced to another egress node after original node was removed egress label
2078431 - [OCPonRHV] - ERROR failed to instantiate provider "openshift/local/ovirt" to obtain schema: ERROR fork/exec
2078526 - Multicast breaks after master node reboot/sync
2078573 - SDN CNI -Fail to create nncp when vxlan is up
2078634 - CRI-O not killing Calico CNI stalled (zombie) processes.
2078698 - search box may not completely remove content
2078769 - Different not translated filter group names (incl. Secret, Pipeline, PIpelineRun)
2078778 - [4.11] oc get ValidatingWebhookConfiguration,MutatingWebhookConfiguration fails and caused "apiserver panic'd...http2: panic serving xxx.xx.xxx.21:49748: cannot deep copy int" when AllRequestBodies audit-profile is used.
2078781 - PreflightValidation does not handle multiarch images
2078866 - [BM][IPI] Installation with bonds fail - DaemonSet "openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress
2078875 - OpenShift Installer fail to remove Neutron ports
2078895 - [OCPonRHV]-"cow" unsupported value in format field in install-config.yaml
2078910 - CNO spitting out ".spec.groups[0].rules[4].runbook_url: field not declared in schema"
2078945 - Ensure only one apiserver-watcher process is active on a node.
2078954 - network-metrics-daemon makes costly global pod list calls scaling per node
2078969 - Avoid update races between old and new NTO operands during cluster upgrades
2079012 - egressIP not migrated to correct workers after deleting machineset it was assigned
2079062 - Test for console demo plugin toast notification needs to be increased for ci testing
2079197 - [RFE] alert when more than one default storage class is detected
2079216 - Partial cluster update reference doc link returns 404
2079292 - containers prometheus-operator/kube-rbac-proxy violate PodSecurity
2079315 - (release-4.11) Gather ODF config data with Insights
2079422 - Deprecated 1.25 API call
2079439 - OVN Pods Assigned Same IP Simultaneously
2079468 - Enhance the waitForIngressControllerCondition for better CI results
2079500 - okd-baremetal-install uses fcos for bootstrap but rhcos for cluster
2079610 - Opeatorhub status shows errors
2079663 - change default image features in RBD storageclass
2079673 - Add flags to disable migrated code
2079685 - Storageclass creation page with "Enable encryption" is not displaying saved KMS connection details when vaulttenantsa details are available in csi-kms-details config
2079724 - cluster-etcd-operator - disable defrag-controller as there is unpredictable impact on large OpenShift Container Platform 4 - Cluster
2079788 - Operator restarts while applying the acm-ice example
2079789 - cluster drops ImplicitlyEnabledCapabilities during upgrade
2079803 - Upgrade-triggered etcd backup will be skip during serial upgrade
2079805 - Secondary scheduler operator should comply to restricted pod security level
2079818 - Developer catalog installation overlay (modal?) shows a duplicated padding
2079837 - [RFE] Hub/Spoke example with daemonset
2079844 - EFS cluster csi driver status stuck in AWSEFSDriverCredentialsRequestControllerProgressing with sts installation
2079845 - The Event Sinks catalog page now has a blank space on the left
2079869 - Builds for multiple kernel versions should be ran in parallel when possible
2079913 - [4.10] APIRemovedInNextEUSReleaseInUse alert for OVN endpointslices
2079961 - The search results accordion has no spacing between it and the side navigation bar.
2079965 - [rebase v1.24] [sig-node] PodOSRejection [NodeConformance] Kubelet should reject pod when the node OS doesn't match pod's OS [Suite:openshift/conformance/parallel] [Suite:k8s]
2080054 - TAGS arg for installer-artifacts images is not propagated to build images
2080153 - aws-load-balancer-operator-controller-manager pod stuck in ContainerCreating status
2080197 - etcd leader changes produce test churn during early stage of test
2080255 - EgressIP broken on AWS with OpenShiftSDN / latest nightly build
2080267 - [Fresh Installation] Openshift-machine-config-operator namespace is flooded with events related to clusterrole, clusterrolebinding
2080279 - CVE-2022-29810 go-getter: writes SSH credentials into logfile, exposing sensitive credentials to local uses
2080379 - Group all e2e tests as parallel or serial
2080387 - Visual connector not appear between the node if a node get created using "move connector" to a different application
2080416 - oc bash-completion problem
2080429 - CVO must ensure non-upgrade related changes are saved when desired payload fails to load
2080446 - Sync ironic images with latest bug fixes packages
2080679 - [rebase v1.24] [sig-cli] test failure
2080681 - [rebase v1.24] [sig-cluster-lifecycle] CSRs from machines that are not recognized by the cloud provider are not approved [Suite:openshift/conformance/parallel]
2080687 - [rebase v1.24] [sig-network][Feature:Router] tests are failing
2080873 - Topology graph crashes after update to 4.11 when Layout 2 (ColaForce) was selected previously
2080964 - Cluster operator special-resource-operator is always in Failing state with reason: "Reconciling simple-kmod"
2080976 - Avoid hooks config maps when hooks are empty
2081012 - [rebase v1.24] [sig-devex][Feature:OpenShiftControllerManager] TestAutomaticCreationOfPullSecrets [Suite:openshift/conformance/parallel]
2081018 - [rebase v1.24] [sig-imageregistry][Feature:Image] oc tag should work when only imagestreams api is available
2081021 - [rebase v1.24] [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources
2081062 - Unrevert RHCOS back to 8.6
2081067 - admin dev-console /settings/cluster should point out history may be excerpted
2081069 - [sig-network] pods should successfully create sandboxes by adding pod to network
2081081 - PreflightValidation "odd number of arguments passed as key-value pairs for logging" error
2081084 - [rebase v1.24] [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed
2081087 - [rebase v1.24] [sig-auth] ServiceAccounts should allow opting out of API token automount
2081119 - `oc explain` output of default overlaySize is outdated
2081172 - MetallLB: YAML view in webconsole does not show all the available key value pairs of all the objects
2081201 - cloud-init User check for Windows VM refuses to accept capitalized usernames
2081447 - Ingress operator performs spurious updates in response to API's defaulting of router deployment's router container's ports' protocol field
2081562 - lifecycle.posStart hook does not have network connectivity.
2081685 - Typo in NNCE Conditions
2081743 - [e2e] tests failing
2081788 - MetalLB: the crds are not validated until metallb is deployed
2081821 - SpecialResourceModule CRD is not installed after deploying SRO operator using brew bundle image via OLM
2081895 - Use the managed resource (and not the manifest) for resource health checks
2081997 - disconnected insights operator remains degraded after editing pull secret
2082075 - Removing huge amount of ports takes a lot of time.
2082235 - CNO exposes a generic apiserver that apparently does nothing
2082283 - Transition to new oVirt Terraform provider
2082360 - OCP 4.10.4, CNI: SDN; Whereabouts IPAM: Duplicate IP address with bond-cni
2082380 - [4.10.z] customize wizard is crashed
2082403 - [LSO] No new build local-storage-operator-metadata-container created
2082428 - oc patch healthCheckInterval with invalid "5 s" to the ingress-controller successfully
2082441 - [UPI] aws-load-balancer-operator-controller-manager failed to get VPC ID in UPI on AWS
2082492 - [IPI IBM]Can't create image-registry-private-configuration secret with error "specified resource key credentials does not contain HMAC keys"
2082535 - [OCPonRHV]-workers are cloned when "clone: false" is specified in install-config.yaml
2082538 - apirequests limits of Cluster CAPI Operator are too low for GCP platform
2082566 - OCP dashboard fails to load when the query to Prometheus takes more than 30s to return
2082604 - [IBMCloud][x86_64] IBM VPC does not properly support RHCOS Custom Image tagging
2082667 - No new machines provisioned while machineset controller drained old nodes for change to machineset
2082687 - [IBM Cloud][x86_64][CCCMO] IBM x86_64 CCM using unsupported --port argument
2082763 - Cluster install stuck on the applying for operatorhub "cluster"
2083149 - "Update blocked" label incorrectly displays on new minor versions in the "Other available paths" modal
2083153 - Unable to use application credentials for Manila PVC creation on OpenStack
2083154 - Dynamic plugin sdk tsdoc generation does not render docs for parameters
2083219 - DPU network operator doesn't deal with c1... inteface names
2083237 - [vsphere-ipi] Machineset scale up process delay
2083299 - SRO does not fetch mirrored DTK images in disconnected clusters
2083445 - [FJ OCP4.11 Bug]: RAID setting during IPI cluster deployment fails if iRMC port number is specified
2083451 - Update external serivces URLs to console.redhat.com
2083459 - Make numvfs > totalvfs error message more verbose
2083466 - Failed to create clusters on AWS C2S/SC2S due to image-registry MissingEndpoint error
2083514 - Operator ignores managementState Removed
2083641 - OpenShift Console Knative Eventing ContainerSource generates wrong api version when pointed to k8s Service
2083756 - Linkify not upgradeable message on ClusterSettings page
2083770 - Release image signature manifest filename extension is yaml
2083919 - openshift4/ose-operator-registry:4.10.0 having security vulnerabilities
2083942 - Learner promotion can temporarily fail with rpc not supported for learner errors
2083964 - Sink resources dropdown is not persisted in form yaml switcher in event source creation form
2083999 - "--prune-over-size-limit" is not working as expected
2084079 - prometheus route is not updated to "path: /api" after upgrade from 4.10 to 4.11
2084081 - nmstate-operator installed cluster on POWER shows issues while adding new dhcp interface
2084124 - The Update cluster modal includes a broken link
2084215 - Resource configmap "openshift-machine-api/kube-rbac-proxy" is defined by 2 manifests
2084249 - panic in ovn pod from an e2e-aws-single-node-serial nightly run
2084280 - GCP API Checks Fail if non-required APIs are not enabled
2084288 - "alert/Watchdog must have no gaps or changes" failing after bump
2084292 - Access to dashboard resources is needed in dynamic plugin SDK
2084331 - Resource with multiple capabilities included unless all capabilities are disabled
2084433 - Podsecurity violation error getting logged for ingresscontroller during deployment.
2084438 - Change Ping source spec.jsonData (deprecated) field to spec.data
2084441 - [IPI-Azure]fail to check the vm capabilities in install cluster
2084459 - Topology list view crashes when switching from chart view after moving sink from knative service to uri
2084463 - 5 control plane replica tests fail on ephemeral volumes
2084539 - update azure arm templates to support customer provided vnet
2084545 - [rebase v1.24] cluster-api-operator causes all techpreview tests to fail
2084580 - [4.10] No cluster name sanity validation - cluster name with a dot (".") character
2084615 - Add to navigation option on search page is not properly aligned
2084635 - PipelineRun creation from the GUI for a Pipeline with 2 workspaces hardcode the PVC storageclass
2084732 - A special resource that was created in OCP 4.9 can't be deleted after an upgrade to 4.10
2085187 - installer-artifacts fails to build with go 1.18
2085326 - kube-state-metrics is tripping APIRemovedInNextEUSReleaseInUse
2085336 - [IPI-Azure] Fail to create the worker node which HyperVGenerations is V2 or V1 and vmNetworkingType is Accelerated
2085380 - [IPI-Azure] Incorrect error prompt validate VM image and instance HyperV gen match when install cluster
2085407 - There is no Edit link/icon for labels on Node details page
2085721 - customization controller image name is wrong
2086056 - Missing doc for OVS HW offload
2086086 - Update Cluster Sample Operator dependencies and libraries for OCP 4.11
2086092 - update kube to v.24
2086143 - CNO uses too much memory
2086198 - Cluster CAPI Operator creates unnecessary defaulting webhooks
2086301 - kubernetes nmstate pods are not running after creating instance
2086408 - Podsecurity violation error getting logged for externalDNS operand pods during deployment
2086417 - Pipeline created from add flow has GIT Revision as required field
2086437 - EgressQoS CRD not available
2086450 - aws-load-balancer-controller-cluster pod logged Podsecurity violation error during deployment
2086459 - oc adm inspect fails when one of resources not exist
2086461 - CNO probes MTU unnecessarily in Hypershift, making cluster startup take too long
2086465 - External identity providers should log login attempts in the audit trail
2086469 - No data about title 'API Request Duration by Verb - 99th Percentile' display on the dashboard 'API Performance'
2086483 - baremetal-runtimecfg k8s dependencies should be on a par with 1.24 rebase
2086505 - Update oauth-server images to be consistent with ART
2086519 - workloads must comply to restricted security policy
2086521 - Icons of Knative actions are not clearly visible on the context menu in the dark mode
2086542 - Cannot create service binding through drag and drop
2086544 - ovn-k master daemonset on hypershift shouldn't log token
2086546 - Service binding connector is not visible in the dark mode
2086718 - PowerVS destroy code does not work
2086728 - [hypershift] Move drain to controller
2086731 - Vertical pod autoscaler operator needs a 4.11 bump
2086734 - Update csi driver images to be consistent with ART
2086737 - cloud-provider-openstack rebase to kubernetes v1.24
2086754 - Cluster resource override operator needs a 4.11 bump
2086759 - [IPI] OCP-4.11 baremetal - boot partition is not mounted on temporary directory
2086791 - Azure: Validate UltraSSD instances in multi-zone regions
2086851 - pods with multiple external gateways may only be have ECMP routes for one gateway
2086936 - vsphere ipi should use cores by default instead of sockets
2086958 - flaky e2e in kube-controller-manager-operator TestPodDisruptionBudgetAtLimitAlert
2086959 - flaky e2e in kube-controller-manager-operator TestLogLevel
2086962 - oc-mirror publishes metadata with --dry-run when publishing to mirror
2086964 - oc-mirror fails on differential run when mirroring a package with multiple channels specified
2086972 - oc-mirror does not error invalid metadata is passed to the describe command
2086974 - oc-mirror does not work with headsonly for operator 4.8
2087024 - The oc-mirror result mapping.txt is not correct, can't be used by `oc image mirror` command
2087026 - DTK's imagestream is missing from OCP 4.11 payload
2087037 - Cluster Autoscaler should use K8s 1.24 dependencies
2087039 - Machine API components should use K8s 1.24 dependencies
2087042 - Cloud providers components should use K8s 1.24 dependencies
2087084 - remove unintentional nic support
2087103 - "Updating to release image" from 'oc' should point out that the cluster-version operator hasn't accepted the update
2087114 - Add simple-procfs-kmod in modprobe example in README.md
2087213 - Spoke BMH stuck "inspecting" when deployed via ZTP in 4.11 OCP hub
2087271 - oc-mirror does not check for existing workspace when performing mirror2mirror synchronization
2087556 - Failed to render DPU ovnk manifests
2087579 - `--keep-manifest-list=true` does not work for `oc adm release new`, only pick up the linux/amd64 manifest from the manifest list
2087680 - [Descheduler] Sync with sigs.k8s.io/descheduler
2087684 - KCMO should not be able to apply LowUpdateSlowReaction from Default WorkerLatencyProfile
2087685 - KASO should not be able to apply LowUpdateSlowReaction from Default WorkerLatencyProfile
2087687 - MCO does not generate event when user applies Default -> LowUpdateSlowReaction WorkerLatencyProfile
2087764 - Rewrite the registry backend will hit error
2087771 - [tracker] NetworkManager 1.36.0 loses DHCP lease and doesn't try again
2087772 - Bindable badge causes some layout issues with the side panel of bindable operator backed services
2087942 - CNO references images that are divergent from ART
2087944 - KafkaSink Node visualized incorrectly
2087983 - remove etcd_perf before restore
2087993 - PreflightValidation many "msg":"TODO: preflight checks" in the operator log
2088130 - oc-mirror init does not allow for automated testing
2088161 - Match dockerfile image name with the name used in the release repo
2088248 - Create HANA VM does not use values from customized HANA templates
2088304 - ose-console: enable source containers for open source requirements
2088428 - clusteroperator/baremetal stays in progressing: Applying metal3 resources state on a fresh install
2088431 - AvoidBuggyIPs field of addresspool should be removed
2088483 - oc adm catalog mirror returns 0 even if there are errors
2088489 - Topology list does not allow selecting an application group anymore (again)
2088533 - CRDs for openshift.io should have subresource.status failes on sharedconfigmaps.sharedresource and sharedsecrets.sharedresource
2088535 - MetalLB: Enable debug log level for downstream CI
2088541 - Default CatalogSources in openshift-marketplace namespace keeps throwing pod security admission warnings `would violate PodSecurity "restricted:v1.24"`
2088561 - BMH unable to start inspection: File name too long
2088634 - oc-mirror does not fail when catalog is invalid
2088660 - Nutanix IPI installation inside container failed
2088663 - Better to change the default value of --max-per-registry to 6
2089163 - NMState CRD out of sync with code
2089191 - should remove grafana from cluster-monitoring-config configmap in hypershift cluster
2089224 - openshift-monitoring/cluster-monitoring-config configmap always revert to default setting
2089254 - CAPI operator: Rotate token secret if its older than 30 minutes
2089276 - origin tests for egressIP and azure fail
2089295 - [Nutanix]machine stuck in Deleting phase when delete a machineset whose replicas>=2 and machine is Provisioning phase on Nutanix
2089309 - [OCP 4.11] Ironic inspector image fails to clean disks that are part of a multipath setup if they are passive paths
2089334 - All cloud providers should use service account credentials
2089344 - Failed to deploy simple-kmod
2089350 - Rebase sdn to 1.24
2089387 - LSO not taking mpath. ignoring device
2089392 - 120 node baremetal upgrade from 4.9.29 --> 4.10.13 crashloops on machine-approver
2089396 - oc-mirror does not show pruned image plan
2089405 - New topology package shows gray build icons instead of green/red icons for builds and pipelines
2089419 - do not block 4.10 to 4.11 upgrades if an existing CSI driver is found. Instead, warn about presence of third party CSI driver
2089488 - Special resources are missing the managementState field
2089563 - Update Power VS MAPI to use api's from openshift/api repo
2089574 - UWM prometheus-operator pod can't start up due to no master node in hypershift cluster
2089675 - Could not move Serverless Service without Revision (or while starting?)
2089681 - [Hypershift] EgressIP doesn't work in hypershift guest cluster
2089682 - Installer expects all nutanix subnets to have a cluster reference which is not the case for e.g. overlay networks
2089687 - alert message of MCDDrainError needs to be updated for new drain controller
2089696 - CR reconciliation is stuck in daemonset lifecycle
2089716 - [4.11][reliability]one worker node became NotReady on which ovnkube-node pod's memory increased sharply
2089719 - acm-simple-kmod fails to build
2089720 - [Hypershift] ICSP doesn't work for the guest cluster
2089743 - acm-ice fails to deploy: helm chart does not appear to be a gzipped archive
2089773 - Pipeline status filter and status colors doesn't work correctly with non-english languages
2089775 - keepalived can keep ingress VIP on wrong node under certain circumstances
2089805 - Config duration metrics aren't exposed
2089827 - MetalLB CI - backward compatible tests are failing due to the order of delete
2089909 - PTP e2e testing not working on SNO cluster
2089918 - oc-mirror skip-missing still returns 404 errors when images do not exist
2089930 - Bump OVN to 22.06
2089933 - Pods do not post readiness status on termination
2089968 - Multus CNI daemonset should use hostPath mounts with type: directory
2089973 - bump libs to k8s 1.24 for OCP 4.11
2089996 - Unnecessary yarn install runs in e2e tests
2090017 - Enable source containers to meet open source requirements
2090049 - destroying GCP cluster which has a compute node without infra id in name would fail to delete 2 k8s firewall-rules and VPC network
2090092 - Will hit error if specify the channel not the latest
2090151 - [RHEL scale up] increase the wait time so that the node has enough time to get ready
2090178 - VM SSH command generated by UI points at api VIP
2090182 - [Nutanix]Create a machineset with invalid image, machine stuck in "Provisioning" phase
2090236 - Only reconcile annotations and status for clusters
2090266 - oc adm release extract is failing on mutli arch image
2090268 - [AWS EFS] Operator not getting installed successfully on Hypershift Guest cluster
2090336 - Multus logging should be disabled prior to release
2090343 - Multus debug logging should be enabled temporarily for debugging podsandbox creation failures.
2090358 - Initiating drain log message is displayed before the drain actually starts
2090359 - Nutanix mapi-controller: misleading error message when the failure is caused by wrong credentials
2090405 - [tracker] weird port mapping with asymmetric traffic [rhel-8.6.0.z]
2090430 - gofmt code
2090436 - It takes 30min-60min to update the machine count in custom MachineConfigPools (MCPs) when a node is removed from the pool
2090437 - Bump CNO to k8s 1.24
2090465 - golang version mismatch
2090487 - Change default SNO Networking Type and disallow OpenShiftSDN a supported networking Type
2090537 - failure in ovndb migration when db is not ready in HA mode
2090549 - dpu-network-operator shall be able to run on amd64 arch platform
2090621 - Metal3 plugin does not work properly with updated NodeMaintenance CRD
2090627 - Git commit and branch are empty in MetalLB log
2090692 - Bump to latest 1.24 k8s release
2090730 - must-gather should include multus logs.
2090731 - nmstate deploys two instances of webhook on a single-node cluster
2090751 - oc image mirror skip-missing flag does not skip images
2090755 - MetalLB: BGPAdvertisement validation allows duplicate entries for ip pool selector, ip address pools, node selector and bgp peers
2090774 - Add Readme to plugin directory
2090794 - MachineConfigPool cannot apply a configuration after fixing the pods that caused a drain alert
2090809 - gm.ClockClass invalid syntax parse error in linux ptp daemon logs
2090816 - OCP 4.8 Baremetal IPI installation failure: "Bootstrap failed to complete: timed out waiting for the condition"
2090819 - oc-mirror does not catch invalid registry input when a namespace is specified
2090827 - Rebase CoreDNS to 1.9.2 and k8s 1.24
2090829 - Bump OpenShift router to k8s 1.24
2090838 - Flaky test: ignore flapping host interface 'tunbr'
2090843 - addLogicalPort() performance/scale optimizations
2090895 - Dynamic plugin nav extension "startsWith" property does not work
2090929 - [etcd] cluster-backup.sh script has a conflict to use the '/etc/kubernetes/static-pod-certs' folder if a custom API certificate is defined
2090993 - [AI Day2] Worker node overview page crashes in Openshift console with TypeError
2091029 - Cancel rollout action only appears when rollout is completed
2091030 - Some BM may fail booting with default bootMode strategy
2091033 - [Descheduler]: provide ability to override included/excluded namespaces
2091087 - ODC Helm backend Owners file needs updates
2091106 - Dependabot alert: Unhandled exception in gopkg.in/yaml.v3
2091142 - Dependabot alert: Unhandled exception in gopkg.in/yaml.v3
2091167 - IPsec runtime enabling not work in hypershift
2091218 - Update Dev Console Helm backend to use helm 3.9.0
2091433 - Update AWS instance types
2091542 - Error Loading/404 not found page shown after clicking "Current namespace only"
2091547 - Internet connection test with proxy permanently fails
2091567 - oVirt CSI driver should use latest go-ovirt-client
2091595 - Alertmanager configuration can't use OpsGenie's entity field when AlertmanagerConfig is enabled
2091599 - PTP Dual Nic | Extend Events 4.11 - Up/Down master interface affects all the other interface in the same NIC accoording the events and metric
2091603 - WebSocket connection restarts when switching tabs in WebTerminal
2091613 - simple-kmod fails to build due to missing KVC
2091634 - OVS 2.15 stops handling traffic once ovs-dpctl(2.17.2) is used against it
2091730 - MCO e2e tests are failing with "No token found in openshift-monitoring secrets"
2091746 - "Oh no! Something went wrong" shown after user creates MCP without 'spec'
2091770 - CVO gets stuck downloading an upgrade, with the version pod complaining about invalid options
2091854 - clusteroperator status filter doesn't match all values in Status column
2091901 - Log stream paused right after updating log lines in Web Console in OCP4.10
2091902 - unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server has received too many requests and has asked us to try again later
2091990 - wrong external-ids for ovn-controller lflow-cache-limit-kb
2092003 - PR 3162 | BZ 2084450 - invalid URL schema for AWS causes tests to perma fail and break the cloud-network-config-controller
2092041 - Bump cluster-dns-operator to k8s 1.24
2092042 - Bump cluster-ingress-operator to k8s 1.24
2092047 - Kube 1.24 rebase for cloud-network-config-controller
2092137 - Search doesn't show all entries when name filter is cleared
2092296 - Change Default MachineCIDR of Power VS Platform from 10.x to 192.168.0.0/16
2092390 - [RDR] [UI] Multiple instances of Object Bucket, Object Bucket Claims and 'Overview' tab is present under Storage section on the Hub cluster when navigated back from the Managed cluster using the Hybrid console dropdown
2092395 - etcdHighNumberOfFailedGRPCRequests alerts with wrong results
2092408 - Wrong icon is used in the virtualization overview permissions card
2092414 - In virtualization overview "running vm per templates" template list can be improved
2092442 - Minimum time between drain retries is not the expected one
2092464 - marketplace catalog defaults to v4.10
2092473 - libovsdb performance backports
2092495 - ovn: use up to 4 northd threads in non-SNO clusters
2092502 - [azure-file-csi-driver] Stop shipping a NFS StorageClass
2092509 - Invalid memory address error if non existing caBundle is configured in DNS-over-TLS using ForwardPlugins
2092572 - acm-simple-kmod chart should create the namespace on the spoke cluster
2092579 - Don't retry pod deletion if objects are not existing
2092650 - [BM IPI with Provisioning Network] Worker nodes are not provisioned: ironic-agent is stuck before writing into disks
2092703 - Incorrect mount propagation information in container status
2092815 - can't delete the unwanted image from registry by oc-mirror
2092851 - [Descheduler]: allow to customize the LowNodeUtilization strategy thresholds
2092867 - make repository name unique in acm-ice/acm-simple-kmod examples
2092880 - etcdHighNumberOfLeaderChanges returns incorrect number of leadership changes
2092887 - oc-mirror list releases command uses filter-options flag instead of filter-by-os
2092889 - Incorrect updating of EgressACLs using direction "from-lport"
2092918 - CVE-2022-30321 go-getter: unsafe download (issue 1 of 3)
2092923 - CVE-2022-30322 go-getter: unsafe download (issue 2 of 3)
2092925 - CVE-2022-30323 go-getter: unsafe download (issue 3 of 3)
2092928 - CVE-2022-26945 go-getter: command injection vulnerability
2092937 - WebScale: OVN-k8s forwarding to external-gw over the secondary interfaces failing
2092966 - [OCP 4.11] [azure] /etc/udev/rules.d/66-azure-storage.rules missing from initramfs
2093044 - Azure machine-api-provider-azure Availability Set Name Length Limit
2093047 - Dynamic Plugins: Generated API markdown duplicates `checkAccess` and `useAccessReview` doc
2093126 - [4.11] Bootimage bump tracker
2093236 - DNS operator stopped reconciling after 4.10 to 4.11 upgrade | 4.11 nightly to 4.11 nightly upgrade
2093288 - Default catalogs fails liveness/readiness probes
2093357 - Upgrading sno spoke with acm-ice, causes the sno to get unreachable
2093368 - Installer orphans FIPs created for LoadBalancer Services on `cluster destroy`
2093396 - Remove node-tainting for too-small MTU
2093445 - ManagementState reconciliation breaks SR
2093454 - Router proxy protocol doesn't work with dual-stack (IPv4 and IPv6) clusters
2093462 - Ingress Operator isn't reconciling the ingress cluster operator object
2093586 - Topology: Ctrl+space opens the quick search modal, but doesn't close it again
2093593 - Import from Devfile shows configuration options that shoudn't be there
2093597 - Import: Advanced option sentence is splited into two parts and headlines has no padding
2093600 - Project access tab should apply new permissions before it delete old ones
2093601 - Project access page doesn't allow the user to update the settings twice (without manually reload the content)
2093783 - Should bump cluster-kube-descheduler-operator to kubernetes version V1.24
2093797 - 'oc registry login' with serviceaccount function need update
2093819 - An etcd member for a new machine was never added to the cluster
2093930 - Gather console helm install totals metric
2093957 - Oc-mirror write dup metadata to registry backend
2093986 - Podsecurity violation error getting logged for pod-identity-webhook
2093992 - Cluster version operator acknowledges upgrade failing on periodic-ci-openshift-release-master-nightly-4.11-e2e-metal-ipi-upgrade-ovn-ipv6
2094023 - Add Git Flow - Template Labels for Deployment show as DeploymentConfig
2094024 - bump oauth-apiserver deps to include 1.23.1 k8s that fixes etcd blips
2094039 - egressIP panics with nil pointer dereference
2094055 - Bump coreos-installer for s390x Secure Execution
2094071 - No runbook created for SouthboundStale alert
2094088 - Columns in NBDB may never be updated by OVNK
2094104 - Demo dynamic plugin image tests should be skipped when testing console-operator
2094152 - Alerts in the virtualization overview status card aren't filtered
2094196 - Add default and validating webhooks for Power VS MAPI
2094227 - Topology: Create Service Binding should not be the last option (even under delete)
2094239 - custom pool Nodes with 0 nodes are always populated in progress bar
2094303 - If og is configured with sa, operator installation will be failed.
2094335 - [Nutanix] - debug logs are enabled by default in machine-controller
2094342 - apirequests limits of Cluster CAPI Operator are too low for Azure platform
2094438 - Make AWS URL parsing more lenient for GetNodeEgressIPConfiguration
2094525 - Allow automatic upgrades for efs operator
2094532 - ovn-windows CI jobs are broken
2094675 - PTP Dual Nic | Extend Events 4.11 - when kill the phc2sys We have notification for the ptp4l physical master moved to free run
2094694 - [Nutanix] No cluster name sanity validation - cluster name with a dot (".") character
2094704 - Verbose log activated on kube-rbac-proxy in deployment prometheus-k8s
2094801 - Kuryr controller keep restarting when handling IPs with leading zeros
2094806 - Machine API oVrit component should use K8s 1.24 dependencies
2094816 - Kuryr controller restarts when over quota
2094833 - Repository overview page does not show default PipelineRun template for developer user
2094857 - CloudShellTerminal loops indefinitely if DevWorkspace CR goes into failed state
2094864 - Rebase CAPG to latest changes
2094866 - oc-mirror does not always delete all manifests associated with an image during pruning
2094896 - Run 'openshift-install agent create image' has segfault exception if cluster-manifests directory missing
2094902 - Fix installer cross-compiling
2094932 - MGMT-10403 Ingress should enable single-node cluster expansion on upgraded clusters
2095049 - managed-csi StorageClass does not create PVs
2095071 - Backend tests fails after devfile registry update
2095083 - Observe > Dashboards: Graphs may change a lot on automatic refresh
2095110 - [ovn] northd container termination script must use bash
2095113 - [ovnkube] bump to openvswitch2.17-2.17.0-22.el8fdp
2095226 - Added changes to verify cloud connection and dhcpservices quota of a powervs instance
2095229 - ingress-operator pod in CrashLoopBackOff in 4.11 after upgrade starting in 4.6 due to go panic
2095231 - Kafka Sink sidebar in topology is empty
2095247 - Event sink form doesn't show channel as sink until app is refreshed
2095248 - [vSphere-CSI-Driver] does not report volume count limits correctly caused pod with multi volumes maybe schedule to not satisfied volume count node
2095256 - Samples Owner needs to be Updated
2095264 - ovs-configuration.service fails with Error: Failed to modify connection 'ovs-if-br-ex': failed to update connection: error writing to file '/etc/NetworkManager/systemConnectionsMerged/ovs-if-br-ex.nmconnection'
2095362 - oVirt CSI driver operator should use latest go-ovirt-client
2095574 - e2e-agnostic CI job fails
2095687 - Debug Container shown for build logs and on click ui breaks
2095703 - machinedeletionhooks doesn't work in vsphere cluster and BM cluster
2095716 - New PSA component for Pod Security Standards enforcement is refusing openshift-operators ns
2095756 - CNO panics with concurrent map read/write
2095772 - Memory requests for ovnkube-master containers are over-sized
2095917 - Nutanix set osDisk with diskSizeGB rather than diskSizeMiB
2095941 - DNS Traffic not kept local to zone or node when Calico SDN utilized
2096053 - Builder Image icons
in Git Import flow are hard to see in Dark mode\n2096226 - crio fails to bind to tentative IP, causing service failure since RHOCS was rebased on RHEL 8.6\n2096315 - NodeClockNotSynchronising alert\u0027s severity should be critical\n2096350 - Web console doesn\u0027t display webhook errors for upgrades\n2096352 - Collect whole journal in gather\n2096380 - acm-simple-kmod references deprecated KVC example\n2096392 - Topology node icons are not properly visible in Dark mode\n2096394 - Add page Card items background color does not match with column background color in Dark mode\n2096413 - br-ex not created due to default bond interface having a different mac address than expected\n2096496 - FIPS issue on OCP SNO with RT Kernel via performance profile\n2096605 - [vsphere] no validation checking for diskType\n2096691 - [Alibaba 4.11] Specifying ResourceGroup id in install-config.yaml, New pv are still getting created to default ResourceGroups\n2096855 - `oc adm release new` failed with error when use an existing multi-arch release image as input\n2096905 - Openshift installer should not use the prism client embedded in nutanix terraform provider\n2096908 - Dark theme issue in pipeline builder, Helm rollback form, and Git import\n2097000 - KafkaConnections disappear from Topology after creating KafkaSink in Topology\n2097043 - No clean way to specify operand issues to KEDA OLM operator\n2097047 - MetalLB: matchExpressions used in CR like L2Advertisement, BGPAdvertisement, BGPPeers allow duplicate entries\n2097067 - ClusterVersion history pruner does not always retain initial completed update entry\n2097153 - poor performance on API call to vCenter ListTags with thousands of tags\n2097186 - PSa autolabeling in 4.11 env upgraded from 4.10 does not work due to missing RBAC objects\n2097239 - Change Lower CPU limits for Power VS cloud\n2097246 - Kuryr: verify and unit jobs failing due to upstream OpenStack dropping py36 support\n2097260 - openshift-install create manifests 
failed for Power VS platform\n2097276 - MetalLB CI deploys the operator via manifests and not using the csv\n2097282 - chore: update external-provisioner to the latest upstream release\n2097283 - chore: update external-snapshotter to the latest upstream release\n2097284 - chore: update external-attacher to the latest upstream release\n2097286 - chore: update node-driver-registrar to the latest upstream release\n2097334 - oc plugin help shows \u0027kubectl\u0027\n2097346 - Monitoring must-gather doesn\u0027t seem to be working anymore in 4.11\n2097400 - Shared Resource CSI Driver needs additional permissions for validation webhook\n2097454 - Placeholder bug for OCP 4.11.0 metadata release\n2097503 - chore: rebase against latest external-resizer\n2097555 - IngressControllersNotUpgradeable: load balancer service has been modified; changes must be reverted before upgrading\n2097607 - Add Power VS support to Webhooks tests in actuator e2e test\n2097685 - Ironic-agent can\u0027t restart because of existing container\n2097716 - settings under httpConfig is dropped with AlertmanagerConfig v1beta1\n2097810 - Required Network tools missing for Testing e2e PTP\n2097832 - clean up unused IPv6DualStackNoUpgrade feature gate\n2097940 - openshift-install destroy cluster traps if vpcRegion not specified\n2097954 - 4.11 installation failed at monitoring and network clusteroperators with error \"conmon: option parsing failed: Unknown option --log-global-size-max\" making all jobs failing\n2098172 - oc-mirror does not validatethe registry in the storage config\n2098175 - invalid license in python-dataclasses-0.8-2.el8 spec\n2098177 - python-pint-0.10.1-2.el8 has unused Patch0 in spec file\n2098242 - typo in SRO specialresourcemodule\n2098243 - Add error check to Platform create for Power VS\n2098392 - [OCP 4.11] Ironic cannot match \"wwn\" rootDeviceHint for a multipath device\n2098508 - Control-plane-machine-set-operator report panic\n2098610 - No need to check the push permission 
with ?manifests-only option\n2099293 - oVirt cluster API provider should use latest go-ovirt-client\n2099330 - Edit application grouping is shown to user with view only access in a cluster\n2099340 - CAPI e2e tests for AWS are missing\n2099357 - ovn-kubernetes needs explicit RBAC coordination leases for 1.24 bump\n2099358 - Dark mode+Topology update: Unexpected selected+hover border and background colors for app groups\n2099528 - Layout issue: No spacing in delete modals\n2099561 - Prometheus returns HTTP 500 error on /favicon.ico\n2099582 - Format and update Repository overview content\n2099611 - Failures on etcd-operator watch channels\n2099637 - Should print error when use --keep-manifest-list\\xfalse for manifestlist image\n2099654 - Topology performance: Endless rerender loop when showing a Http EventSink (KameletBinding)\n2099668 - KubeControllerManager should degrade when GC stops working\n2099695 - Update CAPG after rebase\n2099751 - specialresourcemodule stacktrace while looping over build status\n2099755 - EgressIP node\u0027s mgmtIP reachability configuration option\n2099763 - Update icons for event sources and sinks in topology, Add page, and context menu\n2099811 - UDP Packet loss in OpenShift using IPv6 [upcall]\n2099821 - exporting a pointer for the loop variable\n2099875 - The speaker won\u0027t start if there\u0027s another component on the host listening on 8080\n2099899 - oc-mirror looks for layers in the wrong repository when searching for release images during publishing\n2099928 - [FJ OCP4.11 Bug]: Add unit tests to image_customization_test file\n2099968 - [Azure-File-CSI] failed to provisioning volume in ARO cluster\n2100001 - Sync upstream v1.22.0 downstream\n2100007 - Run bundle-upgrade failed from the traditional File-Based Catalog installed operator\n2100033 - OCP 4.11 IPI - Some csr remain \"Pending\" post deployment\n2100038 - failure to update special-resource-lifecycle table during update Event\n2100079 - SDN needs explicit RBAC 
coordination leases for 1.24 bump\n2100138 - release info --bugs has no differentiator between Jira and Bugzilla\n2100155 - kube-apiserver-operator should raise an alert when there is a Pod Security admission violation\n2100159 - Dark theme: Build icon for pending status is not inverted in topology sidebar\n2100323 - Sqlit-based catsrc cannot be ready due to \"Error: open ./db-xxxx: permission denied\"\n2100347 - KASO retains old config values when switching from Medium/Default to empty worker latency profile\n2100356 - Remove Condition tab and create option from console as it is deprecated in OSP-1.8\n2100439 - [gce-pd] GCE PD in-tree storage plugin tests not running\n2100496 - [OCPonRHV]-oVirt API returns affinity groups without a description field\n2100507 - Remove redundant log lines from obj_retry.go\n2100536 - Update API to allow EgressIP node reachability check\n2100601 - Update CNO to allow EgressIP node reachability check\n2100643 - [Migration] [GCP]OVN can not rollback to SDN\n2100644 - openshift-ansible FTBFS on RHEL8\n2100669 - Telemetry should not log the full path if it contains a username\n2100749 - [OCP 4.11] multipath support needs multipath modules\n2100825 - Update machine-api-powervs go modules to latest version\n2100841 - tiny openshift-install usability fix for setting KUBECONFIG\n2101460 - An etcd member for a new machine was never added to the cluster\n2101498 - Revert Bug 2082599: add upper bound to number of failed attempts\n2102086 - The base image is still 4.10 for operator-sdk 1.22\n2102302 - Dummy bug for 4.10 backports\n2102362 - Valid regions should be allowed in GCP install config\n2102500 - Kubernetes NMState pods can not evict due to PDB on an SNO cluster\n2102639 - Drain happens before other image-registry pod is ready to service requests, causing disruption\n2102782 - topolvm-controller get into CrashLoopBackOff few minutes after install\n2102834 - [cloud-credential-operator]container has runAsNonRoot and image will run as 
root\n2102947 - [VPA] recommender is logging errors for pods with init containers\n2103053 - [4.11] Backport Prow CI improvements from master\n2103075 - Listing secrets in all namespaces with a specific labelSelector does not work properly\n2103080 - br-ex not created due to default bond interface having a different mac address than expected\n2103177 - disabling ipv6 router advertisements using \"all\" does not disable it on secondary interfaces\n2103728 - Carry HAProxy patch \u0027BUG/MEDIUM: h2: match absolute-path not path-absolute for :path\u0027\n2103749 - MachineConfigPool is not getting updated\n2104282 - heterogeneous arch: oc adm extract encodes arch specific release payload pullspec rather than the manifestlisted pullspec\n2104432 - [dpu-network-operator] Updating images to be consistent with ART\n2104552 - kube-controller-manager operator 4.11.0-rc.0 degraded on disabled monitoring stack\n2104561 - 4.10 to 4.11 update: Degraded node: unexpected on-disk state: mode mismatch for file: \"/etc/crio/crio.conf.d/01-ctrcfg-pidsLimit\"; expected: -rw-r--r--/420/0644; received: ----------/0/0\n2104589 - must-gather namespace should have ?privileged? 
warn and audit pod security labels besides enforce\n2104701 - In CI 4.10 HAProxy must-gather takes longer than 10 minutes\n2104717 - NetworkPolicies: ovnkube-master pods crashing due to panic: \"invalid memory address or nil pointer dereference\"\n2104727 - Bootstrap node should honor http proxy\n2104906 - Uninstall fails with Observed a panic: runtime.boundsError\n2104951 - Web console doesn\u0027t display webhook errors for upgrades\n2104991 - Completed pods may not be correctly cleaned up\n2105101 - NodeIP is used instead of EgressIP if egressPod is recreated within 60 seconds\n2105106 - co/node-tuning: Waiting for 15/72 Profiles to be applied\n2105146 - Degraded=True noise with: UpgradeBackupControllerDegraded: unable to retrieve cluster version, no completed update was found in cluster version status history\n2105167 - BuildConfig throws error when using a label with a / in it\n2105334 - vmware-vsphere-csi-driver-controller can\u0027t use host port error on e2e-vsphere-serial\n2105382 - Add a validation webhook for Nutanix machine provider spec in Machine API Operator\n2105468 - The ccoctl does not seem to know how to leverage the VMs service account to talk to GCP APIs. \n2105937 - telemeter golangci-lint outdated blocking ART PRs that update to Go1.18\n2106051 - Unable to deploy acm-ice using latest SRO 4.11 build\n2106058 - vSphere defaults to SecureBoot on; breaks installation of out-of-tree drivers [4.11.0]\n2106062 - [4.11] Bootimage bump tracker\n2106116 - IngressController spec.tuningOptions.healthCheckInterval validation allows invalid values such as \"0abc\"\n2106163 - Samples ImageStreams vs. registry.redhat.io: unsupported: V2 schema 1 manifest digests are no longer supported for image pulls\n2106313 - bond-cni: backport bond-cni GA items to 4.11\n2106543 - Typo in must-gather release-4.10\n2106594 - crud/other-routes.spec.ts Cypress test failing at a high rate in CI\n2106723 - [4.11] Upgrade from 4.11.0-rc0 -\u003e 4.11.0-rc.1 failed. 
rpm-ostree status shows No space left on device\n2106855 - [4.11.z] externalTrafficPolicy=Local is not working in local gateway mode if ovnkube-node is restarted\n2107493 - ReplicaSet prometheus-operator-admission-webhook has timed out progressing\n2107501 - metallb greenwave tests failure\n2107690 - Driver Container builds fail with \"error determining starting point for build: no FROM statement found\"\n2108175 - etcd backup seems to not be triggered in 4.10.18--\u003e4.10.20 upgrade\n2108617 - [oc adm release] extraction of the installer against a manifestlisted payload referenced by tag leads to a bad release image reference\n2108686 - rpm-ostreed: start limit hit easily\n2110505 - [Upgrade]deployment openshift-machine-api/machine-api-operator has a replica failure FailedCreate\n2110715 - openshift-controller-manager(-operator) namespace should clear run-level annotations\n2111055 - dummy bug for 4.10.z bz2110938\n\n5. References:\n\nhttps://access.redhat.com/security/cve/CVE-2018-25009\nhttps://access.redhat.com/security/cve/CVE-2018-25010\nhttps://access.redhat.com/security/cve/CVE-2018-25012\nhttps://access.redhat.com/security/cve/CVE-2018-25013\nhttps://access.redhat.com/security/cve/CVE-2018-25014\nhttps://access.redhat.com/security/cve/CVE-2018-25032\nhttps://access.redhat.com/security/cve/CVE-2019-5827\nhttps://access.redhat.com/security/cve/CVE-2019-13750\nhttps://access.redhat.com/security/cve/CVE-2019-13751\nhttps://access.redhat.com/security/cve/CVE-2019-17594\nhttps://access.redhat.com/security/cve/CVE-2019-17595\nhttps://access.redhat.com/security/cve/CVE-2019-18218\nhttps://access.redhat.com/security/cve/CVE-2019-19603\nhttps://access.redhat.com/security/cve/CVE-2019-20838\nhttps://access.redhat.com/security/cve/CVE-2020-13435\nhttps://access.redhat.com/security/cve/CVE-2020-14155\nhttps://access.redhat.com/security/cve/CVE-2020-17541\nhttps://access.redhat.com/security/cve/CVE-2020-19131\nhttps://access.redhat.com/security/cve/CVE-2020-24370\nhttp
s://access.redhat.com/security/cve/CVE-2020-28493\nhttps://access.redhat.com/security/cve/CVE-2020-35492\nhttps://access.redhat.com/security/cve/CVE-2020-36330\nhttps://access.redhat.com/security/cve/CVE-2020-36331\nhttps://access.redhat.com/security/cve/CVE-2020-36332\nhttps://access.redhat.com/security/cve/CVE-2021-3481\nhttps://access.redhat.com/security/cve/CVE-2021-3580\nhttps://access.redhat.com/security/cve/CVE-2021-3634\nhttps://access.redhat.com/security/cve/CVE-2021-3672\nhttps://access.redhat.com/security/cve/CVE-2021-3695\nhttps://access.redhat.com/security/cve/CVE-2021-3696\nhttps://access.redhat.com/security/cve/CVE-2021-3697\nhttps://access.redhat.com/security/cve/CVE-2021-3737\nhttps://access.redhat.com/security/cve/CVE-2021-4115\nhttps://access.redhat.com/security/cve/CVE-2021-4156\nhttps://access.redhat.com/security/cve/CVE-2021-4189\nhttps://access.redhat.com/security/cve/CVE-2021-20095\nhttps://access.redhat.com/security/cve/CVE-2021-20231\nhttps://access.redhat.com/security/cve/CVE-2021-20232\nhttps://access.redhat.com/security/cve/CVE-2021-23177\nhttps://access.redhat.com/security/cve/CVE-2021-23566\nhttps://access.redhat.com/security/cve/CVE-2021-23648\nhttps://access.redhat.com/security/cve/CVE-2021-25219\nhttps://access.redhat.com/security/cve/CVE-2021-31535\nhttps://access.redhat.com/security/cve/CVE-2021-31566\nhttps://access.redhat.com/security/cve/CVE-2021-36084\nhttps://access.redhat.com/security/cve/CVE-2021-36085\nhttps://access.redhat.com/security/cve/CVE-2021-36086\nhttps://access.redhat.com/security/cve/CVE-2021-36087\nhttps://access.redhat.com/security/cve/CVE-2021-38185\nhttps://access.redhat.com/security/cve/CVE-2021-38593\nhttps://access.redhat.com/security/cve/CVE-2021-40528\nhttps://access.redhat.com/security/cve/CVE-2021-41190\nhttps://access.redhat.com/security/cve/CVE-2021-41617\nhttps://access.redhat.com/security/cve/CVE-2021-42771\nhttps://access.redhat.com/security/cve/CVE-2021-43527\nhttps://access.redhat.com/security/
cve/CVE-2021-43818\nhttps://access.redhat.com/security/cve/CVE-2021-44225\nhttps://access.redhat.com/security/cve/CVE-2021-44906\nhttps://access.redhat.com/security/cve/CVE-2022-0235\nhttps://access.redhat.com/security/cve/CVE-2022-0778\nhttps://access.redhat.com/security/cve/CVE-2022-1012\nhttps://access.redhat.com/security/cve/CVE-2022-1215\nhttps://access.redhat.com/security/cve/CVE-2022-1271\nhttps://access.redhat.com/security/cve/CVE-2022-1292\nhttps://access.redhat.com/security/cve/CVE-2022-1586\nhttps://access.redhat.com/security/cve/CVE-2022-1621\nhttps://access.redhat.com/security/cve/CVE-2022-1629\nhttps://access.redhat.com/security/cve/CVE-2022-1706\nhttps://access.redhat.com/security/cve/CVE-2022-1729\nhttps://access.redhat.com/security/cve/CVE-2022-2068\nhttps://access.redhat.com/security/cve/CVE-2022-2097\nhttps://access.redhat.com/security/cve/CVE-2022-21698\nhttps://access.redhat.com/security/cve/CVE-2022-22576\nhttps://access.redhat.com/security/cve/CVE-2022-23772\nhttps://access.redhat.com/security/cve/CVE-2022-23773\nhttps://access.redhat.com/security/cve/CVE-2022-23806\nhttps://access.redhat.com/security/cve/CVE-2022-24407\nhttps://access.redhat.com/security/cve/CVE-2022-24675\nhttps://access.redhat.com/security/cve/CVE-2022-24903\nhttps://access.redhat.com/security/cve/CVE-2022-24921\nhttps://access.redhat.com/security/cve/CVE-2022-25313\nhttps://access.redhat.com/security/cve/CVE-2022-25314\nhttps://access.redhat.com/security/cve/CVE-2022-26691\nhttps://access.redhat.com/security/cve/CVE-2022-26945\nhttps://access.redhat.com/security/cve/CVE-2022-27191\nhttps://access.redhat.com/security/cve/CVE-2022-27774\nhttps://access.redhat.com/security/cve/CVE-2022-27776\nhttps://access.redhat.com/security/cve/CVE-2022-27782\nhttps://access.redhat.com/security/cve/CVE-2022-28327\nhttps://access.redhat.com/security/cve/CVE-2022-28733\nhttps://access.redhat.com/security/cve/CVE-2022-28734\nhttps://access.redhat.com/security/cve/CVE-2022-28735\nhttps://acces
s.redhat.com/security/cve/CVE-2022-28736\nhttps://access.redhat.com/security/cve/CVE-2022-28737\nhttps://access.redhat.com/security/cve/CVE-2022-29162\nhttps://access.redhat.com/security/cve/CVE-2022-29810\nhttps://access.redhat.com/security/cve/CVE-2022-29824\nhttps://access.redhat.com/security/cve/CVE-2022-30321\nhttps://access.redhat.com/security/cve/CVE-2022-30322\nhttps://access.redhat.com/security/cve/CVE-2022-30323\nhttps://access.redhat.com/security/cve/CVE-2022-32250\nhttps://access.redhat.com/security/updates/classification/#important\n\n6. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYvOfk9zjgjWX9erEAQhJ/w//UlbBGKBBFBAyfEmQf9Zu0yyv6MfZW0Zl\niO1qXVIl9UQUFjTY5ejerx7cP8EBWLhKaiiqRRjbjtj+w+ENGB4LLj6TEUrSM5oA\nYEmhnX3M+GUKF7Px61J7rZfltIOGhYBvJ+qNZL2jvqz1NciVgI4/71cZWnvDbGpa\n02w3Dn0JzhTSR9znNs9LKcV/anttJ3NtOYhqMXnN8EpKdtzQkKRazc7xkOTxfxyl\njRiER2Z0TzKDE6dMoVijS2Sv5j/JF0LRwetkZl6+oh8ehKh5GRV3lPg3eVkhzDEo\n/gp0P9GdLMHi6cS6uqcREbod//waSAa7cssgULoycFwjzbDK3L2c+wMuWQIgXJca\nRYuP6wvrdGwiI1mgUi/226EzcZYeTeoKxnHkp7AsN9l96pJYafj0fnK1p9NM/8g3\njBE/W4K8jdDNVd5l1Z5O0Nyxk6g4P8MKMe10/w/HDXFPSgufiCYIGX4TKqb+ESIR\nSuYlSMjoGsB4mv1KMDEUJX6d8T05lpEwJT0RYNdZOouuObYMtcHLpRQHH9mkj86W\npHdma5aGG/mTMvSMW6l6L05uT41Azm6fVimTv+E5WvViBni2480CVH+9RexKKSyL\nXcJX1gaLdo+72I/gZrtT+XE5tcJ3Sf5fmfsenQeY4KFum/cwzbM6y7RGn47xlEWB\nxBWKPzRxz0Q=9r0B\n-----END PGP SIGNATURE-----\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n", "sources": [ { "db": "NVD", "id": "CVE-2020-36331" }, { "db": "JVNDB", "id": "JVNDB-2018-016579" }, { "db": "VULHUB", "id": "VHN-391910" }, { "db": "VULMON", "id": "CVE-2020-36331" }, { "db": "PACKETSTORM", "id": "165286" }, { "db": "PACKETSTORM", "id": "165288" }, { "db": "PACKETSTORM", "id": "165296" }, { 
"db": "PACKETSTORM", "id": "165631" }, { "db": "PACKETSTORM", "id": "164967" }, { "db": "PACKETSTORM", "id": "169076" }, { "db": "PACKETSTORM", "id": "168042" } ], "trust": 2.43 }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "NVD", "id": "CVE-2020-36331", "trust": 4.1 }, { "db": "PACKETSTORM", "id": "165288", "trust": 0.8 }, { "db": "JVNDB", "id": "JVNDB-2018-016579", "trust": 0.8 }, { "db": "PACKETSTORM", "id": "164842", "trust": 0.7 }, { "db": "PACKETSTORM", "id": "165287", "trust": 0.7 }, { "db": "CNNVD", "id": "CNNVD-202105-1382", "trust": 0.7 }, { "db": "AUSCERT", "id": "ESB-2022.3977", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.2102", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.1965", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.4254", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.2485.2", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.1880", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.3905", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.1914", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.3789", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.0245", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.1959", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.4229", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2021072216", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2021061301", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2021060725", "trust": 0.6 }, { "db": "PACKETSTORM", "id": "163645", "trust": 0.6 }, { "db": "VULHUB", "id": "VHN-391910", "trust": 0.1 }, { "db": "VULMON", "id": "CVE-2020-36331", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "165286", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "165296", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "165631", "trust": 0.1 }, { "db": "PACKETSTORM", "id": 
"164967", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "169076", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "168042", "trust": 0.1 } ], "sources": [ { "db": "VULHUB", "id": "VHN-391910" }, { "db": "VULMON", "id": "CVE-2020-36331" }, { "db": "JVNDB", "id": "JVNDB-2018-016579" }, { "db": "PACKETSTORM", "id": "165286" }, { "db": "PACKETSTORM", "id": "165288" }, { "db": "PACKETSTORM", "id": "165296" }, { "db": "PACKETSTORM", "id": "165631" }, { "db": "PACKETSTORM", "id": "164967" }, { "db": "PACKETSTORM", "id": "169076" }, { "db": "PACKETSTORM", "id": "168042" }, { "db": "CNNVD", "id": "CNNVD-202105-1382" }, { "db": "NVD", "id": "CVE-2020-36331" } ] }, "id": "VAR-202105-1459", "iot": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": true, "sources": [ { "db": "VULHUB", "id": "VHN-391910" } ], "trust": 0.01 }, "last_update_date": "2024-07-23T19:24:35.575000Z", "patch": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/patch#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "title": "Bug\u00a01956856", "trust": 0.8, "url": "https://lists.debian.org/debian-lts-announce/2021/06/msg00006.html" }, { "title": "libwebp Buffer error vulnerability fix", "trust": 0.6, "url": "http://123.124.177.30/web/xxk/bdxqbyid.tag?id=151881" }, { "title": "Amazon Linux AMI: ALAS-2023-1740", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux_ami\u0026qid=alas-2023-1740" }, { "title": "Amazon Linux 2: ALAS2-2023-2031", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=alas2-2023-2031" }, { "title": "Debian Security Advisories: DSA-4930-1 libwebp -- security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=debian_security_advisories\u0026qid=6dad0021173658916444dfc89f8d2495" }, 
{ "title": "Red Hat: Important: OpenShift Container Platform 4.11.0 bug fix and security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20225069 - security advisory" } ], "sources": [ { "db": "VULMON", "id": "CVE-2020-36331" }, { "db": "JVNDB", "id": "JVNDB-2018-016579" }, { "db": "CNNVD", "id": "CNNVD-202105-1382" } ] }, "problemtype_data": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "problemtype": "CWE-125", "trust": 1.1 }, { "problemtype": "Out-of-bounds read (CWE-125) [NVD Evaluation ]", "trust": 0.8 } ], "sources": [ { "db": "VULHUB", "id": "VHN-391910" }, { "db": "JVNDB", "id": "JVNDB-2018-016579" }, { "db": "NVD", "id": "CVE-2020-36331" } ] }, "references": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/references#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 1.8, "url": "https://security.netapp.com/advisory/ntap-20211112-0001/" }, { "trust": 1.8, "url": "https://support.apple.com/kb/ht212601" }, { "trust": 1.8, "url": "https://www.debian.org/security/2021/dsa-4930" }, { "trust": 1.8, "url": "http://seclists.org/fulldisclosure/2021/jul/54" }, { "trust": 1.8, "url": "https://bugzilla.redhat.com/show_bug.cgi?id=1956856" }, { "trust": 1.8, "url": "https://lists.debian.org/debian-lts-announce/2021/06/msg00005.html" }, { "trust": 1.8, "url": "https://lists.debian.org/debian-lts-announce/2021/06/msg00006.html" }, { "trust": 1.0, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36331" }, { "trust": 0.7, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25013" }, { "trust": 0.7, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25009" }, { "trust": 0.7, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25010" }, { "trust": 
0.7, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25014" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2018-25013" }, { "trust": 0.6, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25012" }, { "trust": 0.6, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-5827" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2020-13435" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2019-5827" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2020-24370" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2019-13751" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2018-25014" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2019-19603" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2018-25012" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2019-17594" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2021-36086" }, { "trust": 0.6, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13750" }, { "trust": 0.6, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13751" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2021-36084" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2020-17541" }, { "trust": 0.6, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17594" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2021-36087" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2020-36331" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2021-31535" }, { "trust": 0.6, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19603" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2020-36330" }, { "trust": 0.6, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-18218" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2021-20232" }, { "trust": 
0.6, "url": "https://access.redhat.com/security/cve/cve-2019-20838" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2021-20231" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2020-36332" }, { "trust": 0.6, "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2020-14155" }, { "trust": 0.6, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20838" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2021-36085" }, { "trust": 0.6, "url": "https://bugzilla.redhat.com/):" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2019-17595" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2021-3481" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2018-25009" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2018-25010" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2019-13750" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2019-18218" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2021-3580" }, { "trust": 0.6, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17595" }, { "trust": 0.6, "url": "https://access.redhat.com/security/team/contact/" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.0245" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.3977" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.1959" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/165287/red-hat-security-advisory-2021-5127-05.html" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2021060725" }, { "trust": 0.6, "url": "https://vigilance.fr/vulnerability/libwebp-five-vulnerabilities-35580" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.2485.2" }, { "trust": 0.6, "url": 
"https://www.auscert.org.au/bulletins/esb-2021.1965" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2021072216" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.3789" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.3905" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.1914" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.4229" }, { "trust": 0.6, "url": "https://support.apple.com/en-us/ht212601" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.1880" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2021061301" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/163645/apple-security-advisory-2021-07-21-1.html" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.4254" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.2102" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/164842/red-hat-security-advisory-2021-4231-04.html" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/165288/red-hat-security-advisory-2021-5129-06.html" }, { "trust": 0.5, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-16135" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2021-3200" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2020-35522" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2020-35524" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2021-27645" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2021-33574" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2021-43527" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2020-14145" }, { "trust": 0.5, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14145" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2020-35521" }, { 
"trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2021-35942" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2021-3572" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2020-12762" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2021-22898" }, { "trust": 0.5, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-12762" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2020-16135" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2021-3800" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2021-3445" }, { "trust": 0.5, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13435" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2021-22925" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2021-20266" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2021-22876" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2021-33560" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2021-42574" }, { "trust": 0.5, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14155" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2020-35523" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2021-28153" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2021-3426" }, { "trust": 0.5, "url": "https://access.redhat.com/security/updates/classification/#moderate" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-24370" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-3778" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-3712" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-17541" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-3796" }, { "trust": 0.3, "url": 
"https://access.redhat.com/security/vulnerabilities/rhsb-2021-009" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-20673" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-44228" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-23841" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2018-20673" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-23840" }, { "trust": 0.3, "url": "https://issues.jboss.org/):" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36330" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-10001" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2020-10001" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35524" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35522" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-37136" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35523" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-37137" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-21409" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35521" }, { "trust": 0.2, "url": "https://docs.openshift.com/container-platform/4.9/release_notes/ocp-4-9-release-notes.html" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-24504" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-27777" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-20239" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-36158" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-35448" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3635" }, { "trust": 0.2, "url": 
"https://access.redhat.com/security/cve/cve-2021-20284" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-36386" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-0427" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-24586" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3348" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-26140" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3487" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-26146" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-31440" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3732" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-0129" }, { "trust": 0.2, "url": "https://docs.openshift.com/container-platform/4.7/logging/cluster-logging-upgrading.html" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-24502" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3564" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-0427" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-23133" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-26144" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3679" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-36312" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-29368" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-24588" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-29646" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-29155" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3489" }, { "trust": 0.2, "url": 
"https://access.redhat.com/security/cve/cve-2020-29660" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-26139" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-28971" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-14615" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-26143" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3600" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-26145" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-33200" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-29650" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-33033" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-20194" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-26147" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-31916" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-24503" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-14615" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-24502" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-31829" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3573" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-20197" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-26141" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-28950" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-24587" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-24503" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3659" }, { "trust": 0.2, "url": 
"https://access.redhat.com/security/cve/cve-2021-41617" }, { "trust": 0.1, "url": "https://cwe.mitre.org/data/definitions/125.html" }, { "trust": 0.1, "url": "https://nvd.nist.gov" }, { "trust": 0.1, "url": "https://alas.aws.amazon.com/alas-2023-1740.html" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:5128" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.8/release_notes/ocp-4-8-release-notes.html" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.8/logging/cluster-logging-upgrading.html" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:5129" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.9/logging/cluster-logging-upgrading.html" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-20317" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-43267" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:5137" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-release-notes.html" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-37750" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-27823" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3733" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-1870" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3575" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-30758" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-13558" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-15389" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33938" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-5727" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2021-33929" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-5785" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-30665" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-12973" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-30689" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-20847" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-30682" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33928" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/latest/migration_toolkit_for_containers/installing-mtc.html" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-22946" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-18032" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-1801" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33930" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-1765" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2016-4658" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-20845" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-26927" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-20847" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-27918" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-30749" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-30795" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-5785" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-1788" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-5727" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2021-30744" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21775" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21806" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-27814" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-36241" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-30797" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2016-4658" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13558" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-20321" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-27842" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-1799" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21779" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-29623" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-20271" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3948" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-22947" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-27828" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-12973" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-20845" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-1844" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-1871" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-29338" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-30734" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-26926" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-30720" }, { 
"trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-28650" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-27843" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-24870" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-27845" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-1789" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-30663" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-30799" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3272" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:0202" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15389" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-27824" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33194" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:4627" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36332" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36328" }, { "trust": 0.1, "url": "https://www.debian.org/security/faq" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36329" }, { "trust": 0.1, "url": "https://security-tracker.debian.org/tracker/libwebp" }, { "trust": 0.1, "url": "https://www.debian.org/security/" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25011" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-28327" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-44225" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0235" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-32250" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-27776" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2022-1586" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-43818" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:5068" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-27774" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-26945" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-4189" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-38593" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-20095" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-24407" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1271" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1629" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-2097" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3634" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.11/release_notes/ocp-4-11-release-notes.html" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-19131" }, { "trust": 0.1, "url": "https://access.redhat.com/security/updates/classification/#important" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3696" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-24921" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-38185" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23648" }, { "trust": 0.1, "url": "https://github.com/util-linux/util-linux/commit/eab90ef8d4f66394285e0cff1dfc0a27242c05aa" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-2068" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-4156" }, { "trust": 0.1, "url": 
"https://access.redhat.com/errata/rhsa-2022:5069" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-25313" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-28733" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-27191" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-29162" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25032" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-35492" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3672" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-29824" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-23772" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23177" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1621" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-27782" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3737" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-28736" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-30321" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-42771" }, { "trust": 0.1, "url": "https://10.0.0.7:2379" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-21698" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1292" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-22576" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3697" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1706" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-28734" }, { "trust": 0.1, "url": 
"https://docs.openshift.com/container-platform/4.11/updating/updating-cluster-cli.html" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-40528" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-28737" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-30322" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-25219" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-44906" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-31566" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3695" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-25314" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-28735" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1215" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-23806" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1729" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-41190" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-29810" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-26691" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-24903" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-4115" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1012" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23566" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-28493" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-25032" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-23773" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-24675" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2022-30323" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0778" } ], "sources": [ { "db": "VULHUB", "id": "VHN-391910" }, { "db": "VULMON", "id": "CVE-2020-36331" }, { "db": "JVNDB", "id": "JVNDB-2018-016579" }, { "db": "PACKETSTORM", "id": "165286" }, { "db": "PACKETSTORM", "id": "165288" }, { "db": "PACKETSTORM", "id": "165296" }, { "db": "PACKETSTORM", "id": "165631" }, { "db": "PACKETSTORM", "id": "164967" }, { "db": "PACKETSTORM", "id": "169076" }, { "db": "PACKETSTORM", "id": "168042" }, { "db": "CNNVD", "id": "CNNVD-202105-1382" }, { "db": "NVD", "id": "CVE-2020-36331" } ] }, "sources": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "VULHUB", "id": "VHN-391910" }, { "db": "VULMON", "id": "CVE-2020-36331" }, { "db": "JVNDB", "id": "JVNDB-2018-016579" }, { "db": "PACKETSTORM", "id": "165286" }, { "db": "PACKETSTORM", "id": "165288" }, { "db": "PACKETSTORM", "id": "165296" }, { "db": "PACKETSTORM", "id": "165631" }, { "db": "PACKETSTORM", "id": "164967" }, { "db": "PACKETSTORM", "id": "169076" }, { "db": "PACKETSTORM", "id": "168042" }, { "db": "CNNVD", "id": "CNNVD-202105-1382" }, { "db": "NVD", "id": "CVE-2020-36331" } ] }, "sources_release_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2021-05-21T00:00:00", "db": "VULHUB", "id": "VHN-391910" }, { "date": "2021-05-21T00:00:00", "db": "VULMON", "id": "CVE-2020-36331" }, { "date": "2022-01-27T00:00:00", "db": "JVNDB", "id": "JVNDB-2018-016579" }, { "date": "2021-12-15T15:20:33", "db": "PACKETSTORM", "id": "165286" }, { "date": "2021-12-15T15:22:36", "db": "PACKETSTORM", "id": "165288" }, { "date": "2021-12-15T15:27:05", "db": "PACKETSTORM", "id": "165296" }, { "date": "2022-01-20T17:48:29", "db": "PACKETSTORM", "id": "165631" }, { "date": "2021-11-15T17:25:56", "db": 
"PACKETSTORM", "id": "164967" }, { "date": "2021-06-28T19:12:00", "db": "PACKETSTORM", "id": "169076" }, { "date": "2022-08-10T15:56:22", "db": "PACKETSTORM", "id": "168042" }, { "date": "2021-05-21T00:00:00", "db": "CNNVD", "id": "CNNVD-202105-1382" }, { "date": "2021-05-21T17:15:08.397000", "db": "NVD", "id": "CVE-2020-36331" } ] }, "sources_update_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2023-01-09T00:00:00", "db": "VULHUB", "id": "VHN-391910" }, { "date": "2023-01-09T00:00:00", "db": "VULMON", "id": "CVE-2020-36331" }, { "date": "2022-01-27T08:46:00", "db": "JVNDB", "id": "JVNDB-2018-016579" }, { "date": "2022-12-09T00:00:00", "db": "CNNVD", "id": "CNNVD-202105-1382" }, { "date": "2023-01-09T16:41:59.350000", "db": "NVD", "id": "CVE-2020-36331" } ] }, "threat_type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/threat_type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "remote", "sources": [ { "db": "CNNVD", "id": "CNNVD-202105-1382" } ], "trust": 0.6 }, "title": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/title#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "libwebp\u00a0 Out-of-bounds read vulnerability", "sources": [ { "db": "JVNDB", "id": "JVNDB-2018-016579" } ], "trust": 0.8 }, "type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "buffer error", "sources": [ { "db": "CNNVD", "id": "CNNVD-202105-1382" } ], "trust": 0.6 } }
var-202104-1514
Vulnerability from variot
GNU Wget through 1.21.1 does not omit the Authorization header upon a redirect to a different origin, a related issue to CVE-2018-1000007. GNU Wget is free software developed by the GNU Project for downloading files over the Internet. It supports downloads over the three most common TCP/IP protocols: HTTP, HTTPS, and FTP. GNU Wget 1.21.1 and earlier versions contain a security vulnerability caused by failing to drop the Authorization header when following a redirect to a different origin, which can leak credentials to a third-party server
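The safe behavior the advisory describes can be sketched as follows: when an HTTP client follows a redirect, it should forward the Authorization header only if the redirect target has the same origin (scheme, host, port) as the original URL. This is an illustrative sketch, not wget's actual code; the function name `redirect_headers` is hypothetical.

```python
from urllib.parse import urlsplit


def redirect_headers(headers, original_url, redirect_url):
    """Return the headers that are safe to send when following a redirect.

    Drops the Authorization header when the redirect target is a
    different origin (scheme, host, port) -- the check that wget
    through 1.21.1 lacked per CVE-2021-31879. Hypothetical sketch,
    not wget's implementation.
    """
    orig, dest = urlsplit(original_url), urlsplit(redirect_url)
    same_origin = (
        (orig.scheme, orig.hostname, orig.port)
        == (dest.scheme, dest.hostname, dest.port)
    )
    safe = dict(headers)
    if not same_origin:
        # Credentials were issued for the original origin only;
        # forwarding them cross-origin would leak them.
        safe.pop("Authorization", None)
    return safe


# Cross-origin redirect: credentials must be stripped.
leaked = redirect_headers(
    {"Authorization": "Basic dXNlcjpwYXNz", "Accept": "*/*"},
    "https://a.example/file",
    "https://b.example/file",
)
# Same-origin redirect: credentials may be kept.
kept = redirect_headers(
    {"Authorization": "Basic dXNlcjpwYXNz"},
    "https://a.example/old",
    "https://a.example/new",
)
```

In the vulnerable versions, the equivalent of `same_origin` was never consulted for the Authorization header, so a server at `a.example` could redirect a credentialed download to an attacker-controlled host and receive the user's credentials.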
Show details on source website{ "@context": { "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#", "affected_products": { "@id": "https://www.variotdbs.pl/ref/affected_products" }, "configurations": { "@id": "https://www.variotdbs.pl/ref/configurations" }, "credits": { "@id": "https://www.variotdbs.pl/ref/credits" }, "cvss": { "@id": "https://www.variotdbs.pl/ref/cvss/" }, "description": { "@id": "https://www.variotdbs.pl/ref/description/" }, "exploit_availability": { "@id": "https://www.variotdbs.pl/ref/exploit_availability/" }, "external_ids": { "@id": "https://www.variotdbs.pl/ref/external_ids/" }, "iot": { "@id": "https://www.variotdbs.pl/ref/iot/" }, "iot_taxonomy": { "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/" }, "patch": { "@id": "https://www.variotdbs.pl/ref/patch/" }, "problemtype_data": { "@id": "https://www.variotdbs.pl/ref/problemtype_data/" }, "references": { "@id": "https://www.variotdbs.pl/ref/references/" }, "sources": { "@id": "https://www.variotdbs.pl/ref/sources/" }, "sources_release_date": { "@id": "https://www.variotdbs.pl/ref/sources_release_date/" }, "sources_update_date": { "@id": "https://www.variotdbs.pl/ref/sources_update_date/" }, "threat_type": { "@id": "https://www.variotdbs.pl/ref/threat_type/" }, "title": { "@id": "https://www.variotdbs.pl/ref/title/" }, "type": { "@id": "https://www.variotdbs.pl/ref/type/" } }, "@id": "https://www.variotdbs.pl/vuln/VAR-202104-1514", "affected_products": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/affected_products#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "model": "ontap select deploy administration utility", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "a250", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "brocade fabric operating system", "scope": 
"eq", "trust": 1.0, "vendor": "broadcom", "version": null }, { "model": "500f", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "wget", "scope": "lte", "trust": 1.0, "vendor": "gnu", "version": "1.21.1" }, { "model": "cloud backup", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null } ], "sources": [ { "db": "NVD", "id": "CVE-2021-31879" } ] }, "configurations": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/configurations#", "children": { "@container": "@list" }, "cpe_match": { "@container": "@list" }, "data": { "@container": "@list" }, "nodes": { "@container": "@list" } }, "data": [ { "CVE_data_version": "4.0", "nodes": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:gnu:wget:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "1.21.1", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:broadcom:brocade_fabric_operating_system_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:netapp:cloud_backup:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:ontap_select_deploy_administration_utility:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:a250_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:a250:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:500f_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:500f:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } 
], "operator": "OR" } ], "cpe_match": [], "operator": "AND" } ] } ], "sources": [ { "db": "NVD", "id": "CVE-2021-31879" } ] }, "cve": "CVE-2021-31879", "cvss": { "@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2" }, "cvssV3": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, "severity": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": "https://www.variotdbs.pl/ref/cvss/severity" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "cvssV2": [ { "acInsufInfo": false, "accessComplexity": "MEDIUM", "accessVector": "NETWORK", "authentication": "NONE", "author": "NVD", "availabilityImpact": "NONE", "baseScore": 5.8, "confidentialityImpact": "PARTIAL", "exploitabilityScore": 8.6, "impactScore": 4.9, "integrityImpact": "PARTIAL", "obtainAllPrivilege": false, "obtainOtherPrivilege": false, "obtainUserPrivilege": false, "severity": "MEDIUM", "trust": 1.0, "userInteractionRequired": true, "vectorString": "AV:N/AC:M/Au:N/C:P/I:P/A:N", "version": "2.0" }, { "accessComplexity": "MEDIUM", "accessVector": "NETWORK", "authentication": "NONE", "author": "VULHUB", "availabilityImpact": "NONE", "baseScore": 5.8, "confidentialityImpact": "PARTIAL", "exploitabilityScore": 8.6, "id": "VHN-391716", "impactScore": 4.9, "integrityImpact": "PARTIAL", "severity": "MEDIUM", "trust": 0.1, "vectorString": "AV:N/AC:M/AU:N/C:P/I:P/A:N", "version": "2.0" }, { "acInsufInfo": null, "accessComplexity": "MEDIUM", "accessVector": "NETWORK", "authentication": "NONE", "author": "VULMON", "availabilityImpact": "NONE", "baseScore": 5.8, "confidentialityImpact": "PARTIAL", "exploitabilityScore": 8.6, "id": "CVE-2021-31879", 
"impactScore": 4.9, "integrityImpact": "PARTIAL", "obtainAllPrivilege": null, "obtainOtherPrivilege": null, "obtainUserPrivilege": null, "severity": "MEDIUM", "trust": 0.1, "userInteractionRequired": null, "vectorString": "AV:N/AC:M/Au:N/C:P/I:P/A:N", "version": "2.0" } ], "cvssV3": [ { "attackComplexity": "LOW", "attackVector": "NETWORK", "author": "NVD", "availabilityImpact": "NONE", "baseScore": 6.1, "baseSeverity": "MEDIUM", "confidentialityImpact": "LOW", "exploitabilityScore": 2.8, "impactScore": 2.7, "integrityImpact": "LOW", "privilegesRequired": "NONE", "scope": "CHANGED", "trust": 1.0, "userInteraction": "REQUIRED", "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:C/C:L/I:L/A:N", "version": "3.1" } ], "severity": [ { "author": "NVD", "id": "CVE-2021-31879", "trust": 1.0, "value": "MEDIUM" }, { "author": "CNNVD", "id": "CNNVD-202104-2167", "trust": 0.6, "value": "MEDIUM" }, { "author": "VULHUB", "id": "VHN-391716", "trust": 0.1, "value": "MEDIUM" }, { "author": "VULMON", "id": "CVE-2021-31879", "trust": 0.1, "value": "MEDIUM" } ] } ], "sources": [ { "db": "VULHUB", "id": "VHN-391716" }, { "db": "VULMON", "id": "CVE-2021-31879" }, { "db": "NVD", "id": "CVE-2021-31879" }, { "db": "CNNVD", "id": "CNNVD-202104-2167" } ] }, "description": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "GNU Wget through 1.21.1 does not omit the Authorization header upon a redirect to a different origin, a related issue to CVE-2018-1000007. GNU Wget is a set of free software developed by the GNU Project (Gnu Project Development) for downloading on the Internet. It supports downloading through the three most common TCP/IP protocols: HTTP, HTTPS and FTP. There is a security vulnerability in GNU Wget 1.21.1 and earlier versions. 
The vulnerability is caused by not ignoring Authorization when redirecting to a different source", "sources": [ { "db": "NVD", "id": "CVE-2021-31879" }, { "db": "VULHUB", "id": "VHN-391716" }, { "db": "VULMON", "id": "CVE-2021-31879" } ], "trust": 1.08 }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "NVD", "id": "CVE-2021-31879", "trust": 1.8 }, { "db": "CNNVD", "id": "CNNVD-202104-2167", "trust": 0.6 }, { "db": "VULHUB", "id": "VHN-391716", "trust": 0.1 }, { "db": "VULMON", "id": "CVE-2021-31879", "trust": 0.1 } ], "sources": [ { "db": "VULHUB", "id": "VHN-391716" }, { "db": "VULMON", "id": "CVE-2021-31879" }, { "db": "NVD", "id": "CVE-2021-31879" }, { "db": "CNNVD", "id": "CNNVD-202104-2167" } ] }, "id": "VAR-202104-1514", "iot": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": true, "sources": [ { "db": "VULHUB", "id": "VHN-391716" } ], "trust": 0.01 }, "last_update_date": "2023-12-18T12:49:13.153000Z", "patch": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/patch#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "title": "GNU Wget Enter the fix for the verification error vulnerability", "trust": 0.6, "url": "http://www.cnnvd.org.cn/web/xxk/bdxqbyid.tag?id=149520" }, { "title": "Debian CVElist Bug Report Logs: CVE-2021-31879", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=debian_cvelist_bugreportlogs\u0026qid=ba1029a7c2538da0d8a896c8ad6f31c8" }, { "title": "Arch Linux Issues: ", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_issues\u0026qid=cve-2021-31879 log" }, { "title": "Amazon 
Linux 2022: ALAS2022-2022-134", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2022\u0026qid=alas2022-2022-134" }, { "title": "KCC", "trust": 0.1, "url": "https://github.com/dgardella/kcc " }, { "title": "log4jnotes", "trust": 0.1, "url": "https://github.com/kenlavbah/log4jnotes " }, { "title": "devops-demo", "trust": 0.1, "url": "https://github.com/epequeno/devops-demo " } ], "sources": [ { "db": "VULMON", "id": "CVE-2021-31879" }, { "db": "CNNVD", "id": "CNNVD-202104-2167" } ] }, "problemtype_data": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "problemtype": "CWE-601", "trust": 1.1 } ], "sources": [ { "db": "VULHUB", "id": "VHN-391716" }, { "db": "NVD", "id": "CVE-2021-31879" } ] }, "references": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/references#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 1.8, "url": "https://mail.gnu.org/archive/html/bug-wget/2021-02/msg00002.html" }, { "trust": 1.2, "url": "https://security.netapp.com/advisory/ntap-20210618-0002/" }, { "trust": 0.6, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-31879" }, { "trust": 0.1, "url": "https://cwe.mitre.org/data/definitions/601.html" }, { "trust": 0.1, "url": "https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=988209" }, { "trust": 0.1, "url": "https://nvd.nist.gov" }, { "trust": 0.1, "url": "https://alas.aws.amazon.com/al2022/alas-2022-134.html" } ], "sources": [ { "db": "VULHUB", "id": "VHN-391716" }, { "db": "VULMON", "id": "CVE-2021-31879" }, { "db": "NVD", "id": "CVE-2021-31879" }, { "db": "CNNVD", "id": "CNNVD-202104-2167" } ] }, "sources": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "VULHUB", "id": 
"VHN-391716" }, { "db": "VULMON", "id": "CVE-2021-31879" }, { "db": "NVD", "id": "CVE-2021-31879" }, { "db": "CNNVD", "id": "CNNVD-202104-2167" } ] }, "sources_release_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2021-04-29T00:00:00", "db": "VULHUB", "id": "VHN-391716" }, { "date": "2021-04-29T00:00:00", "db": "VULMON", "id": "CVE-2021-31879" }, { "date": "2021-04-29T05:15:08.707000", "db": "NVD", "id": "CVE-2021-31879" }, { "date": "2021-04-29T00:00:00", "db": "CNNVD", "id": "CNNVD-202104-2167" } ] }, "sources_update_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2022-05-13T00:00:00", "db": "VULHUB", "id": "VHN-391716" }, { "date": "2022-05-13T00:00:00", "db": "VULMON", "id": "CVE-2021-31879" }, { "date": "2022-05-13T20:52:24.793000", "db": "NVD", "id": "CVE-2021-31879" }, { "date": "2021-05-07T00:00:00", "db": "CNNVD", "id": "CNNVD-202104-2167" } ] }, "threat_type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/threat_type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "remote", "sources": [ { "db": "CNNVD", "id": "CNNVD-202104-2167" } ], "trust": 0.6 }, "title": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/title#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "GNU Wget Input validation error vulnerability", "sources": [ { "db": "CNNVD", "id": "CNNVD-202104-2167" } ], "trust": 0.6 }, "type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "input validation error", "sources": [ { "db": "CNNVD", "id": "CNNVD-202104-2167" } ], "trust": 0.6 } }
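The wget flaw recorded above (CVE-2021-31879) comes from retaining the Authorization header when following a redirect to a different origin, which leaks credentials to the redirect target. As an illustrative sketch only (this is not wget's actual code, and the function name is hypothetical), a client can implement the safe behavior by stripping credential-bearing headers whenever the redirect crosses origins:

```python
from urllib.parse import urlparse

def sanitize_redirect_headers(headers, original_url, redirect_url):
    """Drop credential-bearing headers when a redirect crosses origins.

    Illustrates the class of bug in CVE-2021-31879: wget through 1.21.1
    kept the Authorization header on a cross-origin redirect.
    """
    src, dst = urlparse(original_url), urlparse(redirect_url)
    same_origin = (src.scheme, src.hostname, src.port) == \
                  (dst.scheme, dst.hostname, dst.port)
    if same_origin:
        return dict(headers)
    # Cross-origin redirect: remove headers that carry credentials.
    sensitive = ("authorization", "cookie", "proxy-authorization")
    return {k: v for k, v in headers.items() if k.lower() not in sensitive}
```

A real client would apply this on every hop of a redirect chain, since an attacker-controlled intermediate can bounce the request through an arbitrary origin.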
var-202205-0743
Vulnerability from variot
The LibTIFF master branch has an out-of-bounds read in LZWDecode in libtiff/tif_lzw.c:619, allowing attackers to cause a denial of service via a crafted TIFF file. For users who compile libtiff from source, the fix is available in commit b4e79bfa. LibTIFF contains an out-of-bounds read vulnerability that may result in a denial-of-service (DoS) condition. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - Gentoo Linux Security Advisory GLSA 202210-10
https://security.gentoo.org/
Severity: Low Title: LibTIFF: Multiple Vulnerabilities Date: October 31, 2022 Bugs: #830981, #837560 ID: 202210-10
Synopsis
Multiple vulnerabilities have been found in LibTIFF, the worst of which could result in denial of service.
Background
LibTIFF provides support for reading and manipulating TIFF (Tagged Image File Format) images.
Affected packages
-------------------------------------------------------------------
Package / Vulnerable / Unaffected
-------------------------------------------------------------------
1 media-libs/tiff < 4.4.0 >= 4.4.0
Description
Multiple vulnerabilities have been discovered in LibTIFF. Please review the CVE identifiers referenced below for details.
Impact
Please review the referenced CVE identifiers for details.
Workaround
There is no known workaround at this time.
Resolution
All LibTIFF users should upgrade to the latest version:
# emerge --sync # emerge --ask --oneshot --verbose ">=media-libs/tiff-4.4.0"
References
[ 1 ] CVE-2022-0561 https://nvd.nist.gov/vuln/detail/CVE-2022-0561 [ 2 ] CVE-2022-0562 https://nvd.nist.gov/vuln/detail/CVE-2022-0562 [ 3 ] CVE-2022-0865 https://nvd.nist.gov/vuln/detail/CVE-2022-0865 [ 4 ] CVE-2022-0891 https://nvd.nist.gov/vuln/detail/CVE-2022-0891 [ 5 ] CVE-2022-0907 https://nvd.nist.gov/vuln/detail/CVE-2022-0907 [ 6 ] CVE-2022-0908 https://nvd.nist.gov/vuln/detail/CVE-2022-0908 [ 7 ] CVE-2022-0909 https://nvd.nist.gov/vuln/detail/CVE-2022-0909 [ 8 ] CVE-2022-0924 https://nvd.nist.gov/vuln/detail/CVE-2022-0924 [ 9 ] CVE-2022-1056 https://nvd.nist.gov/vuln/detail/CVE-2022-1056 [ 10 ] CVE-2022-1210 https://nvd.nist.gov/vuln/detail/CVE-2022-1210 [ 11 ] CVE-2022-1354 https://nvd.nist.gov/vuln/detail/CVE-2022-1354 [ 12 ] CVE-2022-1355 https://nvd.nist.gov/vuln/detail/CVE-2022-1355 [ 13 ] CVE-2022-1622 https://nvd.nist.gov/vuln/detail/CVE-2022-1622 [ 14 ] CVE-2022-1623 https://nvd.nist.gov/vuln/detail/CVE-2022-1623 [ 15 ] CVE-2022-22844 https://nvd.nist.gov/vuln/detail/CVE-2022-22844
Availability
This GLSA and any updates to it are available for viewing at the Gentoo Security Website:
https://security.gentoo.org/glsa/202210-10
Concerns?
Security is a primary focus of Gentoo Linux and ensuring the confidentiality and security of our users' machines is of utmost importance to us. Any security concerns should be addressed to security@gentoo.org or alternatively, you may file a bug at https://bugs.gentoo.org.
License
Copyright 2022 Gentoo Foundation, Inc; referenced text belongs to its owner(s).
The contents of this document are licensed under the Creative Commons - Attribution / Share Alike license.
https://creativecommons.org/licenses/by-sa/2.5 . -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
APPLE-SA-2022-10-27-3 Additional information for APPLE-SA-2022-09-12-1 iOS 16
iOS 16 addresses the following issues. Information about the security content is also available at https://support.apple.com/HT213446.
Accelerate Framework Available for: iPhone 8 and later Impact: Processing a maliciously crafted image may lead to arbitrary code execution Description: A memory consumption issue was addressed with improved memory handling. CVE-2022-42795: ryuzaki Entry added October 27, 2022
AppleAVD Available for: iPhone 8 and later Impact: An app may be able to cause a denial-of-service Description: A memory corruption issue was addressed with improved state management. CVE-2022-32827: Antonio Zekic (@antoniozekic), Natalie Silvanovich of Google Project Zero, and an anonymous researcher Entry added October 27, 2022
AppleAVD Available for: iPhone 8 and later Impact: An app may be able to execute arbitrary code with kernel privileges Description: This issue was addressed with improved checks. CVE-2022-32907: Natalie Silvanovich of Google Project Zero, Antonio Zekic (@antoniozekic) and John Aakerblom (@jaakerblom), ABC Research s.r.o, Yinyi Wu, Tommaso Bianco (@cutesmilee__) Entry added October 27, 2022
Apple Neural Engine Available for: iPhone 8 and later Impact: An app may be able to leak sensitive kernel state Description: The issue was addressed with improved memory handling. CVE-2022-32858: Mohamed Ghannam (@_simo36) Entry added October 27, 2022
Apple Neural Engine Available for: iPhone 8 and later Impact: An app may be able to execute arbitrary code with kernel privileges Description: The issue was addressed with improved memory handling. CVE-2022-32898: Mohamed Ghannam (@_simo36) CVE-2022-32899: Mohamed Ghannam (@_simo36) CVE-2022-32889: Mohamed Ghannam (@_simo36) Entry added October 27, 2022
Apple TV Available for: iPhone 8 and later Impact: An app may be able to access user-sensitive data Description: The issue was addressed with improved handling of caches. CVE-2022-32909: Csaba Fitzl (@theevilbit) of Offensive Security Entry added October 27, 2022
Contacts Available for: iPhone 8 and later Impact: An app may be able to bypass Privacy preferences Description: This issue was addressed with improved checks. CVE-2022-32854: Holger Fuhrmannek of Deutsche Telekom Security
Crash Reporter Available for: iPhone 8 and later Impact: A user with physical access to an iOS device may be able to read past diagnostic logs Description: This issue was addressed with improved data protection. CVE-2022-32867: Kshitij Kumar and Jai Musunuri of Crowdstrike Entry added October 27, 2022
DriverKit Available for: iPhone 8 and later Impact: An app may be able to execute arbitrary code with kernel privileges Description: The issue was addressed with improved memory handling. CVE-2022-32865: Linus Henze of Pinauten GmbH (pinauten.de) Entry added October 27, 2022
Exchange Available for: iPhone 8 and later Impact: A user in a privileged network position may be able to intercept mail credentials Description: A logic issue was addressed with improved restrictions. CVE-2022-32928: an anonymous researcher Entry added October 27, 2022
GPU Drivers Available for: iPhone 8 and later Impact: An application may be able to execute arbitrary code with kernel privileges Description: A memory corruption issue was addressed with improved state management. CVE-2022-26744: an anonymous researcher Entry added October 27, 2022
GPU Drivers Available for: iPhone 8 and later Impact: An app may be able to execute arbitrary code with kernel privileges Description: A use after free issue was addressed with improved memory management. CVE-2022-32903: an anonymous researcher Entry added October 27, 2022
ImageIO Available for: iPhone 8 and later Impact: Processing an image may lead to a denial-of-service Description: A denial-of-service issue was addressed with improved validation. CVE-2022-1622 Entry added October 27, 2022
Image Processing Available for: iPhone 8 and later Impact: A sandboxed app may be able to determine which app is currently using the camera Description: The issue was addressed with additional restrictions on the observability of app states. CVE-2022-32913: Yiğit Can YILMAZ (@yilmazcanyigit) Entry added October 27, 2022
IOGPUFamily Available for: iPhone 8 and later Impact: An app may be able to execute arbitrary code with kernel privileges Description: The issue was addressed with improved memory handling. CVE-2022-32887: an anonymous researcher Entry added October 27, 2022
Kernel Available for: iPhone 8 and later Impact: An app may be able to execute arbitrary code with kernel privileges Description: A use after free issue was addressed with improved memory management. CVE-2022-32914: Zweig of Kunlun Lab Entry added October 27, 2022
Kernel Available for: iPhone 8 and later Impact: An app may be able to execute arbitrary code with kernel privileges Description: The issue was addressed with improved memory handling. CVE-2022-32866: Linus Henze of Pinauten GmbH (pinauten.de) CVE-2022-32911: Zweig of Kunlun Lab Entry updated October 27, 2022
Kernel Available for: iPhone 8 and later Impact: An app may be able to disclose kernel memory Description: The issue was addressed with improved memory handling. CVE-2022-32864: Linus Henze of Pinauten GmbH (pinauten.de)
Kernel Available for: iPhone 8 and later Impact: An application may be able to execute arbitrary code with kernel privileges. Description: The issue was addressed with improved bounds checks. CVE-2022-32917: an anonymous researcher
Maps Available for: iPhone 8 and later Impact: An app may be able to read sensitive location information Description: A logic issue was addressed with improved restrictions. CVE-2022-32883: Ron Masas, breakpointhq.com
MediaLibrary Available for: iPhone 8 and later Impact: A user may be able to elevate privileges Description: A memory corruption issue was addressed with improved input validation. CVE-2022-32908: an anonymous researcher
Notifications Available for: iPhone 8 and later Impact: A user with physical access to a device may be able to access contacts from the lock screen Description: A logic issue was addressed with improved state management. CVE-2022-32879: Ubeydullah Sümer Entry added October 27, 2022
Photos Available for: iPhone 8 and later Impact: An app may be able to bypass Privacy preferences Description: This issue was addressed with improved data protection. CVE-2022-32918: an anonymous researcher, Jugal Goradia of Aastha Technologies, Srijan Shivam Mishra of The Hack Report, Evan Ricafort (evanricafort.com) of Invalid Web Security, Amod Raghunath Patwardhan of Pune, India, Ashwani Rajput of Nagarro Software Pvt. Ltd Entry added October 27, 2022
Safari Available for: iPhone 8 and later Impact: Visiting a malicious website may lead to address bar spoofing Description: This issue was addressed with improved checks. CVE-2022-32795: Narendra Bhati of Suma Soft Pvt. Ltd. Pune (India) @imnarendrabhati
Safari Extensions Available for: iPhone 8 and later Impact: A website may be able to track users through Safari web extensions Description: A logic issue was addressed with improved state management. WebKit Bugzilla: 242278 CVE-2022-32868: Michael
Sandbox Available for: iPhone 8 and later Impact: An app may be able to modify protected parts of the file system Description: A logic issue was addressed with improved restrictions. CVE-2022-32881: Csaba Fitzl (@theevilbit) of Offensive Security Entry added October 27, 2022
Security Available for: iPhone 8 and later Impact: An app may be able to bypass code signing checks Description: An issue in code signature validation was addressed with improved checks. CVE-2022-42793: Linus Henze of Pinauten GmbH (pinauten.de) Entry added October 27, 2022
Shortcuts Available for: iPhone 8 and later Impact: A person with physical access to an iOS device may be able to access photos from the lock screen Description: A logic issue was addressed with improved restrictions. CVE-2022-32872: Elite Tech Guru
Sidecar Available for: iPhone 8 and later Impact: A user may be able to view restricted content from the lock screen Description: A logic issue was addressed with improved state management. CVE-2022-42790: Om kothawade of Zaprico Digital Entry added October 27, 2022
Siri Available for: iPhone 8 and later Impact: A user with physical access to a device may be able to use Siri to obtain some call history information Description: A logic issue was addressed with improved state management. CVE-2022-32870: Andrew Goldberg of The McCombs School of Business, The University of Texas at Austin (linkedin.com/andrew-goldberg-/) Entry added October 27, 2022
SQLite Available for: iPhone 8 and later Impact: A remote user may be able to cause a denial-of-service Description: This issue was addressed with improved checks. CVE-2021-36690 Entry added October 27, 2022
Time Zone Available for: iPhone 8 and later Impact: Deleted contacts may still appear in spotlight search results Description: A logic issue was addressed with improved state management. CVE-2022-32859 Entry added October 27, 2022
Watch app Available for: iPhone 8 and later Impact: An app may be able to read a persistent device identifier Description: This issue was addressed with improved entitlements. CVE-2022-32835: Guilherme Rambo of Best Buddy Apps (rambo.codes) Entry added October 27, 2022
Weather Available for: iPhone 8 and later Impact: An app may be able to read sensitive location information Description: A logic issue was addressed with improved state management. CVE-2022-32875: an anonymous researcher Entry added October 27, 2022
WebKit Available for: iPhone 8 and later Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: An out-of-bounds write issue was addressed with improved bounds checking. WebKit Bugzilla: 242047 CVE-2022-32888: P1umer (@p1umer) Entry added October 27, 2022
WebKit Available for: iPhone 8 and later Impact: Visiting a website that frames malicious content may lead to UI spoofing Description: The issue was addressed with improved UI handling. WebKit Bugzilla: 243236 CVE-2022-32891: @real_as3617, and an anonymous researcher Entry added October 27, 2022
WebKit Available for: iPhone 8 and later Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: A buffer overflow issue was addressed with improved memory handling. WebKit Bugzilla: 241969 CVE-2022-32886: P1umer, afang5472, xmzyshypnc
WebKit Available for: iPhone 8 and later Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: An out-of-bounds read was addressed with improved bounds checking. WebKit Bugzilla: 242762 CVE-2022-32912: Jeonghoon Shin (@singi21a) at Theori working with Trend Micro Zero Day Initiative
WebKit Sandboxing Available for: iPhone 8 and later Impact: A sandboxed process may be able to circumvent sandbox restrictions Description: An access issue was addressed with improvements to the sandbox. WebKit Bugzilla: 243181 CVE-2022-32892: @18楼梦想改造家 and @jq0904 of DBAppSecurity's WeBin lab Entry added October 27, 2022
Wi-Fi Available for: iPhone 8 and later Impact: An app may be able to cause unexpected system termination or write kernel memory Description: An out-of-bounds write issue was addressed with improved bounds checking. CVE-2022-32925: Wang Yu of Cyberserval Entry added October 27, 2022
Additional recognition
AirDrop We would like to acknowledge Alexander Heinrich, Milan Stute, and Christian Weinert of Technical University of Darmstadt for their assistance. Entry added October 27, 2022
AppleCredentialManager We would like to acknowledge @jonathandata1 for their assistance. Entry added October 27, 2022
Calendar UI We would like to acknowledge Abhay Kailasia (@abhay_kailasia) of Lakshmi Narain College Of Technology Bhopal for their assistance. Entry added October 27, 2022
FaceTime We would like to acknowledge an anonymous researcher for their assistance. Entry added October 27, 2022
Find My We would like to acknowledge an anonymous researcher for their assistance. Entry added October 27, 2022
Game Center We would like to acknowledge Joshua Jones for their assistance.
iCloud We would like to acknowledge Bülent Aytulun, and an anonymous researcher for their assistance. Entry added October 27, 2022
Identity Services We would like to acknowledge Joshua Jones for their assistance.
Kernel We would like to acknowledge Pan ZhenPeng(@Peterpan0927), Tingting Yin of Tsinghua University, and Min Zheng of Ant Group, and an anonymous researcher for their assistance. Entry added October 27, 2022
Mail We would like to acknowledge an anonymous researcher for their assistance. Entry added October 27, 2022
Notes We would like to acknowledge Edward Riley of Iron Cloud Limited (ironclouduk.com) for their assistance. Entry added October 27, 2022
Photo Booth We would like to acknowledge Prashanth Kannan of Dremio for their assistance. Entry added October 27, 2022
Sandbox We would like to acknowledge Csaba Fitzl (@theevilbit) of Offensive Security for their assistance. Entry added October 27, 2022
Shortcuts We would like to acknowledge Shay Dror for their assistance. Entry added October 27, 2022
SOS We would like to acknowledge Xianfeng Lu and Lei Ai of OPPO Amber Security Lab for their assistance. Entry added October 27, 2022
UIKit We would like to acknowledge Aleczander Ewing, Simon de Vegt, and an anonymous researcher for their assistance. Entry added October 27, 2022
WebKit We would like to acknowledge an anonymous researcher for their assistance. Entry added October 27, 2022
WebRTC We would like to acknowledge an anonymous researcher for their assistance. Entry added October 27, 2022
This update is available through iTunes and Software Update on your iOS device, and will not appear in your computer's Software Update application, or in the Apple Downloads site. Make sure you have an Internet connection and have installed the latest version of iTunes from https://www.apple.com/itunes/ iTunes and Software Update on the device will automatically check Apple's update server on its weekly schedule. When an update is detected, it is downloaded and the option to be installed is presented to the user when the iOS device is docked. We recommend applying the update immediately if possible. Selecting Don't Install will present the option the next time you connect your iOS device. The automatic update process may take up to a week depending on the day that iTunes or the device checks for updates. You may manually obtain the update via the Check for Updates button within iTunes, or the Software Update on your device. To check that the iPhone, iPod touch, or iPad has been updated: * Navigate to Settings * Select General * Select About. The version after applying this update will be "iOS 16". All information is also posted on the Apple Security Updates web site: https://support.apple.com/en-us/HT201222.
This message is signed with Apple's Product Security PGP key, and details are available at: https://www.apple.com/support/security/pgp/ -----BEGIN PGP SIGNATURE-----
iQIzBAEBCAAdFiEEBP+4DupqR5Sgt1DB4RjMIDkeNxkFAmNbKpoACgkQ4RjMIDke NxkQ8w/9FMTP02t/AKe0nXZ44UhfMLy7Sx88gpWRHaWKZtdjPADC2kxx1RbVSvrC C5nB6bw2zGppE1V284QitcNG9WrGGTINK6Knshv0PCkWLZnh1sYqX2bYbKmY6Ol7 K+lRk6zicF3k7KcCZRly6UuJ8RvfPpa2wKuVVv5FBPM8bPRuovVRiRxGUWuO7emM ZXyp4n5u+GldW8n8hRK/jxwGGwrKqFmXL9Ecd79I2/4uYmEx6tmoAYuEZs26BfjK Etd1F54PlewmyUKvVlWiwLhpVgygRqkmvW+jKwX46gBzwHFK88B9IV6wf8ZD5JaU Ur+nqEjiqmbYdcfV8pu64eRNnlTiCmD/ehJg8sNG38m9SeqOw3ZNVaQ8+sgoXwsp rpsPDPsXmPqqadxERe7LwLXSm4KtTARdGbEffHAA5eqc+U0ja2u3piqk8ZKTrC6K tORrDjSkKx9AILbds99Wzbnb1rfF/09N1+LPQT7Ac8PCA/kE+XQ+nmSDoInh8PTU rFt3ZW9Ud0q6Y2Ix11WYrb6wOqs/vafaW5zXTnNfgKNvw2zO/9yKYhaqIjlGtLSJ Og/O1sdcPMPisBGQynF7Dj42riQD5RQGbB/GmfgRqUHFXwcWJxFRblkwUxbjuEaR nYRj90cDbUE2wmsE4y4uFfCVpKTQCQCKXuSuBkOQje0KjTDHWac= =I+iq -----END PGP SIGNATURE-----
. CVE-2022-42789: Koh M. Nakagawa of FFRI Security, Inc. Apple is aware of a report that this issue may have been actively exploited.
Instructions on how to update your Apple Watch software are available at https://support.apple.com/kb/HT204641 To check the version on your Apple Watch, open the Apple Watch app on your iPhone and select "My Watch > General > About". Alternatively, on your watch, select "My Watch > General > About". -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
Debian Security Advisory DSA-5333-1 security@debian.org https://www.debian.org/security/ Aron Xu January 29, 2023 https://www.debian.org/security/faq
Package : tiff CVE ID : CVE-2022-1354 CVE-2022-1355 CVE-2022-1622 CVE-2022-1623 CVE-2022-2056 CVE-2022-2057 CVE-2022-2058 CVE-2022-2519 CVE-2022-2520 CVE-2022-2521 CVE-2022-2867 CVE-2022-2868 CVE-2022-2869 CVE-2022-2953 CVE-2022-3570 CVE-2022-3597 CVE-2022-3599 CVE-2022-3627 CVE-2022-3636 CVE-2022-34526 CVE-2022-48281 Debian Bug : 1011160 1014494 1022555 1024737 1029653
Several buffer overflow, divide by zero or out of bounds read/write vulnerabilities were discovered in tiff, the Tag Image File Format (TIFF) library and tools, which may cause denial of service when processing a crafted TIFF image.
For the stable distribution (bullseye), these problems have been fixed in version 4.2.0-1+deb11u3.
We recommend that you upgrade your tiff packages.
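The libtiff flaw tracked in this record (CVE-2022-1622) is an out-of-bounds read in the LZW decoder: a code taken from untrusted input is used to index the decoder's string table without validation. The toy decoder below is a sketch under that assumption, not libtiff's actual code; it shows the guard a vulnerable decoder omits:

```python
def lzw_decode(codes):
    """Toy LZW decoder illustrating the class of bug fixed in libtiff
    commit b4e79bfa: each input code indexes the string table, so a
    crafted code beyond the table's end would read out of bounds
    unless it is validated first.
    """
    table = [bytes([i]) for i in range(256)]  # initial single-byte entries
    out = bytearray()
    prev = None
    for code in codes:
        if code > len(table):  # the bounds check a vulnerable decoder omits
            raise ValueError(f"corrupt stream: code {code} out of range")
        if code == len(table):  # KwKwK case: entry not yet in the table
            entry = prev + prev[:1]
        else:
            entry = table[code]
        out += entry
        if prev is not None:
            table.append(prev + entry[:1])  # grow the dictionary
        prev = entry
    return bytes(out)
```

With the guard in place, a crafted stream raises an error instead of reading past the table; removing it is exactly the failure mode a malicious TIFF exploits.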
Show details on source website{ "@context": { "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#", "affected_products": { "@id": "https://www.variotdbs.pl/ref/affected_products" }, "configurations": { "@id": "https://www.variotdbs.pl/ref/configurations" }, "credits": { "@id": "https://www.variotdbs.pl/ref/credits" }, "cvss": { "@id": "https://www.variotdbs.pl/ref/cvss/" }, "description": { "@id": "https://www.variotdbs.pl/ref/description/" }, "exploit_availability": { "@id": "https://www.variotdbs.pl/ref/exploit_availability/" }, "external_ids": { "@id": "https://www.variotdbs.pl/ref/external_ids/" }, "iot": { "@id": "https://www.variotdbs.pl/ref/iot/" }, "iot_taxonomy": { "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/" }, "patch": { "@id": "https://www.variotdbs.pl/ref/patch/" }, "problemtype_data": { "@id": "https://www.variotdbs.pl/ref/problemtype_data/" }, "references": { "@id": "https://www.variotdbs.pl/ref/references/" }, "sources": { "@id": "https://www.variotdbs.pl/ref/sources/" }, "sources_release_date": { "@id": "https://www.variotdbs.pl/ref/sources_release_date/" }, "sources_update_date": { "@id": "https://www.variotdbs.pl/ref/sources_update_date/" }, "threat_type": { "@id": "https://www.variotdbs.pl/ref/threat_type/" }, "title": { "@id": "https://www.variotdbs.pl/ref/title/" }, "type": { "@id": "https://www.variotdbs.pl/ref/type/" } }, "@id": "https://www.variotdbs.pl/vuln/VAR-202205-0743", "affected_products": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/affected_products#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "model": "ontap select deploy administration utility", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "macos", "scope": "lt", "trust": 1.0, "vendor": "apple", "version": "12.6" }, { "model": "tvos", "scope": "lt", "trust": 1.0, 
"vendor": "apple", "version": "16.0" }, { "model": "macos", "scope": "lt", "trust": 1.0, "vendor": "apple", "version": "11.7" }, { "model": "watchos", "scope": "lt", "trust": 1.0, "vendor": "apple", "version": "9.0" }, { "model": "fedora", "scope": "eq", "trust": 1.0, "vendor": "fedoraproject", "version": "36" }, { "model": "iphone os", "scope": "lt", "trust": 1.0, "vendor": "apple", "version": "16.0" }, { "model": "libtiff", "scope": "eq", "trust": 1.0, "vendor": "libtiff", "version": "4.3.0" }, { "model": "fedora", "scope": "eq", "trust": 1.0, "vendor": "fedoraproject", "version": "35" }, { "model": "macos", "scope": "gte", "trust": 1.0, "vendor": "apple", "version": "11.0" }, { "model": "macos", "scope": "gte", "trust": 1.0, "vendor": "apple", "version": "12.0" }, { "model": "ios", "scope": null, "trust": 0.8, "vendor": "\u30a2\u30c3\u30d7\u30eb", "version": null }, { "model": "watchos", "scope": null, "trust": 0.8, "vendor": "\u30a2\u30c3\u30d7\u30eb", "version": null }, { "model": "ontap select deploy administration utility", "scope": null, "trust": 0.8, "vendor": "netapp", "version": null }, { "model": "tvos", "scope": null, "trust": 0.8, "vendor": "\u30a2\u30c3\u30d7\u30eb", "version": null }, { "model": "libtiff", "scope": null, "trust": 0.8, "vendor": "libtiff", "version": null }, { "model": "macos", "scope": null, "trust": 0.8, "vendor": "\u30a2\u30c3\u30d7\u30eb", "version": null }, { "model": "fedora", "scope": null, "trust": 0.8, "vendor": "fedora", "version": null } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2022-011453" }, { "db": "NVD", "id": "CVE-2022-1622" } ] }, "configurations": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/configurations#", "children": { "@container": "@list" }, "cpe_match": { "@container": "@list" }, "data": { "@container": "@list" }, "nodes": { "@container": "@list" } }, "data": [ { "CVE_data_version": "4.0", "nodes": [ { "children": [], "cpe_match": [ { "cpe23Uri": 
"cpe:2.3:a:libtiff:libtiff:4.3.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:35:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:36:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:netapp:ontap_select_deploy_administration_utility:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:apple:iphone_os:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "16.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:apple:macos:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "11.7", "versionStartIncluding": "11.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:apple:macos:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "12.6", "versionStartIncluding": "12.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:apple:watchos:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "9.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:apple:tvos:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "16.0", "vulnerable": true } ], "operator": "OR" } ] } ], "sources": [ { "db": "NVD", "id": "CVE-2022-1622" } ] }, "credits": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/credits#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Apple", "sources": [ { "db": "PACKETSTORM", "id": "169559" }, { "db": "PACKETSTORM", "id": "169585" }, { "db": "PACKETSTORM", "id": "169576" }, { "db": "PACKETSTORM", "id": "169598" }, { "db": "PACKETSTORM", "id": "169589" } ], "trust": 0.5 }, "cve": "CVE-2022-1622", "cvss": { "@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#" }, "@id": 
"https://www.variotdbs.pl/ref/cvss/cvssV2" }, "cvssV3": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, "severity": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": "https://www.variotdbs.pl/ref/cvss/severity" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "cvssV2": [ { "acInsufInfo": false, "accessComplexity": "MEDIUM", "accessVector": "NETWORK", "authentication": "NONE", "author": "NVD", "availabilityImpact": "PARTIAL", "baseScore": 4.3, "confidentialityImpact": "NONE", "exploitabilityScore": 8.6, "impactScore": 2.9, "integrityImpact": "NONE", "obtainAllPrivilege": false, "obtainOtherPrivilege": false, "obtainUserPrivilege": false, "severity": "MEDIUM", "trust": 1.0, "userInteractionRequired": true, "vectorString": "AV:N/AC:M/Au:N/C:N/I:N/A:P", "version": "2.0" }, { "acInsufInfo": null, "accessComplexity": "Medium", "accessVector": "Network", "authentication": "None", "author": "NVD", "availabilityImpact": "Partial", "baseScore": 4.3, "confidentialityImpact": "None", "exploitabilityScore": null, "id": "CVE-2022-1622", "impactScore": null, "integrityImpact": "None", "obtainAllPrivilege": null, "obtainOtherPrivilege": null, "obtainUserPrivilege": null, "severity": "Medium", "trust": 0.9, "userInteractionRequired": null, "vectorString": "AV:N/AC:M/Au:N/C:N/I:N/A:P", "version": "2.0" }, { "accessComplexity": "MEDIUM", "accessVector": "NETWORK", "authentication": "NONE", "author": "VULHUB", "availabilityImpact": "PARTIAL", "baseScore": 4.3, "confidentialityImpact": "NONE", "exploitabilityScore": 8.6, "id": "VHN-419735", "impactScore": 2.9, "integrityImpact": "NONE", "severity": "MEDIUM", "trust": 0.1, "vectorString": "AV:N/AC:M/AU:N/C:N/I:N/A:P", "version": "2.0" } ], "cvssV3": [ { 
"attackComplexity": "LOW", "attackVector": "LOCAL", "author": "NVD", "availabilityImpact": "HIGH", "baseScore": 5.5, "baseSeverity": "MEDIUM", "confidentialityImpact": "NONE", "exploitabilityScore": 1.8, "impactScore": 3.6, "integrityImpact": "NONE", "privilegesRequired": "NONE", "scope": "UNCHANGED", "trust": 2.0, "userInteraction": "REQUIRED", "vectorString": "CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:U/C:N/I:N/A:H", "version": "3.1" }, { "attackComplexity": "Low", "attackVector": "Local", "author": "OTHER", "availabilityImpact": "High", "baseScore": 5.5, "baseSeverity": "Medium", "confidentialityImpact": "None", "exploitabilityScore": null, "id": "JVNDB-2022-011453", "impactScore": null, "integrityImpact": "None", "privilegesRequired": "None", "scope": "Unchanged", "trust": 0.8, "userInteraction": "Required", "vectorString": "CVSS:3.0/AV:L/AC:L/PR:N/UI:R/S:U/C:N/I:N/A:H", "version": "3.0" } ], "severity": [ { "author": "NVD", "id": "CVE-2022-1622", "trust": 1.8, "value": "MEDIUM" }, { "author": "cve@gitlab.com", "id": "CVE-2022-1622", "trust": 1.0, "value": "MEDIUM" }, { "author": "CNNVD", "id": "CNNVD-202205-2732", "trust": 0.6, "value": "MEDIUM" }, { "author": "VULHUB", "id": "VHN-419735", "trust": 0.1, "value": "MEDIUM" }, { "author": "VULMON", "id": "CVE-2022-1622", "trust": 0.1, "value": "MEDIUM" } ] } ], "sources": [ { "db": "VULHUB", "id": "VHN-419735" }, { "db": "VULMON", "id": "CVE-2022-1622" }, { "db": "JVNDB", "id": "JVNDB-2022-011453" }, { "db": "NVD", "id": "CVE-2022-1622" }, { "db": "NVD", "id": "CVE-2022-1622" }, { "db": "CNNVD", "id": "CNNVD-202205-2732" } ] }, "description": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "LibTIFF master branch has an out-of-bounds read in LZWDecode in libtiff/tif_lzw.c:619, allowing attackers to cause a denial-of-service via a crafted tiff file. 
For users that compile libtiff from sources, the fix is available with commit b4e79bfa. LibTIFF contains an out-of-bounds read vulnerability; a denial-of-service (DoS) condition may result. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\nGentoo Linux Security Advisory GLSA 202210-10\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n https://security.gentoo.org/\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\n Severity: Low\n Title: LibTIFF: Multiple Vulnerabilities\n Date: October 31, 2022\n Bugs: #830981, #837560\n ID: 202210-10\n\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\nSynopsis\n========\n\nMultiple vulnerabilities have been found in LibTIFF, the worst of which\ncould result in denial of service. \n\nBackground\n==========\n\nLibTIFF provides support for reading and manipulating TIFF (Tagged Image\nFile Format) images. \n\nAffected packages\n=================\n\n -------------------------------------------------------------------\n Package / Vulnerable / Unaffected\n -------------------------------------------------------------------\n 1 media-libs/tiff \u003c 4.4.0 \u003e= 4.4.0\n\nDescription\n===========\n\nMultiple vulnerabilities have been discovered in LibTIFF. Please review\nthe CVE identifiers referenced below for details. \n\nImpact\n======\n\nPlease review the referenced CVE identifiers for details. \n\nWorkaround\n==========\n\nThere is no known workaround at this time. 
\n\nResolution\n==========\n\nAll LibTIFF users should upgrade to the latest version:\n\n # emerge --sync\n # emerge --ask --oneshot --verbose \"\u003e=media-libs/tiff-4.4.0\"\n\nReferences\n==========\n\n[ 1 ] CVE-2022-0561\n https://nvd.nist.gov/vuln/detail/CVE-2022-0561\n[ 2 ] CVE-2022-0562\n https://nvd.nist.gov/vuln/detail/CVE-2022-0562\n[ 3 ] CVE-2022-0865\n https://nvd.nist.gov/vuln/detail/CVE-2022-0865\n[ 4 ] CVE-2022-0891\n https://nvd.nist.gov/vuln/detail/CVE-2022-0891\n[ 5 ] CVE-2022-0907\n https://nvd.nist.gov/vuln/detail/CVE-2022-0907\n[ 6 ] CVE-2022-0908\n https://nvd.nist.gov/vuln/detail/CVE-2022-0908\n[ 7 ] CVE-2022-0909\n https://nvd.nist.gov/vuln/detail/CVE-2022-0909\n[ 8 ] CVE-2022-0924\n https://nvd.nist.gov/vuln/detail/CVE-2022-0924\n[ 9 ] CVE-2022-1056\n https://nvd.nist.gov/vuln/detail/CVE-2022-1056\n[ 10 ] CVE-2022-1210\n https://nvd.nist.gov/vuln/detail/CVE-2022-1210\n[ 11 ] CVE-2022-1354\n https://nvd.nist.gov/vuln/detail/CVE-2022-1354\n[ 12 ] CVE-2022-1355\n https://nvd.nist.gov/vuln/detail/CVE-2022-1355\n[ 13 ] CVE-2022-1622\n https://nvd.nist.gov/vuln/detail/CVE-2022-1622\n[ 14 ] CVE-2022-1623\n https://nvd.nist.gov/vuln/detail/CVE-2022-1623\n[ 15 ] CVE-2022-22844\n https://nvd.nist.gov/vuln/detail/CVE-2022-22844\n\nAvailability\n============\n\nThis GLSA and any updates to it are available for viewing at\nthe Gentoo Security Website:\n\n https://security.gentoo.org/glsa/202210-10\n\nConcerns?\n=========\n\nSecurity is a primary focus of Gentoo Linux and ensuring the\nconfidentiality and security of our users\u0027 machines is of utmost\nimportance to us. Any security concerns should be addressed to\nsecurity@gentoo.org or alternatively, you may file a bug at\nhttps://bugs.gentoo.org. \n\nLicense\n=======\n\nCopyright 2022 Gentoo Foundation, Inc; referenced text\nbelongs to its owner(s). \n\nThe contents of this document are licensed under the\nCreative Commons - Attribution / Share Alike license. 
\n\nhttps://creativecommons.org/licenses/by-sa/2.5\n. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\nAPPLE-SA-2022-10-27-3 Additional information for APPLE-SA-2022-09-12-1 iOS 16\n\niOS 16 addresses the following issues. \nInformation about the security content is also available at\nhttps://support.apple.com/HT213446. \n\nAccelerate Framework\nAvailable for: iPhone 8 and later\nImpact: Processing a maliciously crafted image may lead to arbitrary\ncode execution\nDescription: A memory consumption issue was addressed with improved\nmemory handling. \nCVE-2022-42795: ryuzaki\nEntry added October 27, 2022\n\nAppleAVD\nAvailable for: iPhone 8 and later\nImpact: An app may be able to cause a denial-of-service\nDescription: A memory corruption issue was addressed with improved\nstate management. \nCVE-2022-32827: Antonio Zekic (@antoniozekic), Natalie Silvanovich of\nGoogle Project Zero, and an anonymous researcher\nEntry added October 27, 2022\n\nAppleAVD\nAvailable for: iPhone 8 and later\nImpact: An app may be able to execute arbitrary code with kernel\nprivileges\nDescription: This issue was addressed with improved checks. \nCVE-2022-32907: Natalie Silvanovich of Google Project Zero, Antonio\nZekic (@antoniozekic) and John Aakerblom (@jaakerblom), ABC Research\ns.r.o, Yinyi Wu, Tommaso Bianco (@cutesmilee__)\nEntry added October 27, 2022\n\nApple Neural Engine\nAvailable for: iPhone 8 and later\nImpact: An app may be able to leak sensitive kernel state\nDescription: The issue was addressed with improved memory handling. \nCVE-2022-32858: Mohamed Ghannam (@_simo36)\nEntry added October 27, 2022\n\nApple Neural Engine\nAvailable for: iPhone 8 and later\nImpact: An app may be able to execute arbitrary code with kernel\nprivileges\nDescription: The issue was addressed with improved memory handling. 
\nCVE-2022-32898: Mohamed Ghannam (@_simo36)\nCVE-2022-32899: Mohamed Ghannam (@_simo36)\nCVE-2022-32889: Mohamed Ghannam (@_simo36)\nEntry added October 27, 2022\n\nApple TV\nAvailable for: iPhone 8 and later\nImpact: An app may be able to access user-sensitive data\nDescription: The issue was addressed with improved handling of\ncaches. \nCVE-2022-32909: Csaba Fitzl (@theevilbit) of Offensive Security\nEntry added October 27, 2022\n\nContacts\nAvailable for: iPhone 8 and later\nImpact: An app may be able to bypass Privacy preferences\nDescription: This issue was addressed with improved checks. \nCVE-2022-32854: Holger Fuhrmannek of Deutsche Telekom Security\n\nCrash Reporter\nAvailable for: iPhone 8 and later\nImpact: A user with physical access to an iOS device may be able to\nread past diagnostic logs\nDescription: This issue was addressed with improved data protection. \nCVE-2022-32867: Kshitij Kumar and Jai Musunuri of Crowdstrike\nEntry added October 27, 2022\n\nDriverKit\nAvailable for: iPhone 8 and later\nImpact: An app may be able to execute arbitrary code with kernel\nprivileges\nDescription: The issue was addressed with improved memory handling. \nCVE-2022-32865: Linus Henze of Pinauten GmbH (pinauten.de)\nEntry added October 27, 2022\n\nExchange\nAvailable for: iPhone 8 and later\nImpact: A user in a privileged network position may be able to\nintercept mail credentials\nDescription: A logic issue was addressed with improved restrictions. \nCVE-2022-32928: an anonymous researcher\nEntry added October 27, 2022\n\nGPU Drivers\nAvailable for: iPhone 8 and later\nImpact: An application may be able to execute arbitrary code with\nkernel privileges\nDescription: A memory corruption issue was addressed with improved\nstate management. 
\nCVE-2022-26744: an anonymous researcher\nEntry added October 27, 2022\n\nGPU Drivers\nAvailable for: iPhone 8 and later\nImpact: An app may be able to execute arbitrary code with kernel\nprivileges\nDescription: A use after free issue was addressed with improved\nmemory management. \nCVE-2022-32903: an anonymous researcher\nEntry added October 27, 2022\n\nImageIO\nAvailable for: iPhone 8 and later\nImpact: Processing an image may lead to a denial-of-service\nDescription: A denial-of-service issue was addressed with improved\nvalidation. \nCVE-2022-1622\nEntry added October 27, 2022\n\nImage Processing\nAvailable for: iPhone 8 and later\nImpact: A sandboxed app may be able to determine which app is\ncurrently using the camera\nDescription: The issue was addressed with additional restrictions on\nthe observability of app states. \nCVE-2022-32913: Yi\u011fit Can YILMAZ (@yilmazcanyigit)\nEntry added October 27, 2022\n\nIOGPUFamily\nAvailable for: iPhone 8 and later\nImpact: An app may be able to execute arbitrary code with kernel\nprivileges\nDescription: The issue was addressed with improved memory handling. \nCVE-2022-32887: an anonymous researcher\nEntry added October 27, 2022\n\nKernel\nAvailable for: iPhone 8 and later\nImpact: An app may be able to execute arbitrary code with kernel\nprivileges\nDescription: A use after free issue was addressed with improved\nmemory management. \nCVE-2022-32914: Zweig of Kunlun Lab\nEntry added October 27, 2022\n\nKernel\nAvailable for: iPhone 8 and later\nImpact: An app may be able to execute arbitrary code with kernel\nprivileges\nDescription: The issue was addressed with improved memory handling. \nCVE-2022-32866: Linus Henze of Pinauten GmbH (pinauten.de)\nCVE-2022-32911: Zweig of Kunlun Lab\nEntry updated October 27, 2022\n\nKernel\nAvailable for: iPhone 8 and later\nImpact: An app may be able to disclose kernel memory\nDescription: The issue was addressed with improved memory handling. 
\nCVE-2022-32864: Linus Henze of Pinauten GmbH (pinauten.de)\n\nKernel\nAvailable for: iPhone 8 and later\nImpact: An application may be able to execute arbitrary code with\nkernel privileges. \nDescription: The issue was addressed with improved bounds checks. \nCVE-2022-32917: an anonymous researcher \n\nMaps\nAvailable for: iPhone 8 and later\nImpact: An app may be able to read sensitive location information\nDescription: A logic issue was addressed with improved restrictions. \nCVE-2022-32883: Ron Masas, breakpointhq.com\n\nMediaLibrary\nAvailable for: iPhone 8 and later\nImpact: A user may be able to elevate privileges\nDescription: A memory corruption issue was addressed with improved\ninput validation. \nCVE-2022-32908: an anonymous researcher\n\nNotifications\nAvailable for: iPhone 8 and later\nImpact: A user with physical access to a device may be able to access\ncontacts from the lock screen\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2022-32879: Ubeydullah S\u00fcmer\nEntry added October 27, 2022\n\nPhotos\nAvailable for: iPhone 8 and later\nImpact: An app may be able to bypass Privacy preferences\nDescription: This issue was addressed with improved data protection. \nCVE-2022-32918: an anonymous researcher, Jugal Goradia of Aastha\nTechnologies, Srijan Shivam Mishra of The Hack Report, Evan Ricafort\n(evanricafort.com) of Invalid Web Security, Amod Raghunath Patwardhan\nof Pune, India, Ashwani Rajput of Nagarro Software Pvt. Ltd\nEntry added October 27, 2022\n\nSafari\nAvailable for: iPhone 8 and later\nImpact: Visiting a malicious website may lead to address bar spoofing\nDescription: This issue was addressed with improved checks. \nCVE-2022-32795: Narendra Bhati of Suma Soft Pvt. Ltd. 
Pune (India)\n@imnarendrabhati\n\nSafari Extensions\nAvailable for: iPhone 8 and later\nImpact: A website may be able to track users through Safari web\nextensions\nDescription: A logic issue was addressed with improved state\nmanagement. \nWebKit Bugzilla: 242278\nCVE-2022-32868: Michael\n\nSandbox\nAvailable for: iPhone 8 and later\nImpact: An app may be able to modify protected parts of the file\nsystem\nDescription: A logic issue was addressed with improved restrictions. \nCVE-2022-32881: Csaba Fitzl (@theevilbit) of Offensive Security\nEntry added October 27, 2022\n\nSecurity\nAvailable for: iPhone 8 and later\nImpact: An app may be able to bypass code signing checks\nDescription: An issue in code signature validation was addressed with\nimproved checks. \nCVE-2022-42793: Linus Henze of Pinauten GmbH (pinauten.de)\nEntry added October 27, 2022\n\nShortcuts\nAvailable for: iPhone 8 and later\nImpact: A person with physical access to an iOS device may be able to\naccess photos from the lock screen\nDescription: A logic issue was addressed with improved restrictions. \nCVE-2022-32872: Elite Tech Guru\n\nSidecar\nAvailable for: iPhone 8 and later\nImpact: A user may be able to view restricted content from the lock\nscreen\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2022-42790: Om kothawade of Zaprico Digital\nEntry added October 27, 2022\n\nSiri\nAvailable for: iPhone 8 and later\nImpact: A user with physical access to a device may be able to use\nSiri to obtain some call history information\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2022-32870: Andrew Goldberg of The McCombs School of Business,\nThe University of Texas at Austin (linkedin.com/andrew-goldberg-/)\nEntry added October 27, 2022\n\nSQLite\nAvailable for: iPhone 8 and later\nImpact: A remote user may be able to cause a denial-of-service\nDescription: This issue was addressed with improved checks. 
\nCVE-2021-36690\nEntry added October 27, 2022\n\nTime Zone\nAvailable for: iPhone 8 and later\nImpact: Deleted contacts may still appear in spotlight search results\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2022-32859\nEntry added October 27, 2022\n\nWatch app\nAvailable for: iPhone 8 and later\nImpact: An app may be able to read a persistent device identifier\nDescription: This issue was addressed with improved entitlements. \nCVE-2022-32835: Guilherme Rambo of Best Buddy Apps (rambo.codes)\nEntry added October 27, 2022\n\nWeather\nAvailable for: iPhone 8 and later\nImpact: An app may be able to read sensitive location information\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2022-32875: an anonymous researcher\nEntry added October 27, 2022\n\nWebKit\nAvailable for: iPhone 8 and later\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: An out-of-bounds write issue was addressed with improved\nbounds checking. \nWebKit Bugzilla: 242047\nCVE-2022-32888: P1umer (@p1umer)\nEntry added October 27, 2022\n\nWebKit\nAvailable for: iPhone 8 and later\nImpact: Visiting a website that frames malicious content may lead to\nUI spoofing\nDescription: The issue was addressed with improved UI handling. \nWebKit Bugzilla: 243236\nCVE-2022-32891: @real_as3617, and an anonymous researcher\nEntry added October 27, 2022\n\nWebKit\nAvailable for: iPhone 8 and later\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: A buffer overflow issue was addressed with improved\nmemory handling. \nWebKit Bugzilla: 241969\nCVE-2022-32886: P1umer, afang5472, xmzyshypnc\n\nWebKit\nAvailable for: iPhone 8 and later\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: An out-of-bounds read was addressed with improved bounds\nchecking. 
\nWebKit Bugzilla: 242762\nCVE-2022-32912: Jeonghoon Shin (@singi21a) at Theori working with\nTrend Micro Zero Day Initiative\n\nWebKit Sandboxing\nAvailable for: iPhone 8 and later\nImpact: A sandboxed process may be able to circumvent sandbox\nrestrictions\nDescription: An access issue was addressed with improvements to the\nsandbox. \nWebKit Bugzilla: 243181\nCVE-2022-32892: @18\u697c\u68a6\u60f3\u6539\u9020\u5bb6 and @jq0904 of DBAppSecurity\u0027s WeBin lab\nEntry added October 27, 2022\n\nWi-Fi\nAvailable for: iPhone 8 and later\nImpact: An app may be able to cause unexpected system termination or\nwrite kernel memory\nDescription: An out-of-bounds write issue was addressed with improved\nbounds checking. \nCVE-2022-32925: Wang Yu of Cyberserval\nEntry added October 27, 2022\n\nAdditional recognition\n\nAirDrop\nWe would like to acknowledge Alexander Heinrich, Milan Stute, and\nChristian Weinert of Technical University of Darmstadt for their\nassistance. \nEntry added October 27, 2022\n\nAppleCredentialManager\nWe would like to acknowledge @jonathandata1 for their assistance. \nEntry added October 27, 2022\n\nCalendar UI\nWe would like to acknowledge Abhay Kailasia (@abhay_kailasia) of\nLakshmi Narain College Of Technology Bhopal for their assistance. \nEntry added October 27, 2022\n\nFaceTime\nWe would like to acknowledge an anonymous researcher for their\nassistance. \nEntry added October 27, 2022\n\nFind My\nWe would like to acknowledge an anonymous researcher for their\nassistance. \nEntry added October 27, 2022\n\nGame Center\nWe would like to acknowledge Joshua Jones for their assistance. \n\niCloud\nWe would like to acknowledge B\u00fclent Aytulun, and an anonymous\nresearcher for their assistance. \nEntry added October 27, 2022\n\nIdentity Services\nWe would like to acknowledge Joshua Jones for their assistance. 
\n\nKernel\nWe would like to acknowledge Pan ZhenPeng(@Peterpan0927), Tingting\nYin of Tsinghua University, and Min Zheng of Ant Group, and an\nanonymous researcher for their assistance. \nEntry added October 27, 2022\n\nMail\nWe would like to acknowledge an anonymous researcher for their\nassistance. \nEntry added October 27, 2022\n\nNotes\nWe would like to acknowledge Edward Riley of Iron Cloud Limited\n(ironclouduk.com) for their assistance. \nEntry added October 27, 2022\n\nPhoto Booth\nWe would like to acknowledge Prashanth Kannan of Dremio for their\nassistance. \nEntry added October 27, 2022\n\nSandbox\nWe would like to acknowledge Csaba Fitzl (@theevilbit) of Offensive\nSecurity for their assistance. \nEntry added October 27, 2022\n\nShortcuts\nWe would like to acknowledge Shay Dror for their assistance. \nEntry added October 27, 2022\n\nSOS\nWe would like to acknowledge Xianfeng Lu and Lei Ai of OPPO Amber\nSecurity Lab for their assistance. \nEntry added October 27, 2022\n\nUIKit\nWe would like to acknowledge Aleczander Ewing, Simon de Vegt, and an\nanonymous researcher for their assistance. \nEntry added October 27, 2022\n\nWebKit\nWe would like to acknowledge an anonymous researcher for their\nassistance. \nEntry added October 27, 2022\n\nWebRTC\nWe would like to acknowledge an anonymous researcher for their\nassistance. \nEntry added October 27, 2022\n\nThis update is available through iTunes and Software Update on your\niOS device, and will not appear in your computer\u0027s Software Update\napplication, or in the Apple Downloads site. Make sure you have an\nInternet connection and have installed the latest version of iTunes\nfrom https://www.apple.com/itunes/ iTunes and Software Update on the\ndevice will automatically check Apple\u0027s update server on its weekly\nschedule. When an update is detected, it is downloaded and the option\nto be installed is presented to the user when the iOS device is\ndocked. 
We recommend applying the update immediately if possible. \nSelecting Don\u0027t Install will present the option the next time you\nconnect your iOS device. The automatic update process may take up to\na week depending on the day that iTunes or the device checks for\nupdates. You may manually obtain the update via the Check for Updates\nbutton within iTunes, or the Software Update on your device. To\ncheck that the iPhone, iPod touch, or iPad has been updated: *\nNavigate to Settings * Select General * Select About. The version\nafter applying this update will be \"iOS 16\". \nAll information is also posted on the Apple Security Updates\nweb site: https://support.apple.com/en-us/HT201222. \n\nThis message is signed with Apple\u0027s Product Security PGP key,\nand details are available at:\nhttps://www.apple.com/support/security/pgp/\n-----BEGIN PGP SIGNATURE-----\n\niQIzBAEBCAAdFiEEBP+4DupqR5Sgt1DB4RjMIDkeNxkFAmNbKpoACgkQ4RjMIDke\nNxkQ8w/9FMTP02t/AKe0nXZ44UhfMLy7Sx88gpWRHaWKZtdjPADC2kxx1RbVSvrC\nC5nB6bw2zGppE1V284QitcNG9WrGGTINK6Knshv0PCkWLZnh1sYqX2bYbKmY6Ol7\nK+lRk6zicF3k7KcCZRly6UuJ8RvfPpa2wKuVVv5FBPM8bPRuovVRiRxGUWuO7emM\nZXyp4n5u+GldW8n8hRK/jxwGGwrKqFmXL9Ecd79I2/4uYmEx6tmoAYuEZs26BfjK\nEtd1F54PlewmyUKvVlWiwLhpVgygRqkmvW+jKwX46gBzwHFK88B9IV6wf8ZD5JaU\nUr+nqEjiqmbYdcfV8pu64eRNnlTiCmD/ehJg8sNG38m9SeqOw3ZNVaQ8+sgoXwsp\nrpsPDPsXmPqqadxERe7LwLXSm4KtTARdGbEffHAA5eqc+U0ja2u3piqk8ZKTrC6K\ntORrDjSkKx9AILbds99Wzbnb1rfF/09N1+LPQT7Ac8PCA/kE+XQ+nmSDoInh8PTU\nrFt3ZW9Ud0q6Y2Ix11WYrb6wOqs/vafaW5zXTnNfgKNvw2zO/9yKYhaqIjlGtLSJ\nOg/O1sdcPMPisBGQynF7Dj42riQD5RQGbB/GmfgRqUHFXwcWJxFRblkwUxbjuEaR\nnYRj90cDbUE2wmsE4y4uFfCVpKTQCQCKXuSuBkOQje0KjTDHWac=\n=I+iq\n-----END PGP SIGNATURE-----\n\n\n. \nCVE-2022-42789: Koh M. Nakagawa of FFRI Security, Inc. Apple is aware of a report that this issue may\nhave been actively exploited. Apple is aware of a report that this issue\nmay have been actively exploited. 
\n\nInstructions on how to update your Apple Watch software are available\nat https://support.apple.com/kb/HT204641 To check the version on\nyour Apple Watch, open the Apple Watch app on your iPhone and select\n\"My Watch \u003e General \u003e About\". Alternatively, on your watch, select\n\"My Watch \u003e General \u003e About\". -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n- -------------------------------------------------------------------------\nDebian Security Advisory DSA-5333-1 security@debian.org\nhttps://www.debian.org/security/ Aron Xu\nJanuary 29, 2023 https://www.debian.org/security/faq\n- -------------------------------------------------------------------------\n\nPackage : tiff\nCVE ID : CVE-2022-1354 CVE-2022-1355 CVE-2022-1622 CVE-2022-1623 \n CVE-2022-2056 CVE-2022-2057 CVE-2022-2058 CVE-2022-2519 \n CVE-2022-2520 CVE-2022-2521 CVE-2022-2867 CVE-2022-2868 \n CVE-2022-2869 CVE-2022-2953 CVE-2022-3570 CVE-2022-3597 \n CVE-2022-3599 CVE-2022-3627 CVE-2022-3636 CVE-2022-34526\n CVE-2022-48281\nDebian Bug : 1011160 1014494 1022555 1024737 1029653\n\nSeveral buffer overflow, divide by zero or out of bounds read/write\nvulnerabilities were discovered in tiff, the Tag Image File Format (TIFF)\nlibrary and tools, which may cause denial of service when processing a\ncrafted TIFF image. \n\nFor the stable distribution (bullseye), these problems have been fixed in\nversion 4.2.0-1+deb11u3. 
\n\nWe recommend that you upgrade your tiff packages", "sources": [ { "db": "NVD", "id": "CVE-2022-1622" }, { "db": "JVNDB", "id": "JVNDB-2022-011453" }, { "db": "VULHUB", "id": "VHN-419735" }, { "db": "VULMON", "id": "CVE-2022-1622" }, { "db": "PACKETSTORM", "id": "169563" }, { "db": "PACKETSTORM", "id": "169559" }, { "db": "PACKETSTORM", "id": "169585" }, { "db": "PACKETSTORM", "id": "169576" }, { "db": "PACKETSTORM", "id": "169598" }, { "db": "PACKETSTORM", "id": "169589" }, { "db": "PACKETSTORM", "id": "170783" } ], "trust": 2.43 }, "exploit_availability": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/exploit_availability#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "reference": "https://www.scap.org.cn/vuln/vhn-419735", "trust": 0.1, "type": "unknown" } ], "sources": [ { "db": "VULHUB", "id": "VHN-419735" } ] }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "NVD", "id": "CVE-2022-1622", "trust": 4.1 }, { "db": "PACKETSTORM", "id": "169598", "trust": 0.8 }, { "db": "JVNDB", "id": "JVNDB-2022-011453", "trust": 0.8 }, { "db": "PACKETSTORM", "id": "170783", "trust": 0.7 }, { "db": "CS-HELP", "id": "SB2022060633", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.5473", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.5300", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.5462", "trust": 0.6 }, { "db": "CNNVD", "id": "CNNVD-202205-2732", "trust": 0.6 }, { "db": "PACKETSTORM", "id": "169589", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "169563", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "169576", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "169559", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "169585", "trust": 0.2 }, { "db": "VULHUB", 
"id": "VHN-419735", "trust": 0.1 }, { "db": "VULMON", "id": "CVE-2022-1622", "trust": 0.1 } ], "sources": [ { "db": "VULHUB", "id": "VHN-419735" }, { "db": "VULMON", "id": "CVE-2022-1622" }, { "db": "JVNDB", "id": "JVNDB-2022-011453" }, { "db": "PACKETSTORM", "id": "169563" }, { "db": "PACKETSTORM", "id": "169559" }, { "db": "PACKETSTORM", "id": "169585" }, { "db": "PACKETSTORM", "id": "169576" }, { "db": "PACKETSTORM", "id": "169598" }, { "db": "PACKETSTORM", "id": "169589" }, { "db": "PACKETSTORM", "id": "170783" }, { "db": "NVD", "id": "CVE-2022-1622" }, { "db": "CNNVD", "id": "CNNVD-202205-2732" } ] }, "id": "VAR-202205-0743", "iot": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": true, "sources": [ { "db": "VULHUB", "id": "VHN-419735" } ], "trust": 0.01 }, "last_update_date": "2023-12-18T11:02:33.906000Z", "patch": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/patch#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "title": "HT213488", "trust": 0.8, "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/c7iwztb4j2n4f5or5qy4vhdskwkzswn3/" }, { "title": "Amazon Linux 2022: ALAS2022-2022-094", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2022\u0026qid=alas2022-2022-094" }, { "title": "Debian Security Advisories: DSA-5333-1 tiff -- security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=debian_security_advisories\u0026qid=c77904c23e5b132ffe7c410eba93e432" }, { "title": "Amazon Linux 2022: ALAS2022-2022-183", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2022\u0026qid=alas2022-2022-183" } ], "sources": [ { "db": "VULMON", "id": "CVE-2022-1622" }, { "db": "JVNDB", "id": "JVNDB-2022-011453" } ] }, 
"problemtype_data": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "problemtype": "CWE-125", "trust": 1.1 }, { "problemtype": "Out-of-bounds read (CWE-125) [NVD evaluation ]", "trust": 0.8 } ], "sources": [ { "db": "VULHUB", "id": "VHN-419735" }, { "db": "JVNDB", "id": "JVNDB-2022-011453" }, { "db": "NVD", "id": "CVE-2022-1622" } ] }, "references": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/references#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 2.6, "url": "https://gitlab.com/gitlab-org/cves/-/blob/master/2022/cve-2022-1622.json" }, { "trust": 1.8, "url": "http://seclists.org/fulldisclosure/2022/oct/41" }, { "trust": 1.8, "url": "https://gitlab.com/libtiff/libtiff/-/commit/b4e79bfa0c7d2d08f6f1e7ec38143fc8cb11394a" }, { "trust": 1.8, "url": "https://gitlab.com/libtiff/libtiff/-/issues/410" }, { "trust": 1.8, "url": "https://security.netapp.com/advisory/ntap-20220616-0005/" }, { "trust": 1.8, "url": "https://support.apple.com/kb/ht213443" }, { "trust": 1.8, "url": "https://support.apple.com/kb/ht213444" }, { "trust": 1.8, "url": "https://support.apple.com/kb/ht213446" }, { "trust": 1.8, "url": "https://support.apple.com/kb/ht213486" }, { "trust": 1.8, "url": "https://support.apple.com/kb/ht213487" }, { "trust": 1.8, "url": "https://support.apple.com/kb/ht213488" }, { "trust": 1.5, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1622" }, { "trust": 1.1, "url": "http://seclists.org/fulldisclosure/2022/oct/28" }, { "trust": 1.1, "url": "http://seclists.org/fulldisclosure/2022/oct/39" }, { "trust": 1.1, "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/c7iwztb4j2n4f5or5qy4vhdskwkzswn3/" }, { "trust": 1.1, "url": 
"https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/uxafop6qqrnzd3hpz6bmcezzom4yizmk/" }, { "trust": 0.7, "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/uxafop6qqrnzd3hpz6bmcezzom4yizmk/" }, { "trust": 0.7, "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/c7iwztb4j2n4f5or5qy4vhdskwkzswn3/" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/170783/debian-security-advisory-5333-1.html" }, { "trust": 0.6, "url": "https://vigilance.fr/vulnerability/libtiff-out-of-bounds-memory-reading-via-lzwdecode-38292" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/169598/apple-security-advisory-2022-10-27-13.html" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.5462" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.5473" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.5300" }, { "trust": 0.6, "url": "https://cxsecurity.com/cveshow/cve-2022-1622/" }, { "trust": 0.6, "url": "https://support.apple.com/en-us/ht213488" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022060633" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2022-1622" }, { "trust": 0.5, "url": "https://www.apple.com/support/security/pgp/" }, { "trust": 0.5, "url": "https://support.apple.com/en-us/ht201222." 
}, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32866" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32864" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-36690" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32854" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32881" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1355" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1623" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1354" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32858" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32835" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32875" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1720" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2042" }, { "trust": 0.2, "url": "https://support.apple.com/downloads/" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2124" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-39537" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2000" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32888" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32879" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32886" }, { "trust": 0.1, "url": "https://cwe.mitre.org/data/definitions/125.html" }, { "trust": 0.1, "url": "https://nvd.nist.gov" }, { "trust": 0.1, "url": "https://alas.aws.amazon.com/al2022/alas-2022-094.html" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1056" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1210" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0908" }, { "trust": 0.1, "url": 
"https://nvd.nist.gov/vuln/detail/cve-2022-0907" }, { "trust": 0.1, "url": "https://security.gentoo.org/" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22844" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0562" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0909" }, { "trust": 0.1, "url": "https://creativecommons.org/licenses/by-sa/2.5" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0561" }, { "trust": 0.1, "url": "https://security.gentoo.org/glsa/202210-10" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0924" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0865" }, { "trust": 0.1, "url": "https://bugs.gentoo.org." }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0891" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32867" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32859" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26744" }, { "trust": 0.1, "url": "https://support.apple.com/ht213446." }, { "trust": 0.1, "url": "https://www.apple.com/itunes/" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32865" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32827" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32868" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32795" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2125" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32877" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2126" }, { "trust": 0.1, "url": "https://support.apple.com/ht213443." 
}, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0359" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0318" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0392" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0261" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0361" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0319" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0368" }, { "trust": 0.1, "url": "https://support.apple.com/ht213444." }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0351" }, { "trust": 0.1, "url": "https://support.apple.com/kb/ht204641" }, { "trust": 0.1, "url": "https://support.apple.com/ht213486." }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32883" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32870" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32907" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32891" }, { "trust": 0.1, "url": "https://support.apple.com/ht213487." 
}, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32912" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32903" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32908" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32911" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2953" }, { "trust": 0.1, "url": "https://www.debian.org/security/" }, { "trust": 0.1, "url": "https://www.debian.org/security/faq" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2058" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2520" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2869" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2867" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2868" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2057" }, { "trust": 0.1, "url": "https://security-tracker.debian.org/tracker/tiff" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2056" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2519" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2521" } ], "sources": [ { "db": "VULHUB", "id": "VHN-419735" }, { "db": "VULMON", "id": "CVE-2022-1622" }, { "db": "JVNDB", "id": "JVNDB-2022-011453" }, { "db": "PACKETSTORM", "id": "169563" }, { "db": "PACKETSTORM", "id": "169559" }, { "db": "PACKETSTORM", "id": "169585" }, { "db": "PACKETSTORM", "id": "169576" }, { "db": "PACKETSTORM", "id": "169598" }, { "db": "PACKETSTORM", "id": "169589" }, { "db": "PACKETSTORM", "id": "170783" }, { "db": "NVD", "id": "CVE-2022-1622" }, { "db": "CNNVD", "id": "CNNVD-202205-2732" } ] }, "sources": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "VULHUB", "id": "VHN-419735" }, { "db": "VULMON", "id": "CVE-2022-1622" }, { "db": 
"JVNDB", "id": "JVNDB-2022-011453" }, { "db": "PACKETSTORM", "id": "169563" }, { "db": "PACKETSTORM", "id": "169559" }, { "db": "PACKETSTORM", "id": "169585" }, { "db": "PACKETSTORM", "id": "169576" }, { "db": "PACKETSTORM", "id": "169598" }, { "db": "PACKETSTORM", "id": "169589" }, { "db": "PACKETSTORM", "id": "170783" }, { "db": "NVD", "id": "CVE-2022-1622" }, { "db": "CNNVD", "id": "CNNVD-202205-2732" } ] }, "sources_release_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2022-05-11T00:00:00", "db": "VULHUB", "id": "VHN-419735" }, { "date": "2022-05-11T00:00:00", "db": "VULMON", "id": "CVE-2022-1622" }, { "date": "2023-08-22T00:00:00", "db": "JVNDB", "id": "JVNDB-2022-011453" }, { "date": "2022-10-31T14:24:25", "db": "PACKETSTORM", "id": "169563" }, { "date": "2022-10-31T14:22:02", "db": "PACKETSTORM", "id": "169559" }, { "date": "2022-10-31T14:50:18", "db": "PACKETSTORM", "id": "169585" }, { "date": "2022-10-31T14:42:57", "db": "PACKETSTORM", "id": "169576" }, { "date": "2022-10-31T14:56:26", "db": "PACKETSTORM", "id": "169598" }, { "date": "2022-10-31T14:51:24", "db": "PACKETSTORM", "id": "169589" }, { "date": "2023-01-30T16:31:59", "db": "PACKETSTORM", "id": "170783" }, { "date": "2022-05-11T15:15:09.237000", "db": "NVD", "id": "CVE-2022-1622" }, { "date": "2022-05-10T00:00:00", "db": "CNNVD", "id": "CNNVD-202205-2732" } ] }, "sources_update_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2022-11-07T00:00:00", "db": "VULHUB", "id": "VHN-419735" }, { "date": "2022-11-07T00:00:00", "db": "VULMON", "id": "CVE-2022-1622" }, { "date": "2023-08-22T07:50:00", "db": "JVNDB", "id": "JVNDB-2022-011453" }, { "date": "2023-11-07T03:42:03.737000", "db": "NVD", "id": "CVE-2022-1622" }, { "date": "2023-02-01T00:00:00", "db": "CNNVD", "id": "CNNVD-202205-2732" } ] }, "threat_type": { 
"@context": { "@vocab": "https://www.variotdbs.pl/ref/threat_type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "local", "sources": [ { "db": "CNNVD", "id": "CNNVD-202205-2732" } ], "trust": 0.6 }, "title": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/title#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "LibTIFF\u00a0 Out-of-bounds read vulnerability in", "sources": [ { "db": "JVNDB", "id": "JVNDB-2022-011453" } ], "trust": 0.8 }, "type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "buffer error", "sources": [ { "db": "CNNVD", "id": "CNNVD-202205-2732" } ], "trust": 0.6 } }
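VARIoT records like the one above aggregate data from multiple databases and attach a per-item trust weight to each reference. A minimal sketch (hypothetical helper name, assuming the JSON-LD "references" layout shown in the record) of filtering a record's reference URLs by trust:

```python
# Minimal excerpt mirroring the VARIoT "references" layout shown above.
RECORD = {
    "references": {
        "data": [
            {"trust": 2.6, "url": "https://gitlab.com/gitlab-org/cves/-/blob/master/2022/cve-2022-1622.json"},
            {"trust": 1.8, "url": "https://gitlab.com/libtiff/libtiff/-/issues/410"},
            {"trust": 0.1, "url": "https://cwe.mitre.org/data/definitions/125.html"},
        ]
    }
}

def references_above(record, threshold):
    """Return reference URLs whose trust weight meets the threshold."""
    return [
        item["url"]
        for item in record["references"]["data"]
        if item.get("trust", 0) >= threshold
    ]

print(references_above(RECORD, 1.0))
```

Higher trust values indicate corroboration across more sources, so thresholding is one way to keep only well-attested references when consuming these records programmatically.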
var-202105-1460
Vulnerability from variot
A flaw was found in libwebp in versions before 1.0.1. A use-after-free was found due to a thread being killed too early. The highest threat from this vulnerability is to data confidentiality and integrity as well as system availability. libwebp is vulnerable to use of freed memory: information may be obtained, information may be altered, and a denial-of-service (DoS) condition may be caused. -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
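The record states that the flaw affects libwebp versions before 1.0.1. A minimal, hypothetical sketch of comparing an installed version string against that fixed release (simple dotted-numeric comparison only; real packaging version schemes can be more complex):

```python
def parse_version(v):
    """Split a dotted version string into a tuple of integers."""
    return tuple(int(part) for part in v.split("."))

def is_vulnerable(installed, fixed="1.0.1"):
    """True if the installed libwebp version predates the fixed release."""
    return parse_version(installed) < parse_version(fixed)

print(is_vulnerable("0.6.1"))  # a pre-1.0.1 release -> True
print(is_vulnerable("1.2.0"))  # at or past the fix -> False
```

Tuple comparison in Python is element-wise, so (0, 6, 1) sorts before (1, 0, 1); this matches the "before 1.0.1" affected range in the record.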
APPLE-SA-2021-07-21-1 iOS 14.7 and iPadOS 14.7
iOS 14.7 and iPadOS 14.7 address the following issues. Information about the security content is also available at https://support.apple.com/HT212601.
iOS 14.7 released July 19, 2021; iPadOS 14.7 released July 21, 2021
ActionKit Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A shortcut may be able to bypass Internet permission requirements Description: An input validation issue was addressed with improved input validation. CVE-2021-30763: Zachary Keffaber (@QuickUpdate5)
Audio Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A local attacker may be able to cause unexpected application termination or arbitrary code execution Description: This issue was addressed with improved checks. CVE-2021-30781: tr3e
AVEVideoEncoder Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: An application may be able to execute arbitrary code with kernel privileges Description: A memory corruption issue was addressed with improved state management. CVE-2021-30748: George Nosenko
CoreAudio Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing a maliciously crafted audio file may lead to arbitrary code execution Description: A memory corruption issue was addressed with improved state management. CVE-2021-30775: JunDong Xie of Ant Security Light-Year Lab
CoreAudio Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Playing a malicious audio file may lead to an unexpected application termination Description: A logic issue was addressed with improved validation. CVE-2021-30776: JunDong Xie of Ant Security Light-Year Lab
CoreGraphics Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Opening a maliciously crafted PDF file may lead to an unexpected application termination or arbitrary code execution Description: A race condition was addressed with improved state handling. CVE-2021-30786: ryuzaki
CoreText Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing a maliciously crafted font file may lead to arbitrary code execution Description: An out-of-bounds read was addressed with improved input validation. CVE-2021-30789: Mickey Jin (@patch1t) of Trend Micro, Sunglin of Knownsec 404 team
Crash Reporter Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A malicious application may be able to gain root privileges Description: A logic issue was addressed with improved validation. CVE-2021-30774: Yizhuo Wang of Group of Software Security In Progress (G.O.S.S.I.P) at Shanghai Jiao Tong University
CVMS Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A malicious application may be able to gain root privileges Description: An out-of-bounds write issue was addressed with improved bounds checking. CVE-2021-30780: Tim Michaud(@TimGMichaud) of Zoom Video Communications
dyld Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A sandboxed process may be able to circumvent sandbox restrictions Description: A logic issue was addressed with improved validation. CVE-2021-30768: Linus Henze (pinauten.de)
Find My Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A malicious application may be able to access Find My data Description: A permissions issue was addressed with improved validation. CVE-2021-30804: Csaba Fitzl (@theevilbit) of Offensive Security
FontParser Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing a maliciously crafted font file may lead to arbitrary code execution Description: An integer overflow was addressed through improved input validation. CVE-2021-30760: Sunglin of Knownsec 404 team
FontParser Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing a maliciously crafted tiff file may lead to a denial-of-service or potentially disclose memory contents Description: This issue was addressed with improved checks. CVE-2021-30788: tr3e working with Trend Micro Zero Day Initiative
FontParser Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing a maliciously crafted font file may lead to arbitrary code execution Description: A stack overflow was addressed with improved input validation. CVE-2021-30759: hjy79425575 working with Trend Micro Zero Day Initiative
Identity Service Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A malicious application may be able to bypass code signing checks Description: An issue in code signature validation was addressed with improved checks. CVE-2021-30773: Linus Henze (pinauten.de)
Image Processing Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: A use after free issue was addressed with improved memory management. CVE-2021-30802: Matthew Denton of Google Chrome Security
ImageIO Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing a maliciously crafted image may lead to arbitrary code execution Description: This issue was addressed with improved checks. CVE-2021-30779: Jzhu, Ye Zhang(@co0py_Cat) of Baidu Security
ImageIO Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing a maliciously crafted image may lead to arbitrary code execution Description: A buffer overflow was addressed with improved bounds checking. CVE-2021-30785: CFF of Topsec Alpha Team, Mickey Jin (@patch1t) of Trend Micro
Kernel Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A malicious attacker with arbitrary read and write capability may be able to bypass Pointer Authentication Description: A logic issue was addressed with improved state management. CVE-2021-30769: Linus Henze (pinauten.de)
Kernel Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: An attacker that has already achieved kernel code execution may be able to bypass kernel memory mitigations Description: A logic issue was addressed with improved validation. CVE-2021-30770: Linus Henze (pinauten.de)
libxml2 Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A remote attacker may be able to cause arbitrary code execution Description: This issue was addressed with improved checks. CVE-2021-3518
Measure Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Multiple issues in libwebp Description: Multiple issues were addressed by updating to version 1.2.0. CVE-2018-25010 CVE-2018-25011 CVE-2018-25014 CVE-2020-36328 CVE-2020-36329 CVE-2020-36330 CVE-2020-36331
Model I/O Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing a maliciously crafted image may lead to a denial of service Description: A logic issue was addressed with improved validation. CVE-2021-30796: Mickey Jin (@patch1t) of Trend Micro
Model I/O Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing a maliciously crafted image may lead to arbitrary code execution Description: An out-of-bounds write was addressed with improved input validation. CVE-2021-30792: Anonymous working with Trend Micro Zero Day Initiative
Model I/O Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing a maliciously crafted file may disclose user information Description: An out-of-bounds read was addressed with improved bounds checking. CVE-2021-30791: Anonymous working with Trend Micro Zero Day Initiative
TCC Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A malicious application may be able to bypass certain Privacy preferences Description: A logic issue was addressed with improved state management. CVE-2021-30798: Mickey Jin (@patch1t) of Trend Micro
WebKit Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: A type confusion issue was addressed with improved state handling. CVE-2021-30758: Christoph Guttandin of Media Codings
WebKit Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: A use after free issue was addressed with improved memory management. CVE-2021-30795: Sergei Glazunov of Google Project Zero
WebKit Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing maliciously crafted web content may lead to code execution Description: This issue was addressed with improved checks. CVE-2021-30797: Ivan Fratric of Google Project Zero
WebKit Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: Multiple memory corruption issues were addressed with improved memory handling. CVE-2021-30799: Sergei Glazunov of Google Project Zero
Wi-Fi Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Joining a malicious Wi-Fi network may result in a denial of service or arbitrary code execution Description: This issue was addressed with improved checks. CVE-2021-30800: vm_call, Nozhdar Abdulkhaleq Shukri
Additional recognition
Assets We would like to acknowledge Cees Elzinga for their assistance.
CoreText We would like to acknowledge Mickey Jin (@patch1t) of Trend Micro for their assistance.
Safari We would like to acknowledge an anonymous researcher for their assistance.
Sandbox We would like to acknowledge Csaba Fitzl (@theevilbit) of Offensive Security for their assistance.
Installation note:
This update is available through iTunes and Software Update on your iOS device, and will not appear in your computer's Software Update application, or in the Apple Downloads site. Make sure you have an Internet connection and have installed the latest version of iTunes from https://www.apple.com/itunes/
iTunes and Software Update on the device will automatically check Apple's update server on its weekly schedule. When an update is detected, it is downloaded and the option to be installed is presented to the user when the iOS device is docked. We recommend applying the update immediately if possible. Selecting Don't Install will present the option the next time you connect your iOS device. The automatic update process may take up to a week depending on the day that iTunes or the device checks for updates. You may manually obtain the update via the Check for Updates button within iTunes, or the Software Update on your device.
To check that the iPhone, iPod touch, or iPad has been updated: * Navigate to Settings * Select General * Select About * The version after applying this update will be "14.7"
Information will also be posted to the Apple Security Updates web site: https://support.apple.com/kb/HT201222
This message is signed with Apple's Product Security PGP key, and details are available at: https://www.apple.com/support/security/pgp/
-----BEGIN PGP SIGNATURE-----
iQIzBAEBCAAdFiEEbURczHs1TP07VIfuZcsbuWJ6jjAFAmD4r8YACgkQZcsbuWJ6 jjB5LBAAkEy25fNpo8rg42bsyJwWsSQQxPN79JFxQ6L8tqdsM+MZk86dUKtsRQ47 mxarMf4uBwiIOtrGSCGHLIxXAzLqPY47NDhO+ls0dVxGMETkoR/287AeLnw2ITh3 DM0H/pco4hRhPh8neYTMjNPMAgkepx+r7IqbaHWapn42nRC4/2VkEtVGltVDLs3L K0UQP0cjy2w9KvRF33H3uKNCaCTJrVkDBLKWC7rPPpomwp3bfmbQHjs0ixV5Y8l5 3MfNmCuhIt34zAjVELvbE/PUXgkmsECbXHNZOct7ZLAbceneVKtSmynDtoEN0ajM JiJ6j+FCtdfB3xHk3cHqB6sQZm7fDxdK3z91MZvSZwwmdhJeHD/TxcItRlHNOYA1 FSi0Q954DpIqz3Fs4DGE7Vwz0g5+o5qup8cnw9oLXBdqZwWANuLsQlHlioPbcDhl r1DmwtghmDYFUeSMnzHu/iuRepEju+BRMS3ybCm5j+I3kyvAV8pyvqNNRLfJn+w+ Wl/lwXTtXbgsNPR7WJCBJffxB0gOGZaIG1blSGCY89t2if0vD95R5sRsrnaxuqWc qmtRdBfbmjxk/G+6t1sd4wFglTNovHiLIHXh17cwdIWMB35yFs7VA35833/rF4Oo jOF1D12o58uAewxAsK+cTixe7I9U5Awkad2Jz19V3qHnRWGqtVg\x8e1h -----END PGP SIGNATURE-----
Description:
Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.
All OpenShift Container Platform 4.6 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift Console or the CLI oc command. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.6/updating/updating-cluster-between-minor.html#understanding-upgrade-channels_updating-cluster-between-minor
- Solution:
For OpenShift Container Platform 4.6 see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this asynchronous errata update:
https://docs.openshift.com/container-platform/4.6/release_notes/ocp-4-6-release-notes.html
Details on how to access this content are available at https://docs.openshift.com/container-platform/4.6/updating/updating-cluster-cli.html
- Bugs fixed (https://bugzilla.redhat.com/):
1813344 - CVE-2020-7598 nodejs-minimist: prototype pollution allows adding or modifying properties of Object.prototype using a constructor or __proto__ payload 1979134 - Placeholder bug for OCP 4.6.0 extras release
-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
===================================================================== Red Hat Security Advisory
Synopsis: Important: libwebp security update Advisory ID: RHSA-2021:2260-01 Product: Red Hat Enterprise Linux Advisory URL: https://access.redhat.com/errata/RHSA-2021:2260 Issue date: 2021-06-07 CVE Names: CVE-2018-25011 CVE-2020-36328 CVE-2020-36329 =====================================================================
- Summary:
An update for libwebp is now available for Red Hat Enterprise Linux 7.
Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Relevant releases/architectures:
Red Hat Enterprise Linux Client (v. 7) - x86_64 Red Hat Enterprise Linux Client Optional (v. 7) - x86_64 Red Hat Enterprise Linux ComputeNode (v. 7) - x86_64 Red Hat Enterprise Linux ComputeNode Optional (v. 7) - x86_64 Red Hat Enterprise Linux Server (v. 7) - ppc64, ppc64le, s390x, x86_64 Red Hat Enterprise Linux Server Optional (v. 7) - ppc64, ppc64le, s390x, x86_64 Red Hat Enterprise Linux Workstation (v. 7) - x86_64 Red Hat Enterprise Linux Workstation Optional (v. 7) - x86_64
- Description:
The libwebp packages provide a library and tools for the WebP graphics format. WebP is an image format with a lossy compression of digital photographic images. WebP consists of a codec based on the VP8 format, and a container based on the Resource Interchange File Format (RIFF). Webmasters, web developers and browser developers can use WebP to compress, archive, and distribute digital images more efficiently.
Security Fix(es):
* libwebp: heap-based buffer overflow in PutLE16() (CVE-2018-25011)

* libwebp: heap-based buffer overflow in WebPDecode*Into functions (CVE-2020-36328)

* libwebp: use-after-free in EmitFancyRGB() in dec/io_dec.c (CVE-2020-36329)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
- Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/11258
- Bugs fixed (https://bugzilla.redhat.com/):
1956829 - CVE-2020-36328 libwebp: heap-based buffer overflow in WebPDecode*Into functions 1956843 - CVE-2020-36329 libwebp: use-after-free in EmitFancyRGB() in dec/io_dec.c 1956919 - CVE-2018-25011 libwebp: heap-based buffer overflow in PutLE16()
- Package List:
Red Hat Enterprise Linux Client (v. 7):
Source: libwebp-0.3.0-10.el7_9.src.rpm
x86_64: libwebp-0.3.0-10.el7_9.i686.rpm libwebp-0.3.0-10.el7_9.x86_64.rpm libwebp-debuginfo-0.3.0-10.el7_9.i686.rpm libwebp-debuginfo-0.3.0-10.el7_9.x86_64.rpm
Red Hat Enterprise Linux Client Optional (v. 7):
x86_64: libwebp-debuginfo-0.3.0-10.el7_9.i686.rpm libwebp-debuginfo-0.3.0-10.el7_9.x86_64.rpm libwebp-devel-0.3.0-10.el7_9.i686.rpm libwebp-devel-0.3.0-10.el7_9.x86_64.rpm libwebp-java-0.3.0-10.el7_9.x86_64.rpm libwebp-tools-0.3.0-10.el7_9.x86_64.rpm
Red Hat Enterprise Linux ComputeNode (v. 7):
Source: libwebp-0.3.0-10.el7_9.src.rpm
x86_64: libwebp-0.3.0-10.el7_9.i686.rpm libwebp-0.3.0-10.el7_9.x86_64.rpm libwebp-debuginfo-0.3.0-10.el7_9.i686.rpm libwebp-debuginfo-0.3.0-10.el7_9.x86_64.rpm
Red Hat Enterprise Linux ComputeNode Optional (v. 7):
x86_64: libwebp-debuginfo-0.3.0-10.el7_9.i686.rpm libwebp-debuginfo-0.3.0-10.el7_9.x86_64.rpm libwebp-devel-0.3.0-10.el7_9.i686.rpm libwebp-devel-0.3.0-10.el7_9.x86_64.rpm libwebp-java-0.3.0-10.el7_9.x86_64.rpm libwebp-tools-0.3.0-10.el7_9.x86_64.rpm
Red Hat Enterprise Linux Server (v. 7):
Source: libwebp-0.3.0-10.el7_9.src.rpm
ppc64: libwebp-0.3.0-10.el7_9.ppc.rpm libwebp-0.3.0-10.el7_9.ppc64.rpm libwebp-debuginfo-0.3.0-10.el7_9.ppc.rpm libwebp-debuginfo-0.3.0-10.el7_9.ppc64.rpm
ppc64le: libwebp-0.3.0-10.el7_9.ppc64le.rpm libwebp-debuginfo-0.3.0-10.el7_9.ppc64le.rpm
s390x: libwebp-0.3.0-10.el7_9.s390.rpm libwebp-0.3.0-10.el7_9.s390x.rpm libwebp-debuginfo-0.3.0-10.el7_9.s390.rpm libwebp-debuginfo-0.3.0-10.el7_9.s390x.rpm
x86_64: libwebp-0.3.0-10.el7_9.i686.rpm libwebp-0.3.0-10.el7_9.x86_64.rpm libwebp-debuginfo-0.3.0-10.el7_9.i686.rpm libwebp-debuginfo-0.3.0-10.el7_9.x86_64.rpm
Red Hat Enterprise Linux Server Optional (v. 7):
ppc64: libwebp-debuginfo-0.3.0-10.el7_9.ppc.rpm libwebp-debuginfo-0.3.0-10.el7_9.ppc64.rpm libwebp-devel-0.3.0-10.el7_9.ppc.rpm libwebp-devel-0.3.0-10.el7_9.ppc64.rpm libwebp-java-0.3.0-10.el7_9.ppc64.rpm libwebp-tools-0.3.0-10.el7_9.ppc64.rpm
ppc64le: libwebp-debuginfo-0.3.0-10.el7_9.ppc64le.rpm libwebp-devel-0.3.0-10.el7_9.ppc64le.rpm libwebp-java-0.3.0-10.el7_9.ppc64le.rpm libwebp-tools-0.3.0-10.el7_9.ppc64le.rpm
s390x: libwebp-debuginfo-0.3.0-10.el7_9.s390.rpm libwebp-debuginfo-0.3.0-10.el7_9.s390x.rpm libwebp-devel-0.3.0-10.el7_9.s390.rpm libwebp-devel-0.3.0-10.el7_9.s390x.rpm libwebp-java-0.3.0-10.el7_9.s390x.rpm libwebp-tools-0.3.0-10.el7_9.s390x.rpm
x86_64: libwebp-debuginfo-0.3.0-10.el7_9.i686.rpm libwebp-debuginfo-0.3.0-10.el7_9.x86_64.rpm libwebp-devel-0.3.0-10.el7_9.i686.rpm libwebp-devel-0.3.0-10.el7_9.x86_64.rpm libwebp-java-0.3.0-10.el7_9.x86_64.rpm libwebp-tools-0.3.0-10.el7_9.x86_64.rpm
Red Hat Enterprise Linux Workstation (v. 7):
Source: libwebp-0.3.0-10.el7_9.src.rpm
x86_64: libwebp-0.3.0-10.el7_9.i686.rpm libwebp-0.3.0-10.el7_9.x86_64.rpm libwebp-debuginfo-0.3.0-10.el7_9.i686.rpm libwebp-debuginfo-0.3.0-10.el7_9.x86_64.rpm
Red Hat Enterprise Linux Workstation Optional (v. 7):
x86_64: libwebp-debuginfo-0.3.0-10.el7_9.i686.rpm libwebp-debuginfo-0.3.0-10.el7_9.x86_64.rpm libwebp-devel-0.3.0-10.el7_9.i686.rpm libwebp-devel-0.3.0-10.el7_9.x86_64.rpm libwebp-java-0.3.0-10.el7_9.x86_64.rpm libwebp-tools-0.3.0-10.el7_9.x86_64.rpm
These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
- References:
https://access.redhat.com/security/cve/CVE-2018-25011 https://access.redhat.com/security/cve/CVE-2020-36328 https://access.redhat.com/security/cve/CVE-2020-36329 https://access.redhat.com/security/updates/classification/#important
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2021 Red Hat, Inc. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1
iQIVAwUBYL4OxtzjgjWX9erEAQi1Yw//ZajpWKH7bKTBXifw2DXrc61fOReKCwR9 sQ/djSkMMo+hwhFNtqq9zHDmI81tuOzBRgzA0FzA6qeNZGzsJmNX/RrNgnep9um7 X08Dvb6+5VuHWBrrBv26wV5wGq/t2VKgGXSoJi6CDDDRlLn/RiAJzuZqhdhp3Ijn xBHIDIEYoNTYoDvbvZUVhY1kRKJ2sr3UxjcWPqDCNZdu51Z8ssW5up/Uh3NaY8yv iB7PIoIHrtBD0nGQcy5h4qE47wFbe9RdLTOaqGDAGaOrHWWT56eC72YnCYKMxO4K 8X9EXjhEmmH4a4Pl4dND7D1wiiOQe5kSA8IhYdgHVZQyo9WBJTD6g6C5IERwwjat s3Z7vhzA+/cLEo8+Jc5orRGoLArU5rOl4uqh64AEPaON9UB8bMOnqm24y+Ebyi0B S+zZ2kQ1FGeQIMnrjAer3OUcVnf26e6qNWBK+HCjdfmbhgtZxTtXyOKcM4lSFVcm LY8pLMWzZpcSCpYh15YtRRCWr4bJyX1UD8V3l2Zzek9zmFq5ogVX78KBYV3c4oWn ReVMDEpXb3bYoV/EsMk7WOaDBKM1eU2OjVp2e7r2Fnt8GESxSpZ1pKegkxXdPnmX EmPhXKZNnwh4Z4Aw2AYIsQVo9QTyvCnZjfjAy9WfIqbyg8OTGJOeQqQLlKsq6ddb YXjUcIgJv2g= =kWSg -----END PGP SIGNATURE-----
-- RHSA-announce mailing list RHSA-announce@redhat.com https://listman.redhat.com/mailman/listinfo/rhsa-announce
- -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512
Debian Security Advisory DSA-4930-1 security@debian.org https://www.debian.org/security/ Moritz Muehlenhoff June 10, 2021 https://www.debian.org/security/faq
Package : libwebp CVE ID : CVE-2018-25009 CVE-2018-25010 CVE-2018-25011 CVE-2018-25013 CVE-2018-25014 CVE-2020-36328 CVE-2020-36329 CVE-2020-36330 CVE-2020-36331 CVE-2020-36332
Multiple vulnerabilities were discovered in libwebp, the implementation of the WebP image format, which could result in denial of service, memory disclosure or potentially the execution of arbitrary code if malformed images are processed.
For the stable distribution (buster), these problems have been fixed in version 0.6.1-2+deb10u1.
We recommend that you upgrade your libwebp packages.
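Whether an installed package predates the fixed version above can be checked by comparing version strings. A minimal sketch, assuming a hypothetical installed version string and using GNU `sort -V` as a portable stand-in for `dpkg --compare-versions` (which is what you would actually use on a Debian system):

```shell
# Minimal sketch: decide whether a hypothetical installed libwebp version
# predates the fixed version from this advisory (0.6.1-2+deb10u1).
# On a real Debian system, prefer: dpkg --compare-versions "$installed" lt "$fixed"
installed="0.6.1-2"         # hypothetical installed version (assumption)
fixed="0.6.1-2+deb10u1"     # fixed version from DSA-4930-1

# sort -V version-sorts the two strings; the older version sorts first
# (a shorter prefix like 0.6.1-2 sorts before 0.6.1-2+deb10u1).
oldest=$(printf '%s\n%s\n' "$installed" "$fixed" | sort -V | head -n1)

if [ "$oldest" = "$installed" ] && [ "$installed" != "$fixed" ]; then
    echo "upgrade needed"
else
    echo "up to date"
fi
```

Note that `sort -V` does not implement the full Debian version comparison (it has no notion of epochs or the special `~` pre-release ordering), which is why `dpkg --compare-versions` is the correct tool on a live system.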
{ "@context": { "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#", "affected_products": { "@id": "https://www.variotdbs.pl/ref/affected_products" }, "configurations": { "@id": "https://www.variotdbs.pl/ref/configurations" }, "credits": { "@id": "https://www.variotdbs.pl/ref/credits" }, "cvss": { "@id": "https://www.variotdbs.pl/ref/cvss/" }, "description": { "@id": "https://www.variotdbs.pl/ref/description/" }, "exploit_availability": { "@id": "https://www.variotdbs.pl/ref/exploit_availability/" }, "external_ids": { "@id": "https://www.variotdbs.pl/ref/external_ids/" }, "iot": { "@id": "https://www.variotdbs.pl/ref/iot/" }, "iot_taxonomy": { "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/" }, "patch": { "@id": "https://www.variotdbs.pl/ref/patch/" }, "problemtype_data": { "@id": "https://www.variotdbs.pl/ref/problemtype_data/" }, "references": { "@id": "https://www.variotdbs.pl/ref/references/" }, "sources": { "@id": "https://www.variotdbs.pl/ref/sources/" }, "sources_release_date": { "@id": "https://www.variotdbs.pl/ref/sources_release_date/" }, "sources_update_date": { "@id": "https://www.variotdbs.pl/ref/sources_update_date/" }, "threat_type": { "@id": "https://www.variotdbs.pl/ref/threat_type/" }, "title": { "@id": "https://www.variotdbs.pl/ref/title/" }, "type": { "@id": "https://www.variotdbs.pl/ref/type/" } }, "@id": "https://www.variotdbs.pl/vuln/VAR-202105-1460", "affected_products": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/affected_products#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "model": "enterprise linux", "scope": "eq", "trust": 1.0, "vendor": "redhat", "version": "8.0" }, { "model": "libwebp", "scope": "lt", "trust": 1.0, "vendor": "webmproject", "version": "1.0.1" }, { "model": "ipados", "scope": "lt", "trust": 1.0, "vendor": "apple", 
"version": "14.7" }, { "model": "iphone os", "scope": "lt", "trust": 1.0, "vendor": "apple", "version": "14.7" }, { "model": "ontap select deploy administration utility", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "linux", "scope": "eq", "trust": 1.0, "vendor": "debian", "version": "10.0" }, { "model": "linux", "scope": "eq", "trust": 1.0, "vendor": "debian", "version": "9.0" }, { "model": "enterprise linux", "scope": "eq", "trust": 1.0, "vendor": "redhat", "version": "7.0" }, { "model": "libwebp", "scope": null, "trust": 0.8, "vendor": "the webm", "version": null }, { "model": "gnu/linux", "scope": null, "trust": 0.8, "vendor": "debian", "version": null }, { "model": "ontap select deploy administration utility", "scope": null, "trust": 0.8, "vendor": "netapp", "version": null }, { "model": "red hat enterprise linux", "scope": null, "trust": 0.8, "vendor": "\u30ec\u30c3\u30c9\u30cf\u30c3\u30c8", "version": null }, { "model": "ipados", "scope": null, "trust": 0.8, "vendor": "\u30a2\u30c3\u30d7\u30eb", "version": null }, { "model": "ios", "scope": null, "trust": 0.8, "vendor": "\u30a2\u30c3\u30d7\u30eb", "version": null } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2018-016581" }, { "db": "NVD", "id": "CVE-2020-36329" } ] }, "configurations": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/configurations#", "children": { "@container": "@list" }, "cpe_match": { "@container": "@list" }, "data": { "@container": "@list" }, "nodes": { "@container": "@list" } }, "data": [ { "CVE_data_version": "4.0", "nodes": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:webmproject:libwebp:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "1.0.1", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux:7.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux:8.0:*:*:*:*:*:*:*", "cpe_name": [], 
"vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:netapp:ontap_select_deploy_administration_utility:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:debian:debian_linux:9.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:debian:debian_linux:10.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:apple:iphone_os:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "14.7", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:apple:ipados:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "14.7", "vulnerable": true } ], "operator": "OR" } ] } ], "sources": [ { "db": "NVD", "id": "CVE-2020-36329" } ] }, "credits": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/credits#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Red Hat", "sources": [ { "db": "PACKETSTORM", "id": "163504" }, { "db": "PACKETSTORM", "id": "162998" }, { "db": "PACKETSTORM", "id": "163029" }, { "db": "PACKETSTORM", "id": "163058" }, { "db": "PACKETSTORM", "id": "163061" } ], "trust": 0.5 }, "cve": "CVE-2020-36329", "cvss": { "@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2" }, "cvssV3": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, "severity": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": "https://www.variotdbs.pl/ref/cvss/severity" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { 
"cvssV2": [ { "acInsufInfo": false, "accessComplexity": "LOW", "accessVector": "NETWORK", "authentication": "NONE", "author": "NVD", "availabilityImpact": "PARTIAL", "baseScore": 7.5, "confidentialityImpact": "PARTIAL", "exploitabilityScore": 10.0, "impactScore": 6.4, "integrityImpact": "PARTIAL", "obtainAllPrivilege": false, "obtainOtherPrivilege": false, "obtainUserPrivilege": false, "severity": "HIGH", "trust": 1.0, "userInteractionRequired": false, "vectorString": "AV:N/AC:L/Au:N/C:P/I:P/A:P", "version": "2.0" }, { "acInsufInfo": null, "accessComplexity": "Low", "accessVector": "Network", "authentication": "None", "author": "NVD", "availabilityImpact": "Partial", "baseScore": 7.5, "confidentialityImpact": "Partial", "exploitabilityScore": null, "id": "CVE-2020-36329", "impactScore": null, "integrityImpact": "Partial", "obtainAllPrivilege": null, "obtainOtherPrivilege": null, "obtainUserPrivilege": null, "severity": "High", "trust": 0.9, "userInteractionRequired": null, "vectorString": "AV:N/AC:L/Au:N/C:P/I:P/A:P", "version": "2.0" }, { "accessComplexity": "LOW", "accessVector": "NETWORK", "authentication": "NONE", "author": "VULHUB", "availabilityImpact": "PARTIAL", "baseScore": 7.5, "confidentialityImpact": "PARTIAL", "exploitabilityScore": 10.0, "id": "VHN-391908", "impactScore": 6.4, "integrityImpact": "PARTIAL", "severity": "HIGH", "trust": 0.1, "vectorString": "AV:N/AC:L/AU:N/C:P/I:P/A:P", "version": "2.0" } ], "cvssV3": [ { "attackComplexity": "LOW", "attackVector": "NETWORK", "author": "NVD", "availabilityImpact": "HIGH", "baseScore": 9.8, "baseSeverity": "CRITICAL", "confidentialityImpact": "HIGH", "exploitabilityScore": 3.9, "impactScore": 5.9, "integrityImpact": "HIGH", "privilegesRequired": "NONE", "scope": "UNCHANGED", "trust": 1.0, "userInteraction": "NONE", "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H", "version": "3.1" }, { "attackComplexity": "Low", "attackVector": "Network", "author": "NVD", "availabilityImpact": "High", 
"baseScore": 9.8, "baseSeverity": "Critical", "confidentialityImpact": "High", "exploitabilityScore": null, "id": "CVE-2020-36329", "impactScore": null, "integrityImpact": "High", "privilegesRequired": "None", "scope": "Unchanged", "trust": 0.8, "userInteraction": "None", "vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H", "version": "3.0" } ], "severity": [ { "author": "NVD", "id": "CVE-2020-36329", "trust": 1.8, "value": "CRITICAL" }, { "author": "CNNVD", "id": "CNNVD-202105-1393", "trust": 0.6, "value": "CRITICAL" }, { "author": "VULHUB", "id": "VHN-391908", "trust": 0.1, "value": "HIGH" }, { "author": "VULMON", "id": "CVE-2020-36329", "trust": 0.1, "value": "HIGH" } ] } ], "sources": [ { "db": "VULHUB", "id": "VHN-391908" }, { "db": "VULMON", "id": "CVE-2020-36329" }, { "db": "JVNDB", "id": "JVNDB-2018-016581" }, { "db": "CNNVD", "id": "CNNVD-202105-1393" }, { "db": "NVD", "id": "CVE-2020-36329" } ] }, "description": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "A flaw was found in libwebp in versions before 1.0.1. A use-after-free was found due to a thread being killed too early. The highest threat from this vulnerability is to data confidentiality and integrity as well as system availability. libwebp Is vulnerable to the use of freed memory.Information is obtained, information is tampered with, and service is disrupted (DoS) It may be put into a state. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\nAPPLE-SA-2021-07-21-1 iOS 14.7 and iPadOS 14.7\n\niOS 14.7 and iPadOS 14.7 addresses the following issues. \nInformation about the security content is also available at\nhttps://support.apple.com/HT212601. 
\n\niOS 14.7 released July 19, 2021; iPadOS 14.7 released July 21, 2021\n\nActionKit\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A shortcut may be able to bypass Internet permission\nrequirements\nDescription: An input validation issue was addressed with improved\ninput validation. \nCVE-2021-30763: Zachary Keffaber (@QuickUpdate5)\n\nAudio\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A local attacker may be able to cause unexpected application\ntermination or arbitrary code execution\nDescription: This issue was addressed with improved checks. \nCVE-2021-30781: tr3e\n\nAVEVideoEncoder\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: An application may be able to execute arbitrary code with\nkernel privileges\nDescription: A memory corruption issue was addressed with improved\nstate management. \nCVE-2021-30748: George Nosenko\n\nCoreAudio\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing a maliciously crafted audio file may lead to\narbitrary code execution\nDescription: A memory corruption issue was addressed with improved\nstate management. 
\nCVE-2021-30775: JunDong Xie of Ant Security Light-Year Lab\n\nCoreAudio\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Playing a malicious audio file may lead to an unexpected\napplication termination\nDescription: A logic issue was addressed with improved validation. \nCVE-2021-30776: JunDong Xie of Ant Security Light-Year Lab\n\nCoreGraphics\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Opening a maliciously crafted PDF file may lead to an\nunexpected application termination or arbitrary code execution\nDescription: A race condition was addressed with improved state\nhandling. \nCVE-2021-30786: ryuzaki\n\nCoreText\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing a maliciously crafted font file may lead to\narbitrary code execution\nDescription: An out-of-bounds read was addressed with improved input\nvalidation. \nCVE-2021-30789: Mickey Jin (@patch1t) of Trend Micro, Sunglin of\nKnownsec 404 team\n\nCrash Reporter\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A malicious application may be able to gain root privileges\nDescription: A logic issue was addressed with improved validation. 
\nCVE-2021-30774: Yizhuo Wang of Group of Software Security In\nProgress (G.O.S.S.I.P) at Shanghai Jiao Tong University\n\nCVMS\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A malicious application may be able to gain root privileges\nDescription: An out-of-bounds write issue was addressed with improved\nbounds checking. \nCVE-2021-30780: Tim Michaud(@TimGMichaud) of Zoom Video\nCommunications\n\ndyld\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A sandboxed process may be able to circumvent sandbox\nrestrictions\nDescription: A logic issue was addressed with improved validation. \nCVE-2021-30768: Linus Henze (pinauten.de)\n\nFind My\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A malicious application may be able to access Find My data\nDescription: A permissions issue was addressed with improved\nvalidation. \nCVE-2021-30804: Csaba Fitzl (@theevilbit) of Offensive Security\n\nFontParser\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing a maliciously crafted font file may lead to\narbitrary code execution\nDescription: An integer overflow was addressed through improved input\nvalidation. 
\nCVE-2021-30760: Sunglin of Knownsec 404 team\n\nFontParser\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing a maliciously crafted tiff file may lead to a\ndenial-of-service or potentially disclose memory contents\nDescription: This issue was addressed with improved checks. \nCVE-2021-30788: tr3e working with Trend Micro Zero Day Initiative\n\nFontParser\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing a maliciously crafted font file may lead to\narbitrary code execution\nDescription: A stack overflow was addressed with improved input\nvalidation. \nCVE-2021-30759: hjy79425575 working with Trend Micro Zero Day\nInitiative\n\nIdentity Service\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A malicious application may be able to bypass code signing\nchecks\nDescription: An issue in code signature validation was addressed with\nimproved checks. \nCVE-2021-30773: Linus Henze (pinauten.de)\n\nImage Processing\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: A use after free issue was addressed with improved\nmemory management. 
\nCVE-2021-30802: Matthew Denton of Google Chrome Security\n\nImageIO\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing a maliciously crafted image may lead to arbitrary\ncode execution\nDescription: This issue was addressed with improved checks. \nCVE-2021-30779: Jzhu, Ye Zhang(@co0py_Cat) of Baidu Security\n\nImageIO\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing a maliciously crafted image may lead to arbitrary\ncode execution\nDescription: A buffer overflow was addressed with improved bounds\nchecking. \nCVE-2021-30785: CFF of Topsec Alpha Team, Mickey Jin (@patch1t) of\nTrend Micro\n\nKernel\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A malicious attacker with arbitrary read and write capability\nmay be able to bypass Pointer Authentication\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2021-30769: Linus Henze (pinauten.de)\n\nKernel\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: An attacker that has already achieved kernel code execution\nmay be able to bypass kernel memory mitigations\nDescription: A logic issue was addressed with improved validation. 
\nCVE-2021-30770: Linus Henze (pinauten.de)\n\nlibxml2\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A remote attacker may be able to cause arbitrary code\nexecution\nDescription: This issue was addressed with improved checks. \nCVE-2021-3518\n\nMeasure\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Multiple issues in libwebp\nDescription: Multiple issues were addressed by updating to version\n1.2.0. \nCVE-2018-25010\nCVE-2018-25011\nCVE-2018-25014\nCVE-2020-36328\nCVE-2020-36329\nCVE-2020-36330\nCVE-2020-36331\n\nModel I/O\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing a maliciously crafted image may lead to a denial\nof service\nDescription: A logic issue was addressed with improved validation. \nCVE-2021-30796: Mickey Jin (@patch1t) of Trend Micro\n\nModel I/O\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing a maliciously crafted image may lead to arbitrary\ncode execution\nDescription: An out-of-bounds write was addressed with improved input\nvalidation. \nCVE-2021-30792: Anonymous working with Trend Micro Zero Day\nInitiative\n\nModel I/O\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing a maliciously crafted file may disclose user\ninformation\nDescription: An out-of-bounds read was addressed with improved bounds\nchecking. 
\nCVE-2021-30791: Anonymous working with Trend Micro Zero Day\nInitiative\n\nTCC\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A malicious application may be able to bypass certain Privacy\npreferences\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2021-30798: Mickey Jin (@patch1t) of Trend Micro\n\nWebKit\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: A type confusion issue was addressed with improved state\nhandling. \nCVE-2021-30758: Christoph Guttandin of Media Codings\n\nWebKit\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: A use after free issue was addressed with improved\nmemory management. \nCVE-2021-30795: Sergei Glazunov of Google Project Zero\n\nWebKit\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing maliciously crafted web content may lead to code\nexecution\nDescription: This issue was addressed with improved checks. 
\nCVE-2021-30797: Ivan Fratric of Google Project Zero\n\nWebKit\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: Multiple memory corruption issues were addressed with\nimproved memory handling. \nCVE-2021-30799: Sergei Glazunov of Google Project Zero\n\nWi-Fi\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Joining a malicious Wi-Fi network may result in a denial of\nservice or arbitrary code execution\nDescription: This issue was addressed with improved checks. \nCVE-2021-30800: vm_call, Nozhdar Abdulkhaleq Shukri\n\nAdditional recognition\n\nAssets\nWe would like to acknowledge Cees Elzinga for their assistance. \n\nCoreText\nWe would like to acknowledge Mickey Jin (@patch1t) of Trend Micro for\ntheir assistance. \n\nSafari\nWe would like to acknowledge an anonymous researcher for their\nassistance. \n\nSandbox\nWe would like to acknowledge Csaba Fitzl (@theevilbit) of Offensive\nSecurity for their assistance. \n\nInstallation note:\n\nThis update is available through iTunes and Software Update on your\niOS device, and will not appear in your computer\u0027s Software Update\napplication, or in the Apple Downloads site. Make sure you have an\nInternet connection and have installed the latest version of iTunes\nfrom https://www.apple.com/itunes/\n\niTunes and Software Update on the device will automatically check\nApple\u0027s update server on its weekly schedule. When an update is\ndetected, it is downloaded and the option to be installed is\npresented to the user when the iOS device is docked. We recommend\napplying the update immediately if possible. 
Selecting Don\u0027t Install\nwill present the option the next time you connect your iOS device. \nThe automatic update process may take up to a week depending on the\nday that iTunes or the device checks for updates. You may manually\nobtain the update via the Check for Updates button within iTunes, or\nthe Software Update on your device. \n\nTo check that the iPhone, iPod touch, or iPad has been updated:\n* Navigate to Settings\n* Select General\n* Select About\n* The version after applying this update will be \"14.7\"\n\nInformation will also be posted to the Apple Security Updates\nweb site: https://support.apple.com/kb/HT201222\n\nThis message is signed with Apple\u0027s Product Security PGP key,\nand details are available at:\nhttps://www.apple.com/support/security/pgp/\n\n-----BEGIN PGP SIGNATURE-----\n\niQIzBAEBCAAdFiEEbURczHs1TP07VIfuZcsbuWJ6jjAFAmD4r8YACgkQZcsbuWJ6\njjB5LBAAkEy25fNpo8rg42bsyJwWsSQQxPN79JFxQ6L8tqdsM+MZk86dUKtsRQ47\nmxarMf4uBwiIOtrGSCGHLIxXAzLqPY47NDhO+ls0dVxGMETkoR/287AeLnw2ITh3\nDM0H/pco4hRhPh8neYTMjNPMAgkepx+r7IqbaHWapn42nRC4/2VkEtVGltVDLs3L\nK0UQP0cjy2w9KvRF33H3uKNCaCTJrVkDBLKWC7rPPpomwp3bfmbQHjs0ixV5Y8l5\n3MfNmCuhIt34zAjVELvbE/PUXgkmsECbXHNZOct7ZLAbceneVKtSmynDtoEN0ajM\nJiJ6j+FCtdfB3xHk3cHqB6sQZm7fDxdK3z91MZvSZwwmdhJeHD/TxcItRlHNOYA1\nFSi0Q954DpIqz3Fs4DGE7Vwz0g5+o5qup8cnw9oLXBdqZwWANuLsQlHlioPbcDhl\nr1DmwtghmDYFUeSMnzHu/iuRepEju+BRMS3ybCm5j+I3kyvAV8pyvqNNRLfJn+w+\nWl/lwXTtXbgsNPR7WJCBJffxB0gOGZaIG1blSGCY89t2if0vD95R5sRsrnaxuqWc\nqmtRdBfbmjxk/G+6t1sd4wFglTNovHiLIHXh17cwdIWMB35yFs7VA35833/rF4Oo\njOF1D12o58uAewxAsK+cTixe7I9U5Awkad2Jz19V3qHnRWGqtVg\\x8e1h\n-----END PGP SIGNATURE-----\n\n\n. Description:\n\nRed Hat OpenShift Container Platform is Red Hat\u0027s cloud computing\nKubernetes application platform solution designed for on-premise or private\ncloud deployments. \n\nAll OpenShift Container Platform 4.6 users are advised to upgrade to these\nupdated packages and images when they are available in the appropriate\nrelease channel. 
To check for available updates, use the OpenShift Console\nor the CLI oc command. Instructions for upgrading a cluster are available\nat\nhttps://docs.openshift.com/container-platform/4.6/updating/updating-cluster\n- -between-minor.html#understanding-upgrade-channels_updating-cluster-between\n- -minor\n\n3. Solution:\n\nFor OpenShift Container Platform 4.6 see the following documentation, which\nwill be updated shortly for this release, for important instructions on how\nto upgrade your cluster and fully apply this asynchronous errata update:\n\nhttps://docs.openshift.com/container-platform/4.6/release_notes/ocp-4-6-rel\nease-notes.html\n\nDetails on how to access this content are available at\nhttps://docs.openshift.com/container-platform/4.6/updating/updating-cluster\n- -cli.html\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n1813344 - CVE-2020-7598 nodejs-minimist: prototype pollution allows adding or modifying properties of Object.prototype using a constructor or __proto__ payload\n1979134 - Placeholder bug for OCP 4.6.0 extras release\n\n5. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n=====================================================================\n Red Hat Security Advisory\n\nSynopsis: Important: libwebp security update\nAdvisory ID: RHSA-2021:2260-01\nProduct: Red Hat Enterprise Linux\nAdvisory URL: https://access.redhat.com/errata/RHSA-2021:2260\nIssue date: 2021-06-07\nCVE Names: CVE-2018-25011 CVE-2020-36328 CVE-2020-36329 \n=====================================================================\n\n1. Summary:\n\nAn update for libwebp is now available for Red Hat Enterprise Linux 7. \n\nRed Hat Product Security has rated this update as having a security impact\nof Important. A Common Vulnerability Scoring System (CVSS) base score,\nwhich gives a detailed severity rating, is available for each vulnerability\nfrom the CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRed Hat Enterprise Linux Client (v. 
7) - x86_64\nRed Hat Enterprise Linux Client Optional (v. 7) - x86_64\nRed Hat Enterprise Linux ComputeNode (v. 7) - x86_64\nRed Hat Enterprise Linux ComputeNode Optional (v. 7) - x86_64\nRed Hat Enterprise Linux Server (v. 7) - ppc64, ppc64le, s390x, x86_64\nRed Hat Enterprise Linux Server Optional (v. 7) - ppc64, ppc64le, s390x, x86_64\nRed Hat Enterprise Linux Workstation (v. 7) - x86_64\nRed Hat Enterprise Linux Workstation Optional (v. 7) - x86_64\n\n3. Description:\n\nThe libwebp packages provide a library and tools for the WebP graphics\nformat. WebP is an image format with a lossy compression of digital\nphotographic images. WebP consists of a codec based on the VP8 format, and\na container based on the Resource Interchange File Format (RIFF). \nWebmasters, web developers and browser developers can use WebP to compress,\narchive, and distribute digital images more efficiently. \n\nSecurity Fix(es):\n\n* libwebp: heap-based buffer overflow in PutLE16() (CVE-2018-25011)\n\n* libwebp: heap-based buffer overflow in WebPDecode*Into functions\n(CVE-2020-36328)\n\n* libwebp: use-after-free in EmitFancyRGB() in dec/io_dec.c\n(CVE-2020-36329)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\n4. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n1956829 - CVE-2020-36328 libwebp: heap-based buffer overflow in WebPDecode*Into functions\n1956843 - CVE-2020-36329 libwebp: use-after-free in EmitFancyRGB() in dec/io_dec.c\n1956919 - CVE-2018-25011 libwebp: heap-based buffer overflow in PutLE16()\n\n6. Package List:\n\nRed Hat Enterprise Linux Client (v. 
7):\n\nSource:\nlibwebp-0.3.0-10.el7_9.src.rpm\n\nx86_64:\nlibwebp-0.3.0-10.el7_9.i686.rpm\nlibwebp-0.3.0-10.el7_9.x86_64.rpm\nlibwebp-debuginfo-0.3.0-10.el7_9.i686.rpm\nlibwebp-debuginfo-0.3.0-10.el7_9.x86_64.rpm\n\nRed Hat Enterprise Linux Client Optional (v. 7):\n\nx86_64:\nlibwebp-debuginfo-0.3.0-10.el7_9.i686.rpm\nlibwebp-debuginfo-0.3.0-10.el7_9.x86_64.rpm\nlibwebp-devel-0.3.0-10.el7_9.i686.rpm\nlibwebp-devel-0.3.0-10.el7_9.x86_64.rpm\nlibwebp-java-0.3.0-10.el7_9.x86_64.rpm\nlibwebp-tools-0.3.0-10.el7_9.x86_64.rpm\n\nRed Hat Enterprise Linux ComputeNode (v. 7):\n\nSource:\nlibwebp-0.3.0-10.el7_9.src.rpm\n\nx86_64:\nlibwebp-0.3.0-10.el7_9.i686.rpm\nlibwebp-0.3.0-10.el7_9.x86_64.rpm\nlibwebp-debuginfo-0.3.0-10.el7_9.i686.rpm\nlibwebp-debuginfo-0.3.0-10.el7_9.x86_64.rpm\n\nRed Hat Enterprise Linux ComputeNode Optional (v. 7):\n\nx86_64:\nlibwebp-debuginfo-0.3.0-10.el7_9.i686.rpm\nlibwebp-debuginfo-0.3.0-10.el7_9.x86_64.rpm\nlibwebp-devel-0.3.0-10.el7_9.i686.rpm\nlibwebp-devel-0.3.0-10.el7_9.x86_64.rpm\nlibwebp-java-0.3.0-10.el7_9.x86_64.rpm\nlibwebp-tools-0.3.0-10.el7_9.x86_64.rpm\n\nRed Hat Enterprise Linux Server (v. 7):\n\nSource:\nlibwebp-0.3.0-10.el7_9.src.rpm\n\nppc64:\nlibwebp-0.3.0-10.el7_9.ppc.rpm\nlibwebp-0.3.0-10.el7_9.ppc64.rpm\nlibwebp-debuginfo-0.3.0-10.el7_9.ppc.rpm\nlibwebp-debuginfo-0.3.0-10.el7_9.ppc64.rpm\n\nppc64le:\nlibwebp-0.3.0-10.el7_9.ppc64le.rpm\nlibwebp-debuginfo-0.3.0-10.el7_9.ppc64le.rpm\n\ns390x:\nlibwebp-0.3.0-10.el7_9.s390.rpm\nlibwebp-0.3.0-10.el7_9.s390x.rpm\nlibwebp-debuginfo-0.3.0-10.el7_9.s390.rpm\nlibwebp-debuginfo-0.3.0-10.el7_9.s390x.rpm\n\nx86_64:\nlibwebp-0.3.0-10.el7_9.i686.rpm\nlibwebp-0.3.0-10.el7_9.x86_64.rpm\nlibwebp-debuginfo-0.3.0-10.el7_9.i686.rpm\nlibwebp-debuginfo-0.3.0-10.el7_9.x86_64.rpm\n\nRed Hat Enterprise Linux Server Optional (v. 
7):\n\nppc64:\nlibwebp-debuginfo-0.3.0-10.el7_9.ppc.rpm\nlibwebp-debuginfo-0.3.0-10.el7_9.ppc64.rpm\nlibwebp-devel-0.3.0-10.el7_9.ppc.rpm\nlibwebp-devel-0.3.0-10.el7_9.ppc64.rpm\nlibwebp-java-0.3.0-10.el7_9.ppc64.rpm\nlibwebp-tools-0.3.0-10.el7_9.ppc64.rpm\n\nppc64le:\nlibwebp-debuginfo-0.3.0-10.el7_9.ppc64le.rpm\nlibwebp-devel-0.3.0-10.el7_9.ppc64le.rpm\nlibwebp-java-0.3.0-10.el7_9.ppc64le.rpm\nlibwebp-tools-0.3.0-10.el7_9.ppc64le.rpm\n\ns390x:\nlibwebp-debuginfo-0.3.0-10.el7_9.s390.rpm\nlibwebp-debuginfo-0.3.0-10.el7_9.s390x.rpm\nlibwebp-devel-0.3.0-10.el7_9.s390.rpm\nlibwebp-devel-0.3.0-10.el7_9.s390x.rpm\nlibwebp-java-0.3.0-10.el7_9.s390x.rpm\nlibwebp-tools-0.3.0-10.el7_9.s390x.rpm\n\nx86_64:\nlibwebp-debuginfo-0.3.0-10.el7_9.i686.rpm\nlibwebp-debuginfo-0.3.0-10.el7_9.x86_64.rpm\nlibwebp-devel-0.3.0-10.el7_9.i686.rpm\nlibwebp-devel-0.3.0-10.el7_9.x86_64.rpm\nlibwebp-java-0.3.0-10.el7_9.x86_64.rpm\nlibwebp-tools-0.3.0-10.el7_9.x86_64.rpm\n\nRed Hat Enterprise Linux Workstation (v. 7):\n\nSource:\nlibwebp-0.3.0-10.el7_9.src.rpm\n\nx86_64:\nlibwebp-0.3.0-10.el7_9.i686.rpm\nlibwebp-0.3.0-10.el7_9.x86_64.rpm\nlibwebp-debuginfo-0.3.0-10.el7_9.i686.rpm\nlibwebp-debuginfo-0.3.0-10.el7_9.x86_64.rpm\n\nRed Hat Enterprise Linux Workstation Optional (v. 7):\n\nx86_64:\nlibwebp-debuginfo-0.3.0-10.el7_9.i686.rpm\nlibwebp-debuginfo-0.3.0-10.el7_9.x86_64.rpm\nlibwebp-devel-0.3.0-10.el7_9.i686.rpm\nlibwebp-devel-0.3.0-10.el7_9.x86_64.rpm\nlibwebp-java-0.3.0-10.el7_9.x86_64.rpm\nlibwebp-tools-0.3.0-10.el7_9.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. References:\n\nhttps://access.redhat.com/security/cve/CVE-2018-25011\nhttps://access.redhat.com/security/cve/CVE-2020-36328\nhttps://access.redhat.com/security/cve/CVE-2020-36329\nhttps://access.redhat.com/security/updates/classification/#important\n\n8. 
Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2021 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYL4OxtzjgjWX9erEAQi1Yw//ZajpWKH7bKTBXifw2DXrc61fOReKCwR9\nsQ/djSkMMo+hwhFNtqq9zHDmI81tuOzBRgzA0FzA6qeNZGzsJmNX/RrNgnep9um7\nX08Dvb6+5VuHWBrrBv26wV5wGq/t2VKgGXSoJi6CDDDRlLn/RiAJzuZqhdhp3Ijn\nxBHIDIEYoNTYoDvbvZUVhY1kRKJ2sr3UxjcWPqDCNZdu51Z8ssW5up/Uh3NaY8yv\niB7PIoIHrtBD0nGQcy5h4qE47wFbe9RdLTOaqGDAGaOrHWWT56eC72YnCYKMxO4K\n8X9EXjhEmmH4a4Pl4dND7D1wiiOQe5kSA8IhYdgHVZQyo9WBJTD6g6C5IERwwjat\ns3Z7vhzA+/cLEo8+Jc5orRGoLArU5rOl4uqh64AEPaON9UB8bMOnqm24y+Ebyi0B\nS+zZ2kQ1FGeQIMnrjAer3OUcVnf26e6qNWBK+HCjdfmbhgtZxTtXyOKcM4lSFVcm\nLY8pLMWzZpcSCpYh15YtRRCWr4bJyX1UD8V3l2Zzek9zmFq5ogVX78KBYV3c4oWn\nReVMDEpXb3bYoV/EsMk7WOaDBKM1eU2OjVp2e7r2Fnt8GESxSpZ1pKegkxXdPnmX\nEmPhXKZNnwh4Z4Aw2AYIsQVo9QTyvCnZjfjAy9WfIqbyg8OTGJOeQqQLlKsq6ddb\nYXjUcIgJv2g=\n=kWSg\n-----END PGP SIGNATURE-----\n\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. 8) - aarch64, ppc64le, s390x, x86_64\n\n3. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA512\n\n- -------------------------------------------------------------------------\nDebian Security Advisory DSA-4930-1 security@debian.org\nhttps://www.debian.org/security/ Moritz Muehlenhoff\nJune 10, 2021 https://www.debian.org/security/faq\n- -------------------------------------------------------------------------\n\nPackage : libwebp\nCVE ID : CVE-2018-25009 CVE-2018-25010 CVE-2018-25011 CVE-2018-25013 \n CVE-2018-25014 CVE-2020-36328 CVE-2020-36329 CVE-2020-36330 \n CVE-2020-36331 CVE-2020-36332\n\nMultiple vulnerabilities were discovered in libwebp, the implementation\nof the WebP image format, which could result in denial of service, memory\ndisclosure or potentially the execution of arbitrary code if malformed\nimages are processed. 
\n\nFor the stable distribution (buster), these problems have been fixed in\nversion 0.6.1-2+deb10u1. \n\nWe recommend that you upgrade your libwebp packages", "sources": [ { "db": "NVD", "id": "CVE-2020-36329" }, { "db": "JVNDB", "id": "JVNDB-2018-016581" }, { "db": "VULHUB", "id": "VHN-391908" }, { "db": "VULMON", "id": "CVE-2020-36329" }, { "db": "PACKETSTORM", "id": "163645" }, { "db": "PACKETSTORM", "id": "163504" }, { "db": "PACKETSTORM", "id": "162998" }, { "db": "PACKETSTORM", "id": "163029" }, { "db": "PACKETSTORM", "id": "163058" }, { "db": "PACKETSTORM", "id": "163061" }, { "db": "PACKETSTORM", "id": "169076" } ], "trust": 2.43 }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "NVD", "id": "CVE-2020-36329", "trust": 4.1 }, { "db": "PACKETSTORM", "id": "163058", "trust": 0.8 }, { "db": "PACKETSTORM", "id": "163504", "trust": 0.8 }, { "db": "PACKETSTORM", "id": "162998", "trust": 0.8 }, { "db": "JVNDB", "id": "JVNDB-2018-016581", "trust": 0.8 }, { "db": "PACKETSTORM", "id": "163028", "trust": 0.7 }, { "db": "CNNVD", "id": "CNNVD-202105-1393", "trust": 0.7 }, { "db": "PACKETSTORM", "id": "163645", "trust": 0.7 }, { "db": "CS-HELP", "id": "SB2021072216", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2021061420", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2021060725", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2021060939", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2021071517", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.1965", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.2102", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.1880", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.1959", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.2485.2", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.2388", "trust": 0.6 }, { "db": "AUSCERT", "id": 
"ESB-2021.2036", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.2070", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.1914", "trust": 0.6 }, { "db": "PACKETSTORM", "id": "163061", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "163029", "trust": 0.2 }, { "db": "VULHUB", "id": "VHN-391908", "trust": 0.1 }, { "db": "VULMON", "id": "CVE-2020-36329", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "169076", "trust": 0.1 } ], "sources": [ { "db": "VULHUB", "id": "VHN-391908" }, { "db": "VULMON", "id": "CVE-2020-36329" }, { "db": "JVNDB", "id": "JVNDB-2018-016581" }, { "db": "PACKETSTORM", "id": "163645" }, { "db": "PACKETSTORM", "id": "163504" }, { "db": "PACKETSTORM", "id": "162998" }, { "db": "PACKETSTORM", "id": "163029" }, { "db": "PACKETSTORM", "id": "163058" }, { "db": "PACKETSTORM", "id": "163061" }, { "db": "PACKETSTORM", "id": "169076" }, { "db": "CNNVD", "id": "CNNVD-202105-1393" }, { "db": "NVD", "id": "CVE-2020-36329" } ] }, "id": "VAR-202105-1460", "iot": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": true, "sources": [ { "db": "VULHUB", "id": "VHN-391908" } ], "trust": 0.01 }, "last_update_date": "2024-07-23T20:44:13.974000Z", "patch": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/patch#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "title": "Bug\u00a01956843", "trust": 0.8, "url": "https://lists.debian.org/debian-lts-announce/2021/06/msg00005.html" }, { "title": "libwebp Remediation of resource management error vulnerabilities", "trust": 0.6, "url": "http://www.cnnvd.org.cn/web/xxk/bdxqbyid.tag?id=151884" }, { "title": "Debian Security Advisories: DSA-4930-1 libwebp -- security update", "trust": 0.1, "url": 
"https://vulmon.com/vendoradvisory?qidtp=debian_security_advisories\u0026qid=6dad0021173658916444dfc89f8d2495" } ], "sources": [ { "db": "VULMON", "id": "CVE-2020-36329" }, { "db": "JVNDB", "id": "JVNDB-2018-016581" }, { "db": "CNNVD", "id": "CNNVD-202105-1393" } ] }, "problemtype_data": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "problemtype": "CWE-416", "trust": 1.1 }, { "problemtype": "Use of freed memory (CWE-416) [NVD Evaluation ]", "trust": 0.8 } ], "sources": [ { "db": "VULHUB", "id": "VHN-391908" }, { "db": "JVNDB", "id": "JVNDB-2018-016581" }, { "db": "NVD", "id": "CVE-2020-36329" } ] }, "references": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/references#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 2.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36329" }, { "trust": 1.9, "url": "https://www.debian.org/security/2021/dsa-4930" }, { "trust": 1.8, "url": "https://support.apple.com/kb/ht212601" }, { "trust": 1.8, "url": "https://bugzilla.redhat.com/show_bug.cgi?id=1956843" }, { "trust": 1.8, "url": "https://lists.debian.org/debian-lts-announce/2021/06/msg00005.html" }, { "trust": 1.8, "url": "https://lists.debian.org/debian-lts-announce/2021/06/msg00006.html" }, { "trust": 1.7, "url": "https://security.netapp.com/advisory/ntap-20211112-0001/" }, { "trust": 1.7, "url": "http://seclists.org/fulldisclosure/2021/jul/54" }, { "trust": 0.7, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36328" }, { "trust": 0.7, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25011" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.1959" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/163028/red-hat-security-advisory-2021-2328-01.html" }, { "trust": 
0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2021060725" }, { "trust": 0.6, "url": "https://vigilance.fr/vulnerability/libwebp-five-vulnerabilities-35580" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.2485.2" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.1965" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/163504/red-hat-security-advisory-2021-2643-01.html" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2021072216" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/162998/red-hat-security-advisory-2021-2260-01.html" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.1914" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/163058/red-hat-security-advisory-2021-2365-01.html" }, { "trust": 0.6, "url": "https://support.apple.com/en-us/ht212601" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2021060939" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.1880" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2021061420" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2021071517" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/163645/apple-security-advisory-2021-07-21-1.html" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.2036" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.2102" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.2388" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.2070" }, { "trust": 0.5, "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2020-36329" }, { "trust": 0.5, "url": "https://access.redhat.com/security/team/contact/" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2020-36328" }, { "trust": 0.5, 
"url": "https://access.redhat.com/security/cve/cve-2018-25011" }, { "trust": 0.5, "url": "https://bugzilla.redhat.com/):" }, { "trust": 0.4, "url": "https://access.redhat.com/security/updates/classification/#important" }, { "trust": 0.4, "url": "https://access.redhat.com/articles/11258" }, { "trust": 0.4, "url": "https://access.redhat.com/security/team/key/" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36331" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25010" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25014" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36330" }, { "trust": 0.1, "url": "https://cwe.mitre.org/data/definitions/416.html" }, { "trust": 0.1, "url": "https://nvd.nist.gov" }, { "trust": 0.1, "url": "https://support.apple.com/ht212601." }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30768" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30781" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30788" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30773" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30776" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30780" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30759" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30789" }, { "trust": 0.1, "url": "https://www.apple.com/support/security/pgp/" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30786" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30775" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30748" }, { "trust": 0.1, "url": "https://www.apple.com/itunes/" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30779" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30758" }, { "trust": 0.1, "url": 
"https://nvd.nist.gov/vuln/detail/cve-2021-30774" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30763" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30760" }, { "trust": 0.1, "url": "https://support.apple.com/kb/ht201222" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30770" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30769" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30785" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3583" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-7598" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.6/release_notes/ocp-4-6-rel" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3570" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhba-2021:2641" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-7598" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:2643" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3570" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3583" }, { "trust": 0.1, "url": "https://access.redhat.com/security/updates/classification/#moderate" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.6/updating/updating-cluster" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:2260" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:2354" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:2365" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:2364" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36332" }, { "trust": 0.1, "url": "https://www.debian.org/security/faq" }, { "trust": 0.1, "url": "https://security-tracker.debian.org/tracker/libwebp" }, { "trust": 0.1, "url": 
"https://nvd.nist.gov/vuln/detail/cve-2018-25013" }, { "trust": 0.1, "url": "https://www.debian.org/security/" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25009" } ], "sources": [ { "db": "VULHUB", "id": "VHN-391908" }, { "db": "VULMON", "id": "CVE-2020-36329" }, { "db": "JVNDB", "id": "JVNDB-2018-016581" }, { "db": "PACKETSTORM", "id": "163645" }, { "db": "PACKETSTORM", "id": "163504" }, { "db": "PACKETSTORM", "id": "162998" }, { "db": "PACKETSTORM", "id": "163029" }, { "db": "PACKETSTORM", "id": "163058" }, { "db": "PACKETSTORM", "id": "163061" }, { "db": "PACKETSTORM", "id": "169076" }, { "db": "CNNVD", "id": "CNNVD-202105-1393" }, { "db": "NVD", "id": "CVE-2020-36329" } ] }, "sources": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "VULHUB", "id": "VHN-391908" }, { "db": "VULMON", "id": "CVE-2020-36329" }, { "db": "JVNDB", "id": "JVNDB-2018-016581" }, { "db": "PACKETSTORM", "id": "163645" }, { "db": "PACKETSTORM", "id": "163504" }, { "db": "PACKETSTORM", "id": "162998" }, { "db": "PACKETSTORM", "id": "163029" }, { "db": "PACKETSTORM", "id": "163058" }, { "db": "PACKETSTORM", "id": "163061" }, { "db": "PACKETSTORM", "id": "169076" }, { "db": "CNNVD", "id": "CNNVD-202105-1393" }, { "db": "NVD", "id": "CVE-2020-36329" } ] }, "sources_release_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2021-05-21T00:00:00", "db": "VULHUB", "id": "VHN-391908" }, { "date": "2021-05-21T00:00:00", "db": "VULMON", "id": "CVE-2020-36329" }, { "date": "2022-01-27T00:00:00", "db": "JVNDB", "id": "JVNDB-2018-016581" }, { "date": "2021-07-23T15:29:39", "db": "PACKETSTORM", "id": "163645" }, { "date": "2021-07-14T15:29:37", "db": "PACKETSTORM", "id": "163504" }, { "date": "2021-06-07T13:58:06", "db": "PACKETSTORM", "id": "162998" }, { "date": "2021-06-09T13:22:14", "db": "PACKETSTORM", "id": 
"163029" }, { "date": "2021-06-10T13:39:19", "db": "PACKETSTORM", "id": "163058" }, { "date": "2021-06-10T13:42:06", "db": "PACKETSTORM", "id": "163061" }, { "date": "2021-06-28T19:12:00", "db": "PACKETSTORM", "id": "169076" }, { "date": "2021-05-21T00:00:00", "db": "CNNVD", "id": "CNNVD-202105-1393" }, { "date": "2021-05-21T17:15:08.313000", "db": "NVD", "id": "CVE-2020-36329" } ] }, "sources_update_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2023-01-09T00:00:00", "db": "VULHUB", "id": "VHN-391908" }, { "date": "2021-07-23T00:00:00", "db": "VULMON", "id": "CVE-2020-36329" }, { "date": "2022-01-27T09:03:00", "db": "JVNDB", "id": "JVNDB-2018-016581" }, { "date": "2022-03-08T00:00:00", "db": "CNNVD", "id": "CNNVD-202105-1393" }, { "date": "2023-01-09T16:41:59.350000", "db": "NVD", "id": "CVE-2020-36329" } ] }, "threat_type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/threat_type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "remote", "sources": [ { "db": "CNNVD", "id": "CNNVD-202105-1393" } ], "trust": 0.6 }, "title": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/title#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "libwebp\u00a0 Vulnerabilities in the use of freed memory", "sources": [ { "db": "JVNDB", "id": "JVNDB-2018-016581" } ], "trust": 0.8 }, "type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "resource management error", "sources": [ { "db": "CNNVD", "id": "CNNVD-202105-1393" } ], "trust": 0.6 } }
var-201902-0192
Vulnerability from variot
If an application encounters a fatal protocol error and then calls SSL_shutdown() twice (once to send a close_notify, and once to receive one) then OpenSSL can respond differently to the calling application if a 0 byte record is received with invalid padding compared to if a 0 byte record is received with an invalid MAC. If the application then behaves differently based on that in a way that is detectable to the remote peer, then this amounts to a padding oracle that could be used to decrypt data. In order for this to be exploitable, "non-stitched" ciphersuites must be in use. Stitched ciphersuites are optimised implementations of certain commonly used ciphersuites. Also, the application must call SSL_shutdown() twice even if a protocol error has occurred (applications should not do this, but some do anyway). Fixed in OpenSSL 1.0.2r (affected 1.0.2-1.0.2q). OpenSSL contains an information disclosure vulnerability. Information may be obtained.
The product supports a variety of encryption algorithms, including symmetric ciphers, hash algorithms, secure hash algorithms, etc. A vulnerability in OpenSSL could allow an unauthenticated, remote malicious user to access sensitive information on a targeted system. An attacker who is able to perform man-in-the-middle attacks could exploit the vulnerability by persuading a user to access a link that submits malicious input to the affected software. A successful exploit could allow the malicious user to intercept and modify the browser requests and then observe the server behavior in order to conduct a padding oracle attack and decrypt sensitive information. The appliance is available to download as an OVA file from the Customer Portal.
==========================================================================
Ubuntu Security Notice USN-4376-2
July 09, 2020
openssl vulnerabilities
A security issue affects these releases of Ubuntu and its derivatives:
- Ubuntu 14.04 ESM
- Ubuntu 12.04 ESM
Summary:
Several security issues were fixed in OpenSSL. This update provides the corresponding update for Ubuntu 12.04 ESM and Ubuntu 14.04 ESM.
Original advisory details:
Cesar Pereida García, Sohaib ul Hassan, Nicola Tuveri, Iaroslav Gridin, Alejandro Cabrera Aldaya, and Billy Brumley discovered that OpenSSL incorrectly handled ECDSA signatures. An attacker could possibly use this issue to perform a timing side-channel attack and recover private ECDSA keys. A remote attacker could possibly use this issue to decrypt data. (CVE-2019-1559)
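The "padding oracle" mechanism referenced above (CVE-2019-1559) can be illustrated end to end. The following sketch is a toy model, not TLS or OpenSSL: it uses a hypothetical XOR-based block cipher in CBC mode and a server that leaks only whether PKCS#7 padding was valid. That single leaked bit suffices to decrypt the ciphertext.

```python
import hashlib

BLOCK = 8
KEY, IV = b"server-secret", bytes(BLOCK)

def keystream(key: bytes) -> bytes:
    # Toy "block cipher": XOR with a key-derived pad. NOT secure; the
    # attack below relies only on the CBC structure plus the oracle.
    return hashlib.sha256(key).digest()[:BLOCK]

def pad(msg: bytes) -> bytes:
    n = BLOCK - len(msg) % BLOCK
    return msg + bytes([n]) * n

def unpad(msg: bytes) -> bytes:
    n = msg[-1]
    if not 1 <= n <= BLOCK or msg[-n:] != bytes([n]) * n:
        raise ValueError("bad padding")
    return msg[:-n]

def cbc_encrypt(key: bytes, iv: bytes, msg: bytes) -> bytes:
    ks, out, prev = keystream(key), b"", iv
    for i in range(0, len(msg), BLOCK):
        blk = bytes(a ^ b ^ c for a, b, c in zip(msg[i:i + BLOCK], prev, ks))
        out, prev = out + blk, blk
    return out

def cbc_decrypt(key: bytes, iv: bytes, ct: bytes) -> bytes:
    ks, out, prev = keystream(key), b"", iv
    for i in range(0, len(ct), BLOCK):
        blk = ct[i:i + BLOCK]
        out += bytes(a ^ b ^ c for a, b, c in zip(blk, prev, ks))
        prev = blk
    return out

def oracle(iv: bytes, ct: bytes) -> bool:
    # The flaw: the server reveals whether padding (as opposed to
    # anything else) failed -- the bit CVE-2019-1559-style bugs leak.
    try:
        unpad(cbc_decrypt(KEY, iv, ct))
        return True
    except ValueError:
        return False

def attack_block(prev: bytes, blk: bytes) -> bytes:
    inter = bytearray(BLOCK)              # D(blk) = blk XOR keystream
    for pad_len in range(1, BLOCK + 1):
        pos = BLOCK - pad_len
        forged = bytearray(BLOCK)
        for j in range(pos + 1, BLOCK):   # force trailing bytes to pad_len
            forged[j] = inter[j] ^ pad_len
        for guess in range(256):
            forged[pos] = guess
            if oracle(bytes(forged), blk):
                if pad_len == 1:          # rule out accidental longer padding
                    forged[pos - 1] ^= 0xFF
                    ok = oracle(bytes(forged), blk)
                    forged[pos - 1] ^= 0xFF
                    if not ok:
                        continue
                inter[pos] = guess ^ pad_len
                break
    return bytes(i ^ p for i, p in zip(inter, prev))

secret = pad(b"top secret data!")
ct = cbc_encrypt(KEY, IV, secret)
chain = [IV] + [ct[i:i + BLOCK] for i in range(0, len(ct), BLOCK)]
recovered = b"".join(attack_block(chain[i], chain[i + 1])
                     for i in range(len(chain) - 1))
print(unpad(recovered))  # -> b'top secret data!'
```

The attacker code never touches KEY; it only queries oracle(). The mitigation in real implementations is to make padding failures and MAC failures indistinguishable in both behavior and timing.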
Bernd Edlinger discovered that OpenSSL incorrectly handled certain decryption functions. (CVE-2019-1563)
Update instructions:
The problem can be corrected by updating your system to the following package versions:
Ubuntu 14.04 ESM: libssl1.0.0 1.0.1f-1ubuntu2.27+esm1
Ubuntu 12.04 ESM: libssl1.0.0 1.0.1-4ubuntu5.44
After a standard system update you need to reboot your computer to make all the necessary changes.
Description:
Red Hat JBoss Web Server is a fully integrated and certified set of components for hosting Java web applications. It is comprised of the Apache Tomcat Servlet container, JBoss HTTP Connector (mod_cluster), the PicketLink Vault extension for Apache Tomcat, and the Tomcat Native library. Solution:
Before applying this update, make sure all previously released errata relevant to your system have been applied.
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
====================================================================
Red Hat Security Advisory
Synopsis: Moderate: openssl security and bug fix update
Advisory ID: RHSA-2019:2304-01
Product: Red Hat Enterprise Linux
Advisory URL: https://access.redhat.com/errata/RHSA-2019:2304
Issue date: 2019-08-06
CVE Names: CVE-2018-0734 CVE-2019-1559
====================================================================
1. Summary:
An update for openssl is now available for Red Hat Enterprise Linux 7.
Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
2. Relevant releases/architectures:
Red Hat Enterprise Linux Client (v. 7) - x86_64
Red Hat Enterprise Linux Client Optional (v. 7) - x86_64
Red Hat Enterprise Linux ComputeNode (v. 7) - x86_64
Red Hat Enterprise Linux ComputeNode Optional (v. 7) - x86_64
Red Hat Enterprise Linux Server (v. 7) - ppc64, ppc64le, s390x, x86_64
Red Hat Enterprise Linux Server Optional (v. 7) - ppc64, ppc64le, s390x, x86_64
Red Hat Enterprise Linux Workstation (v. 7) - x86_64
Red Hat Enterprise Linux Workstation Optional (v. 7) - x86_64
3. Description:
OpenSSL is a toolkit that implements the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols, as well as a full-strength general-purpose cryptography library.
Security Fix(es):
* openssl: 0-byte record padding oracle (CVE-2019-1559)

* openssl: timing side channel attack in the DSA signature algorithm (CVE-2018-0734)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
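CVE-2018-0734, listed above, is a timing side channel: OpenSSL's DSA signing took a data-dependent amount of time, which a remote attacker could measure. As an illustration of the bug class only (this is not OpenSSL's DSA code), compare an early-exit byte comparison with Python's constant-time `hmac.compare_digest`:

```python
import hmac

SECRET = b"0123456789abcdef0123456789abcdef"

def naive_equal(a: bytes, b: bytes) -> bool:
    # Early exit: running time grows with the length of the matching
    # prefix, which an attacker can measure -- a timing side channel.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

# hmac.compare_digest examines every byte regardless of where the first
# mismatch occurs, so its timing does not depend on the secret contents.
print(naive_equal(SECRET, SECRET))                      # -> True
print(hmac.compare_digest(SECRET, b"x" * len(SECRET)))  # -> False
```

The same principle (make running time independent of secret data) is what the upstream DSA fix enforces inside the signature computation itself.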
Additional Changes:
For detailed information on changes in this release, see the Red Hat Enterprise Linux 7.7 Release Notes linked from the References section.
4. Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/11258
For the update to take effect, all services linked to the OpenSSL library must be restarted, or the system rebooted.
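One way to find services that still need restarting is to look for processes whose memory maps still reference a deleted libssl. The sketch below is a hedged illustration (the 'libssl' pattern and the /proc layout are assumptions about a typical Linux host; run with sufficient privileges to see all processes):

```python
import os
import re

def stale_libssl_pids() -> dict:
    """Map PID -> process name for processes still mapping a deleted
    libssl, i.e. processes running the pre-update library."""
    stale = {}
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open(f"/proc/{pid}/maps") as maps:
                if re.search(r"libssl.*\(deleted\)", maps.read()):
                    with open(f"/proc/{pid}/comm") as comm:
                        stale[int(pid)] = comm.read().strip()
        except OSError:
            # Process exited mid-scan, or we lack permission; skip it.
            continue
    return stale

for pid, name in stale_libssl_pids().items():
    print(pid, name)
```

On RHEL hosts the `needs-restarting` utility from yum-utils performs a similar, more thorough check.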
5. Bugs fixed (https://bugzilla.redhat.com/):
1644364 - CVE-2018-0734 openssl: timing side channel attack in the DSA signature algorithm
1649568 - openssl: microarchitectural and timing side channel padding oracle attack against RSA
1683804 - CVE-2019-1559 openssl: 0-byte record padding oracle
6. Package List:
Red Hat Enterprise Linux Client (v. 7):
Source:
openssl-1.0.2k-19.el7.src.rpm

x86_64:
openssl-1.0.2k-19.el7.x86_64.rpm
openssl-debuginfo-1.0.2k-19.el7.i686.rpm
openssl-debuginfo-1.0.2k-19.el7.x86_64.rpm
openssl-libs-1.0.2k-19.el7.i686.rpm
openssl-libs-1.0.2k-19.el7.x86_64.rpm
Red Hat Enterprise Linux Client Optional (v. 7):
x86_64:
openssl-debuginfo-1.0.2k-19.el7.i686.rpm
openssl-debuginfo-1.0.2k-19.el7.x86_64.rpm
openssl-devel-1.0.2k-19.el7.i686.rpm
openssl-devel-1.0.2k-19.el7.x86_64.rpm
openssl-perl-1.0.2k-19.el7.x86_64.rpm
openssl-static-1.0.2k-19.el7.i686.rpm
openssl-static-1.0.2k-19.el7.x86_64.rpm
Red Hat Enterprise Linux ComputeNode (v. 7):
Source:
openssl-1.0.2k-19.el7.src.rpm

x86_64:
openssl-1.0.2k-19.el7.x86_64.rpm
openssl-debuginfo-1.0.2k-19.el7.i686.rpm
openssl-debuginfo-1.0.2k-19.el7.x86_64.rpm
openssl-libs-1.0.2k-19.el7.i686.rpm
openssl-libs-1.0.2k-19.el7.x86_64.rpm
Red Hat Enterprise Linux ComputeNode Optional (v. 7):
x86_64:
openssl-debuginfo-1.0.2k-19.el7.i686.rpm
openssl-debuginfo-1.0.2k-19.el7.x86_64.rpm
openssl-devel-1.0.2k-19.el7.i686.rpm
openssl-devel-1.0.2k-19.el7.x86_64.rpm
openssl-perl-1.0.2k-19.el7.x86_64.rpm
openssl-static-1.0.2k-19.el7.i686.rpm
openssl-static-1.0.2k-19.el7.x86_64.rpm
Red Hat Enterprise Linux Server (v. 7):
Source:
openssl-1.0.2k-19.el7.src.rpm

ppc64:
openssl-1.0.2k-19.el7.ppc64.rpm
openssl-debuginfo-1.0.2k-19.el7.ppc.rpm
openssl-debuginfo-1.0.2k-19.el7.ppc64.rpm
openssl-devel-1.0.2k-19.el7.ppc.rpm
openssl-devel-1.0.2k-19.el7.ppc64.rpm
openssl-libs-1.0.2k-19.el7.ppc.rpm
openssl-libs-1.0.2k-19.el7.ppc64.rpm

ppc64le:
openssl-1.0.2k-19.el7.ppc64le.rpm
openssl-debuginfo-1.0.2k-19.el7.ppc64le.rpm
openssl-devel-1.0.2k-19.el7.ppc64le.rpm
openssl-libs-1.0.2k-19.el7.ppc64le.rpm

s390x:
openssl-1.0.2k-19.el7.s390x.rpm
openssl-debuginfo-1.0.2k-19.el7.s390.rpm
openssl-debuginfo-1.0.2k-19.el7.s390x.rpm
openssl-devel-1.0.2k-19.el7.s390.rpm
openssl-devel-1.0.2k-19.el7.s390x.rpm
openssl-libs-1.0.2k-19.el7.s390.rpm
openssl-libs-1.0.2k-19.el7.s390x.rpm

x86_64:
openssl-1.0.2k-19.el7.x86_64.rpm
openssl-debuginfo-1.0.2k-19.el7.i686.rpm
openssl-debuginfo-1.0.2k-19.el7.x86_64.rpm
openssl-devel-1.0.2k-19.el7.i686.rpm
openssl-devel-1.0.2k-19.el7.x86_64.rpm
openssl-libs-1.0.2k-19.el7.i686.rpm
openssl-libs-1.0.2k-19.el7.x86_64.rpm
Red Hat Enterprise Linux Server Optional (v. 7):
ppc64:
openssl-debuginfo-1.0.2k-19.el7.ppc.rpm
openssl-debuginfo-1.0.2k-19.el7.ppc64.rpm
openssl-perl-1.0.2k-19.el7.ppc64.rpm
openssl-static-1.0.2k-19.el7.ppc.rpm
openssl-static-1.0.2k-19.el7.ppc64.rpm

ppc64le:
openssl-debuginfo-1.0.2k-19.el7.ppc64le.rpm
openssl-perl-1.0.2k-19.el7.ppc64le.rpm
openssl-static-1.0.2k-19.el7.ppc64le.rpm

s390x:
openssl-debuginfo-1.0.2k-19.el7.s390.rpm
openssl-debuginfo-1.0.2k-19.el7.s390x.rpm
openssl-perl-1.0.2k-19.el7.s390x.rpm
openssl-static-1.0.2k-19.el7.s390.rpm
openssl-static-1.0.2k-19.el7.s390x.rpm

x86_64:
openssl-debuginfo-1.0.2k-19.el7.i686.rpm
openssl-debuginfo-1.0.2k-19.el7.x86_64.rpm
openssl-perl-1.0.2k-19.el7.x86_64.rpm
openssl-static-1.0.2k-19.el7.i686.rpm
openssl-static-1.0.2k-19.el7.x86_64.rpm
Red Hat Enterprise Linux Workstation (v. 7):
Source:
openssl-1.0.2k-19.el7.src.rpm

x86_64:
openssl-1.0.2k-19.el7.x86_64.rpm
openssl-debuginfo-1.0.2k-19.el7.i686.rpm
openssl-debuginfo-1.0.2k-19.el7.x86_64.rpm
openssl-devel-1.0.2k-19.el7.i686.rpm
openssl-devel-1.0.2k-19.el7.x86_64.rpm
openssl-libs-1.0.2k-19.el7.i686.rpm
openssl-libs-1.0.2k-19.el7.x86_64.rpm
Red Hat Enterprise Linux Workstation Optional (v. 7):
x86_64:
openssl-debuginfo-1.0.2k-19.el7.i686.rpm
openssl-debuginfo-1.0.2k-19.el7.x86_64.rpm
openssl-perl-1.0.2k-19.el7.x86_64.rpm
openssl-static-1.0.2k-19.el7.i686.rpm
openssl-static-1.0.2k-19.el7.x86_64.rpm
These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
7. References:
https://access.redhat.com/security/cve/CVE-2018-0734
https://access.redhat.com/security/cve/CVE-2019-1559
https://access.redhat.com/security/updates/classification/#moderate
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/7.7_release_notes/index
8. Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2019 Red Hat, Inc.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1
iQIVAwUBXUl3otzjgjWX9erEAQgZQQ//XNcjRJGLVmjAzbVGiwxEqfFUvDVNiu97 fW0vLXuV9TnQTveOVqOAWmmMv2iShkVIRPDvzlOfUsYrrDEYHKr0N38R/fhDEZsM WQrJh54WK9IjEGNevLTCePKMhVuII1WnHrLDwZ6hxYGdcap/sJrf+N428b5LvHbM B39vWl3vqJYXoiI5dmIYL8ko2SfLms5Cg+dR0hLrNohf9gK2La+jhWb/j2xw6X6q /LXw5+hi/G+USbnNFfjt9G0fNjMMZRX2bukUvY6UWJRYTOXpIUOFqqp5w9zgM7tZ uX7TMTC9xe6te4mBCAFDdt+kYYLYSHfSkFlFq+S7V0MY8DmnIzqBJE4lJIDTVp9F JbrMIPs9G5jdnzPUKZw/gH9WLgka8Q8AYI+KA2xSxFX9VZ20Z+EDDC9/4uwj3i0A gLeIB68OwD70jn4sjuQqizr7TCviQhTUoKVd/mTBAxSEFZLcE8Sy/BEYxLPm81z0 veL16l6pmfg9uLac4V576ImfYNWlBEnJspA5E9K5CqQRPuZpCQFov7/D17Qm8v/x IcVKUaXiGquBwzHmIsD5lTCpl7CrGoU1PfNJ6Y/4xrVFOh1DLA4y6nnfysyO9eZx zBfuYS2VmfIq/tp1CjagI/DmJC4ezXeE4Phq9jm0EBASXtnLzVmc5j7kkqWjCcfm BtpJTAdr1kE=7kKR -----END PGP SIGNATURE-----
-- RHSA-announce mailing list RHSA-announce@redhat.com https://www.redhat.com/mailman/listinfo/rhsa-announce

These packages include redhat-release-virtualization-host, ovirt-node, and rhev-hypervisor. RHVH features a Cockpit user interface for monitoring the host's resources and performing administrative tasks.
The following packages have been upgraded to a later upstream version: imgbased (1.1.9), ovirt-node-ng (4.3.5), redhat-release-virtualization-host (4.3.5), redhat-virtualization-host (4.3.5). Bugs fixed (https://bugzilla.redhat.com/):
1640820 - CVE-2018-16838 sssd: improper implementation of GPOs due to too restrictive permissions
1658366 - CVE-2018-16881 rsyslog: imptcp: integer overflow when Octet-Counted TCP Framing is enabled
1683804 - CVE-2019-1559 openssl: 0-byte record padding oracle
1687920 - RHVH fails to reinstall if required size is exceeding the available disk space due to anaconda bug
1694065 - CVE-2019-0161 edk2: stack overflow in XHCI causing denial of service
1702223 - Rebase RHV-H on RHEL 7.7
1709829 - CVE-2019-10139 cockpit-ovirt: admin and appliance passwords saved in plain text variable file during HE deployment
1718388 - CVE-2019-10160 python: regression of CVE-2019-9636 due to functional fix to allow port numbers in netloc
1720156 - RHVH 4.3.4 version info is incorrect in plymouth and "/etc/os-release"
1720160 - RHVH 4.3.4: Incorrect info in /etc/system-release-cpe
1720310 - RHV-H post-installation scripts failing, due to existing tags
1720434 - RHVH 7.7 brand is wrong in Anaconda GUI.
1720435 - Failed to install RHVH 7.7
1720436 - RHVH 7.7 should based on RHEL 7.7 server but not workstation.
1724044 - Failed dependencies occur during install systemtap package.
1726534 - dhclient fails to load libdns-export.so.1102 after upgrade if the user installed library is not persisted on the new layer
1727007 - Update RHVH 7.7 branding with new Red Hat logo
1727859 - Failed to boot after upgrading a host with a custom kernel
1728998 - "nodectl info" displays error after RHVH installation
1729023 - The error message is inappropriate when run imgbase layout --init on current layout
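Bug 1718388 above (CVE-2019-10160, a regression of the CVE-2019-9636 fix) concerns how Python's URL parsing exposes host and port from the netloc. The expected, non-regressed behaviour of the standard library looks like this (a minimal illustration of the parsing involved, not the exploit):

```python
# The CVE-2019-9636 / CVE-2019-10160 fixes revolve around netloc parsing;
# correct behaviour keeps the host and port cleanly separated.
from urllib.parse import urlsplit

parts = urlsplit("https://example.com:8080/path")
# netloc carries the raw host:port pair; .hostname and .port split it out
assert parts.netloc == "example.com:8080"
assert parts.hostname == "example.com"
assert parts.port == 8080
```

The regression arose because the functional fix to allow port numbers in the netloc changed how certain malformed netlocs were normalized before these accessors ran.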
This issue was discovered by Juraj Somorovsky, Robert Merget and Nimrod Aviram, with additional investigation by Steven Collison and Andrew Hourselt. It was reported to OpenSSL on 10th December 2018.
Note: Advisory updated to make it clearer that AEAD ciphersuites are not impacted.
Note
OpenSSL 1.0.2 and 1.1.0 are currently only receiving security updates. Support for 1.0.2 will end on 31st December 2019. Support for 1.1.0 will end on 11th September 2019. Users of these versions should upgrade to OpenSSL 1.1.1.
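Fleet inventories can flag installations on these branches mechanically. A minimal sketch using the end-of-support dates stated above; the prefix-based branch detection and helper names are illustrative simplifications, not an official OpenSSL versioning API:

```python
# Sketch: flag OpenSSL versions whose branch is past its announced
# end-of-support date. EOL dates are taken from the advisory text;
# mapping a version to its branch by string prefix is a simplification.
from datetime import date

BRANCH_EOL = {
    "1.0.2": date(2019, 12, 31),  # support ends 31st December 2019
    "1.1.0": date(2019, 9, 11),   # support ends 11th September 2019
}

def branch_of(version: str) -> str:
    """Map a version string like '1.0.2r' to its release branch '1.0.2'."""
    return version[:5]

def is_eol(version: str, today: date) -> bool:
    """True if the version's branch is past its announced end-of-support date."""
    eol = BRANCH_EOL.get(branch_of(version))
    return eol is not None and today > eol

assert is_eol("1.0.2r", date(2020, 1, 1)) is True
assert is_eol("1.1.1c", date(2020, 1, 1)) is False  # 1.1.1 remains supported
```

Anything the check flags should be scheduled for an upgrade to 1.1.1, per the advisory's recommendation.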
References
URL for this Security Advisory: https://www.openssl.org/news/secadv/20190226.txt
Note: the online version of the advisory may be updated with additional details over time.
For details of OpenSSL severity classifications please see: https://www.openssl.org/policies/secpolicy.html
{ "@context": { "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#", "affected_products": { "@id": "https://www.variotdbs.pl/ref/affected_products" }, "configurations": { "@id": "https://www.variotdbs.pl/ref/configurations" }, "credits": { "@id": "https://www.variotdbs.pl/ref/credits" }, "cvss": { "@id": "https://www.variotdbs.pl/ref/cvss/" }, "description": { "@id": "https://www.variotdbs.pl/ref/description/" }, "exploit_availability": { "@id": "https://www.variotdbs.pl/ref/exploit_availability/" }, "iot": { "@id": "https://www.variotdbs.pl/ref/iot/" }, "iot_taxonomy": { "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/" }, "patch": { "@id": "https://www.variotdbs.pl/ref/patch/" }, "problemtype_data": { "@id": "https://www.variotdbs.pl/ref/problemtype_data/" }, "references": { "@id": "https://www.variotdbs.pl/ref/references/" }, "sources": { "@id": "https://www.variotdbs.pl/ref/sources/" }, "sources_release_date": { "@id": "https://www.variotdbs.pl/ref/sources_release_date/" }, "sources_update_date": { "@id": "https://www.variotdbs.pl/ref/sources_update_date/" }, "threat_type": { "@id": "https://www.variotdbs.pl/ref/threat_type/" }, "title": { "@id": "https://www.variotdbs.pl/ref/title/" }, "type": { "@id": "https://www.variotdbs.pl/ref/type/" } }, "@id": "https://www.variotdbs.pl/vuln/VAR-201902-0192", "affected_products": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/affected_products#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "model": "big-ip advanced firewall manager", "scope": "gte", "trust": 1.0, "vendor": "f5", "version": "13.0.0" }, { "model": "jd edwards enterpriseone tools", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "9.2" }, { "model": "big-ip fraud protection 
service", "scope": "gte", "trust": 1.0, "vendor": "f5", "version": "14.0.0" }, { "model": "santricity smi-s provider", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "big-ip webaccelerator", "scope": "gte", "trust": 1.0, "vendor": "f5", "version": "15.0.0" }, { "model": "big-ip edge gateway", "scope": "lte", "trust": 1.0, "vendor": "f5", "version": "14.1.2" }, { "model": "communications diameter signaling router", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "8.3" }, { "model": "node.js", "scope": "gte", "trust": 1.0, "vendor": "nodejs", "version": "6.9.0" }, { "model": "web gateway", "scope": "gte", "trust": 1.0, "vendor": "mcafee", "version": "7.0.0" }, { "model": "big-ip access policy manager", "scope": "gte", "trust": 1.0, "vendor": "f5", "version": "13.0.0" }, { "model": "hyper converged infrastructure", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "oncommand workflow automation", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "big-ip fraud protection service", "scope": "gte", "trust": 1.0, "vendor": "f5", "version": "12.1.0" }, { "model": "big-ip global traffic manager", "scope": "gte", "trust": 1.0, "vendor": "f5", "version": "13.0.0" }, { "model": "big-ip domain name system", "scope": "lte", "trust": 1.0, "vendor": "f5", "version": "13.1.3" }, { "model": "big-ip edge gateway", "scope": "gte", "trust": 1.0, "vendor": "f5", "version": "13.0.0" }, { "model": "big-ip analytics", "scope": "gte", "trust": 1.0, "vendor": "f5", "version": "15.0.0" }, { "model": "big-ip link controller", "scope": "gte", "trust": 1.0, "vendor": "f5", "version": "13.0.0" }, { "model": "big-iq centralized management", "scope": "lte", "trust": 1.0, "vendor": "f5", "version": "7.1.0" }, { "model": "big-ip analytics", "scope": "lte", "trust": 1.0, "vendor": "f5", "version": "13.1.3" }, { "model": "services tools bundle", "scope": "eq", "trust": 1.0, "vendor": "oracle", 
"version": "19.2" }, { "model": "a800", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "cloud backup", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "enterprise manager base platform", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "12.1.0.5.0" }, { "model": "pan-os", "scope": "gte", "trust": 1.0, "vendor": "paloaltonetworks", "version": "7.1.0" }, { "model": "storagegrid", "scope": "gte", "trust": 1.0, "vendor": "netapp", "version": "9.0.0" }, { "model": "a220", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "big-ip application acceleration manager", "scope": "gte", "trust": 1.0, "vendor": "f5", "version": "12.1.0" }, { "model": "pan-os", "scope": "lt", "trust": 1.0, "vendor": "paloaltonetworks", "version": "8.0.20" }, { "model": "communications diameter signaling router", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "8.1" }, { "model": "communications performance intelligence center", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "10.4.0.2" }, { "model": "big-ip application security manager", "scope": "gte", "trust": 1.0, "vendor": "f5", "version": "13.0.0" }, { "model": "communications diameter signaling router", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "8.4" }, { "model": "communications session router", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "7.4" }, { "model": "snapcenter", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "peoplesoft enterprise peopletools", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "8.55" }, { "model": "jd edwards world security", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "a9.3" }, { "model": "jd edwards world security", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "a9.4" }, { "model": "traffix signaling delivery controller", "scope": "gte", "trust": 1.0, "vendor": "f5", "version": 
"5.0.0" }, { "model": "fedora", "scope": "eq", "trust": 1.0, "vendor": "fedoraproject", "version": "30" }, { "model": "mysql enterprise monitor", "scope": "lte", "trust": 1.0, "vendor": "oracle", "version": "4.0.8" }, { "model": "big-ip analytics", "scope": "lte", "trust": 1.0, "vendor": "f5", "version": "15.1.0" }, { "model": "ubuntu linux", "scope": "eq", "trust": 1.0, "vendor": "canonical", "version": "18.04" }, { "model": "big-ip edge gateway", "scope": "lte", "trust": 1.0, "vendor": "f5", "version": "12.1.5" }, { "model": "big-ip fraud protection service", "scope": "gte", "trust": 1.0, "vendor": "f5", "version": "15.0.0" }, { "model": "big-ip webaccelerator", "scope": "lte", "trust": 1.0, "vendor": "f5", "version": "12.1.5" }, { "model": "big-ip link controller", "scope": "lte", "trust": 1.0, "vendor": "f5", "version": "12.1.5" }, { "model": "virtualization", "scope": "eq", "trust": 1.0, "vendor": "redhat", "version": "4.0" }, { "model": "big-ip policy enforcement manager", "scope": "gte", "trust": 1.0, "vendor": "f5", "version": "13.0.0" }, { "model": "big-ip fraud protection service", "scope": "lte", "trust": 1.0, "vendor": "f5", "version": "14.1.2" }, { "model": "big-ip advanced firewall manager", "scope": "lte", "trust": 1.0, "vendor": "f5", "version": "14.1.2" }, { "model": "big-ip application acceleration manager", "scope": "lte", "trust": 1.0, "vendor": "f5", "version": "13.1.3" }, { "model": "node.js", "scope": "gte", "trust": 1.0, "vendor": "nodejs", "version": "6.0.0" }, { "model": "enterprise linux server", "scope": "eq", "trust": 1.0, "vendor": "redhat", "version": "7.0" }, { "model": "endeca server", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "7.7.0" }, { "model": "clustered data ontap antivirus connector", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "enterprise manager ops center", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "12.3.3" }, { "model": "peoplesoft enterprise 
peopletools", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "8.56" }, { "model": "big-ip local traffic manager", "scope": "gte", "trust": 1.0, "vendor": "f5", "version": "13.0.0" }, { "model": "leap", "scope": "eq", "trust": 1.0, "vendor": "opensuse", "version": "15.1" }, { "model": "big-ip application acceleration manager", "scope": "gte", "trust": 1.0, "vendor": "f5", "version": "15.0.0" }, { "model": "communications diameter signaling router", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "8.0.0" }, { "model": "mysql", "scope": "lte", "trust": 1.0, "vendor": "oracle", "version": "8.0.15" }, { "model": "big-ip access policy manager", "scope": "gte", "trust": 1.0, "vendor": "f5", "version": "14.0.0" }, { "model": "big-ip link controller", "scope": "lte", "trust": 1.0, "vendor": "f5", "version": "14.1.2" }, { "model": "big-ip webaccelerator", "scope": "lte", "trust": 1.0, "vendor": "f5", "version": "14.1.2" }, { "model": "c190", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "agent", "scope": "lte", "trust": 1.0, "vendor": "mcafee", "version": "5.6.4" }, { "model": "traffix signaling delivery controller", "scope": "lte", "trust": 1.0, "vendor": "f5", "version": "5.1.0" }, { "model": "mysql", "scope": "lte", "trust": 1.0, "vendor": "oracle", "version": "5.6.43" }, { "model": "node.js", "scope": "gte", "trust": 1.0, "vendor": "nodejs", "version": "8.9.0" }, { "model": "big-ip edge gateway", "scope": "gte", "trust": 1.0, "vendor": "f5", "version": "14.0.0" }, { "model": "fedora", "scope": "eq", "trust": 1.0, "vendor": "fedoraproject", "version": "29" }, { "model": "communications session router", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "8.2" }, { "model": "big-ip link controller", "scope": "gte", "trust": 1.0, "vendor": "f5", "version": "14.0.0" }, { "model": "ontap select deploy administration utility", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "ontap 
select deploy", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "threat intelligence exchange server", "scope": "lt", "trust": 1.0, "vendor": "mcafee", "version": "3.0.0" }, { "model": "enterprise linux desktop", "scope": "eq", "trust": 1.0, "vendor": "redhat", "version": "7.0" }, { "model": "big-ip application acceleration manager", "scope": "lte", "trust": 1.0, "vendor": "f5", "version": "15.1.0" }, { "model": "big-ip domain name system", "scope": "gte", "trust": 1.0, "vendor": "f5", "version": "14.0.0" }, { "model": "big-ip edge gateway", "scope": "gte", "trust": 1.0, "vendor": "f5", "version": "12.1.0" }, { "model": "big-ip analytics", "scope": "gte", "trust": 1.0, "vendor": "f5", "version": "13.0.0" }, { "model": "big-iq centralized management", "scope": "gte", "trust": 1.0, "vendor": "f5", "version": "7.0.0" }, { "model": "communications session border controller", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "8.3" }, { "model": "big-ip domain name system", "scope": "gte", "trust": 1.0, "vendor": "f5", "version": "12.1.0" }, { "model": "peoplesoft enterprise peopletools", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "8.57" }, { "model": "mysql", "scope": "gte", "trust": 1.0, "vendor": "oracle", "version": "5.7.0" }, { "model": "node.js", "scope": "lt", "trust": 1.0, "vendor": "nodejs", "version": "6.17.0" }, { "model": "big-ip application acceleration manager", "scope": "lte", "trust": 1.0, "vendor": "f5", "version": "14.1.2" }, { "model": "big-ip access policy manager", "scope": "lte", "trust": 1.0, "vendor": "f5", "version": "12.1.5" }, { "model": "communications unified session manager", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "8.2.5" }, { "model": "big-ip fraud protection service", "scope": "lte", "trust": 1.0, "vendor": "f5", "version": "12.1.5" }, { "model": "big-ip global traffic manager", "scope": "lte", "trust": 1.0, "vendor": "f5", "version": "12.1.5" }, { "model": 
"big-ip local traffic manager", "scope": "lte", "trust": 1.0, "vendor": "f5", "version": "12.1.5" }, { "model": "linux", "scope": "eq", "trust": 1.0, "vendor": "debian", "version": "9.0" }, { "model": "big-ip advanced firewall manager", "scope": "lte", "trust": 1.0, "vendor": "f5", "version": "12.1.5" }, { "model": "enterprise linux workstation", "scope": "eq", "trust": 1.0, "vendor": "redhat", "version": "6.0" }, { "model": "communications session router", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "8.0" }, { "model": "nessus", "scope": "lte", "trust": 1.0, "vendor": "tenable", "version": "8.2.3" }, { "model": "jboss enterprise web server", "scope": "eq", "trust": 1.0, "vendor": "redhat", "version": "5.0.0" }, { "model": "traffix signaling delivery controller", "scope": "eq", "trust": 1.0, "vendor": "f5", "version": "4.4.0" }, { "model": "big-ip policy enforcement manager", "scope": "gte", "trust": 1.0, "vendor": "f5", "version": "14.0.0" }, { "model": "big-ip access policy manager", "scope": "lte", "trust": 1.0, "vendor": "f5", "version": "14.1.2" }, { "model": "big-ip application security manager", "scope": "lte", "trust": 1.0, "vendor": "f5", "version": "12.1.5" }, { "model": "big-ip global traffic manager", "scope": "lte", "trust": 1.0, "vendor": "f5", "version": "14.1.2" }, { "model": "big-ip local traffic manager", "scope": "lte", "trust": 1.0, "vendor": "f5", "version": "14.1.2" }, { "model": "pan-os", "scope": "lt", "trust": 1.0, "vendor": "paloaltonetworks", "version": "8.1.8" }, { "model": "big-ip edge gateway", "scope": "gte", "trust": 1.0, "vendor": "f5", "version": "15.0.0" }, { "model": "big-ip local traffic manager", "scope": "gte", "trust": 1.0, "vendor": "f5", "version": "14.0.0" }, { "model": "big-ip edge gateway", "scope": "lte", "trust": 1.0, "vendor": "f5", "version": "13.1.3" }, { "model": "big-ip advanced firewall manager", "scope": "gte", "trust": 1.0, "vendor": "f5", "version": "14.0.0" }, { "model": "big-ip 
webaccelerator", "scope": "lte", "trust": 1.0, "vendor": "f5", "version": "13.1.3" }, { "model": "big-ip link controller", "scope": "lte", "trust": 1.0, "vendor": "f5", "version": "13.1.3" }, { "model": "pan-os", "scope": "gte", "trust": 1.0, "vendor": "paloaltonetworks", "version": "8.0.0" }, { "model": "pan-os", "scope": "gte", "trust": 1.0, "vendor": "paloaltonetworks", "version": "9.0.0" }, { "model": "big-ip domain name system", "scope": "gte", "trust": 1.0, "vendor": "f5", "version": "15.0.0" }, { "model": "leap", "scope": "eq", "trust": 1.0, "vendor": "opensuse", "version": "42.3" }, { "model": "virtualization host", "scope": "eq", "trust": 1.0, "vendor": "redhat", "version": "4.0" }, { "model": "big-ip domain name system", "scope": "lte", "trust": 1.0, "vendor": "f5", "version": "14.1.2" }, { "model": "altavault", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "big-ip advanced firewall manager", "scope": "gte", "trust": 1.0, "vendor": "f5", "version": "12.1.0" }, { "model": "big-ip application acceleration manager", "scope": "gte", "trust": 1.0, "vendor": "f5", "version": "13.0.0" }, { "model": "big-ip global traffic manager", "scope": "gte", "trust": 1.0, "vendor": "f5", "version": "14.0.0" }, { "model": "big-ip application security manager", "scope": "lte", "trust": 1.0, "vendor": "f5", "version": "14.1.2" }, { "model": "big-ip webaccelerator", "scope": "gte", "trust": 1.0, "vendor": "f5", "version": "13.0.0" }, { "model": "big-ip policy enforcement manager", "scope": "lte", "trust": 1.0, "vendor": "f5", "version": "12.1.5" }, { "model": "big-ip analytics", "scope": "lte", "trust": 1.0, "vendor": "f5", "version": "14.1.2" }, { "model": "business intelligence", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "11.1.1.9.0" }, { "model": "big-ip access policy manager", "scope": "gte", "trust": 1.0, "vendor": "f5", "version": "12.1.0" }, { "model": "communications session router", "scope": "eq", "trust": 1.0, 
"vendor": "oracle", "version": "8.3" }, { "model": "big-ip global traffic manager", "scope": "gte", "trust": 1.0, "vendor": "f5", "version": "12.1.0" }, { "model": "jd edwards world security", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "a9.3.1" }, { "model": "mysql enterprise monitor", "scope": "lte", "trust": 1.0, "vendor": "oracle", "version": "8.0.14" }, { "model": "mysql", "scope": "gte", "trust": 1.0, "vendor": "oracle", "version": "8.0.0" }, { "model": "element software", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "pan-os", "scope": "lt", "trust": 1.0, "vendor": "paloaltonetworks", "version": "9.0.2" }, { "model": "communications session border controller", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "8.0.0" }, { "model": "big-ip link controller", "scope": "gte", "trust": 1.0, "vendor": "f5", "version": "12.1.0" }, { "model": "business intelligence", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "12.2.1.3.0" }, { "model": "communications unified session manager", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "7.3.5" }, { "model": "big-ip edge gateway", "scope": "lte", "trust": 1.0, "vendor": "f5", "version": "15.1.0" }, { "model": "oncommand unified manager core package", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "big-ip link controller", "scope": "lte", "trust": 1.0, "vendor": "f5", "version": "15.1.0" }, { "model": "big-ip webaccelerator", "scope": "lte", "trust": 1.0, "vendor": "f5", "version": "15.1.0" }, { "model": "big-ip application security manager", "scope": "gte", "trust": 1.0, "vendor": "f5", "version": "14.0.0" }, { "model": "enterprise manager base platform", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "13.2.0.0.0" }, { "model": "big-ip policy enforcement manager", "scope": "lte", "trust": 1.0, "vendor": "f5", "version": "14.1.2" }, { "model": "enterprise linux server", "scope": "eq", "trust": 1.0, 
"vendor": "redhat", "version": "6.0" }, { "model": "linux", "scope": "eq", "trust": 1.0, "vendor": "debian", "version": "8.0" }, { "model": "ubuntu linux", "scope": "eq", "trust": 1.0, "vendor": "canonical", "version": "16.04" }, { "model": "snapdrive", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "storage automation store", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "big-ip application security manager", "scope": "gte", "trust": 1.0, "vendor": "f5", "version": "12.1.0" }, { "model": "communications session router", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "8.1" }, { "model": "smi-s provider", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "pan-os", "scope": "gte", "trust": 1.0, "vendor": "paloaltonetworks", "version": "8.1.0" }, { "model": "threat intelligence exchange server", "scope": "gte", "trust": 1.0, "vendor": "mcafee", "version": "2.0.0" }, { "model": "oncommand unified manager", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "big-iq centralized management", "scope": "gte", "trust": 1.0, "vendor": "f5", "version": "6.0.0" }, { "model": "snapprotect", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "big-ip access policy manager", "scope": "lte", "trust": 1.0, "vendor": "f5", "version": "13.1.3" }, { "model": "big-ip advanced firewall manager", "scope": "gte", "trust": 1.0, "vendor": "f5", "version": "15.0.0" }, { "model": "big-ip fraud protection service", "scope": "lte", "trust": 1.0, "vendor": "f5", "version": "13.1.3" }, { "model": "storagegrid", "scope": "lte", "trust": 1.0, "vendor": "netapp", "version": "9.0.4" }, { "model": "big-ip global traffic manager", "scope": "lte", "trust": 1.0, "vendor": "f5", "version": "13.1.3" }, { "model": "big-ip local traffic manager", "scope": "lte", "trust": 1.0, "vendor": "f5", "version": "13.1.3" }, { "model": "fas2750", "scope": "eq", 
"trust": 1.0, "vendor": "netapp", "version": null }, { "model": "big-ip advanced firewall manager", "scope": "lte", "trust": 1.0, "vendor": "f5", "version": "13.1.3" }, { "model": "api gateway", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "11.1.2.4.0" }, { "model": "secure global desktop", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "5.4" }, { "model": "big-ip domain name system", "scope": "lte", "trust": 1.0, "vendor": "f5", "version": "12.1.5" }, { "model": "node.js", "scope": "lte", "trust": 1.0, "vendor": "nodejs", "version": "6.8.1" }, { "model": "fas2720", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "big-ip access policy manager", "scope": "gte", "trust": 1.0, "vendor": "f5", "version": "15.0.0" }, { "model": "web gateway", "scope": "lt", "trust": 1.0, "vendor": "mcafee", "version": "9.0.0" }, { "model": "big-iq centralized management", "scope": "lte", "trust": 1.0, "vendor": "f5", "version": "6.1.0" }, { "model": "big-ip analytics", "scope": "lte", "trust": 1.0, "vendor": "f5", "version": "12.1.5" }, { "model": "active iq unified manager", "scope": "gte", "trust": 1.0, "vendor": "netapp", "version": "9.5" }, { "model": "big-ip fraud protection service", "scope": "gte", "trust": 1.0, "vendor": "f5", "version": "13.0.0" }, { "model": "big-ip global traffic manager", "scope": "gte", "trust": 1.0, "vendor": "f5", "version": "15.0.0" }, { "model": "enterprise linux desktop", "scope": "eq", "trust": 1.0, "vendor": "redhat", "version": "6.0" }, { "model": "big-ip policy enforcement manager", "scope": "gte", "trust": 1.0, "vendor": "f5", "version": "12.1.0" }, { "model": "communications diameter signaling router", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "8.2" }, { "model": "communications session border controller", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "8.1.0" }, { "model": "steelstore cloud integrated storage", "scope": "eq", "trust": 1.0, "vendor": 
"netapp", "version": null }, { "model": "big-ip link controller", "scope": "gte", "trust": 1.0, "vendor": "f5", "version": "15.0.0" }, { "model": "hci compute node", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "business intelligence", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "12.2.1.4.0" }, { "model": "storagegrid", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "solidfire", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "openssl", "scope": "gte", "trust": 1.0, "vendor": "openssl", "version": "1.0.2" }, { "model": "big-ip application security manager", "scope": "lte", "trust": 1.0, "vendor": "f5", "version": "13.1.3" }, { "model": "mysql workbench", "scope": "lte", "trust": 1.0, "vendor": "oracle", "version": "8.0.16" }, { "model": "data exchange layer", "scope": "gte", "trust": 1.0, "vendor": "mcafee", "version": "4.0.0" }, { "model": "big-ip application acceleration manager", "scope": "gte", "trust": 1.0, "vendor": "f5", "version": "14.0.0" }, { "model": "big-ip local traffic manager", "scope": "gte", "trust": 1.0, "vendor": "f5", "version": "12.1.0" }, { "model": "big-ip webaccelerator", "scope": "gte", "trust": 1.0, "vendor": "f5", "version": "14.0.0" }, { "model": "big-ip access policy manager", "scope": "lte", "trust": 1.0, "vendor": "f5", "version": "15.1.0" }, { "model": "big-ip domain name system", "scope": "gte", "trust": 1.0, "vendor": "f5", "version": "13.0.0" }, { "model": "big-ip fraud protection service", "scope": "lte", "trust": 1.0, "vendor": "f5", "version": "15.1.0" }, { "model": "big-ip global traffic manager", "scope": "lte", "trust": 1.0, "vendor": "f5", "version": "15.1.0" }, { "model": "big-ip local traffic manager", "scope": "lte", "trust": 1.0, "vendor": "f5", "version": "15.1.0" }, { "model": "hci management node", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "big-ip advanced firewall 
manager", "scope": "lte", "trust": 1.0, "vendor": "f5", "version": "15.1.0" }, { "model": "big-ip application security manager", "scope": "gte", "trust": 1.0, "vendor": "f5", "version": "15.0.0" }, { "model": "communications session border controller", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "8.2" }, { "model": "mysql", "scope": "gte", "trust": 1.0, "vendor": "oracle", "version": "5.6.0" }, { "model": "mysql enterprise monitor", "scope": "gte", "trust": 1.0, "vendor": "oracle", "version": "8.0.0" }, { "model": "big-ip webaccelerator", "scope": "gte", "trust": 1.0, "vendor": "f5", "version": "12.1.0" }, { "model": "cn1610", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "node.js", "scope": "lt", "trust": 1.0, "vendor": "nodejs", "version": "8.15.1" }, { "model": "big-ip analytics", "scope": "gte", "trust": 1.0, "vendor": "f5", "version": "14.0.0" }, { "model": "ubuntu linux", "scope": "eq", "trust": 1.0, "vendor": "canonical", "version": "18.10" }, { "model": "node.js", "scope": "gte", "trust": 1.0, "vendor": "nodejs", "version": "8.0.0" }, { "model": "mysql", "scope": "lte", "trust": 1.0, "vendor": "oracle", "version": "5.7.25" }, { "model": "a320", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "big-ip policy enforcement manager", "scope": "lte", "trust": 1.0, "vendor": "f5", "version": "13.1.3" }, { "model": "service processor", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "big-ip domain name system", "scope": "lte", "trust": 1.0, "vendor": "f5", "version": "15.1.0" }, { "model": "big-ip analytics", "scope": "gte", "trust": 1.0, "vendor": "f5", "version": "12.1.0" }, { "model": "data exchange layer", "scope": "lt", "trust": 1.0, "vendor": "mcafee", "version": "6.0.0" }, { "model": "big-ip application security manager", "scope": "lte", "trust": 1.0, "vendor": "f5", "version": "15.1.0" }, { "model": "communications session border controller", 
"scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "7.4" }, { "model": "big-ip application acceleration manager", "scope": "lte", "trust": 1.0, "vendor": "f5", "version": "12.1.5" }, { "model": "agent", "scope": "gte", "trust": 1.0, "vendor": "mcafee", "version": "5.6.0" }, { "model": "leap", "scope": "eq", "trust": 1.0, "vendor": "opensuse", "version": "15.0" }, { "model": "active iq unified manager", "scope": "gte", "trust": 1.0, "vendor": "netapp", "version": "7.3" }, { "model": "big-ip policy enforcement manager", "scope": "gte", "trust": 1.0, "vendor": "f5", "version": "15.0.0" }, { "model": "active iq unified manager", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "pan-os", "scope": "lt", "trust": 1.0, "vendor": "paloaltonetworks", "version": "7.1.15" }, { "model": "fedora", "scope": "eq", "trust": 1.0, "vendor": "fedoraproject", "version": "31" }, { "model": "node.js", "scope": "lte", "trust": 1.0, "vendor": "nodejs", "version": "8.8.1" }, { "model": "openssl", "scope": "lt", "trust": 1.0, "vendor": "openssl", "version": "1.0.2r" }, { "model": "big-ip local traffic manager", "scope": "gte", "trust": 1.0, "vendor": "f5", "version": "15.0.0" }, { "model": "enterprise manager base platform", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "13.3.0.0.0" }, { "model": "oncommand insight", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "enterprise manager ops center", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "12.4.0" }, { "model": "enterprise linux workstation", "scope": "eq", "trust": 1.0, "vendor": "redhat", "version": "7.0" }, { "model": "big-ip policy enforcement manager", "scope": "lte", "trust": 1.0, "vendor": "f5", "version": "15.1.0" }, { "model": "jp1/snmp system observer", "scope": null, "trust": 0.8, "vendor": "\u65e5\u7acb", "version": null }, { "model": "steelstore cloud integrated storage", "scope": null, "trust": 0.8, "vendor": "netapp", 
"version": null }, { "model": "oncommand workflow automation", "scope": null, "trust": 0.8, "vendor": "netapp", "version": null }, { "model": "jp1/operations analytics", "scope": null, "trust": 0.8, "vendor": "\u65e5\u7acb", "version": null }, { "model": "job management system partern 1/automatic job management system 3", "scope": null, "trust": 0.8, "vendor": "\u65e5\u7acb", "version": null }, { "model": "storagegrid webscale", "scope": null, "trust": 0.8, "vendor": "netapp", "version": null }, { "model": "nessus", "scope": null, "trust": 0.8, "vendor": "tenable", "version": null }, { "model": "ucosminexus service architect", "scope": null, "trust": 0.8, "vendor": "\u65e5\u7acb", "version": null }, { "model": "leap", "scope": null, "trust": 0.8, "vendor": "opensuse", "version": null }, { "model": "jp1/automatic job management system 3", "scope": null, "trust": 0.8, "vendor": "\u65e5\u7acb", "version": null }, { "model": "traffix sdc", "scope": null, "trust": 0.8, "vendor": "f5", "version": null }, { "model": "jp1/data highway", "scope": null, "trust": 0.8, "vendor": "\u65e5\u7acb", "version": null }, { "model": "openssl", "scope": null, "trust": 0.8, "vendor": "openssl", "version": null }, { "model": "ucosminexus primary server", "scope": null, "trust": 0.8, "vendor": "\u65e5\u7acb", "version": null }, { "model": "ucosminexus developer", "scope": null, "trust": 0.8, "vendor": "\u65e5\u7acb", "version": null }, { "model": "ubuntu", "scope": null, "trust": 0.8, "vendor": "canonical", "version": null }, { "model": "ucosminexus service platform", "scope": null, "trust": 0.8, "vendor": "\u65e5\u7acb", "version": null }, { "model": "santricity smi-s provider", "scope": null, "trust": 0.8, "vendor": "netapp", "version": null }, { "model": "gnu/linux", "scope": null, "trust": 0.8, "vendor": "debian", "version": null }, { "model": "ontap select deploy administration utility", "scope": null, "trust": 0.8, "vendor": "netapp", "version": null }, { "model": "jp1/it desktop 
management 2", "scope": null, "trust": 0.8, "vendor": "\u65e5\u7acb", "version": null }, { "model": "jp1/performance management", "scope": null, "trust": 0.8, "vendor": "\u65e5\u7acb", "version": null }, { "model": "ontap select deploy", "scope": null, "trust": 0.8, "vendor": "netapp", "version": null }, { "model": "snapdrive", "scope": null, "trust": 0.8, "vendor": "netapp", "version": null }, { "model": "oncommand unified manager", "scope": null, "trust": 0.8, "vendor": "netapp", "version": null }, { "model": "jp1/automatic operation", "scope": null, "trust": 0.8, "vendor": "\u65e5\u7acb", "version": null }, { "model": "cosminexus http server", "scope": null, "trust": 0.8, "vendor": "\u65e5\u7acb", "version": null }, { "model": "hyper converged infrastructure", "scope": null, "trust": 0.8, "vendor": "netapp", "version": null }, { "model": "element software", "scope": null, "trust": 0.8, "vendor": "netapp", "version": null }, { "model": "ucosminexus application server", "scope": null, "trust": 0.8, "vendor": "\u65e5\u7acb", "version": null } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2019-002098" }, { "db": "NVD", "id": "CVE-2019-1559" } ] }, "configurations": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/configurations#", "children": { "@container": "@list" }, "cpe_match": { "@container": "@list" }, "data": { "@container": "@list" }, "nodes": { "@container": "@list" } }, "data": [ { "CVE_data_version": "4.0", "nodes": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:openssl:openssl:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "1.0.2r", "versionStartIncluding": "1.0.2", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:canonical:ubuntu_linux:18.04:*:*:*:lts:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:canonical:ubuntu_linux:18.10:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": 
"cpe:2.3:o:canonical:ubuntu_linux:16.04:*:*:*:esm:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:debian:debian_linux:8.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:debian:debian_linux:9.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:netapp:hyper_converged_infrastructure:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:cloud_backup:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:santricity_smi-s_provider:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:element_software:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:snapdrive:-:*:*:*:*:unix:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:snapcenter:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:storage_automation_store:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:ontap_select_deploy:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:steelstore_cloud_integrated_storage:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:oncommand_unified_manager:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:oncommand_workflow_automation:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:storagegrid:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:storagegrid:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "9.0.4", "versionStartIncluding": "9.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:oncommand_insight:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, 
{ "cpe23Uri": "cpe:2.3:a:netapp:ontap_select_deploy_administration_utility:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:oncommand_unified_manager:-:*:*:*:*:vsphere:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:service_processor:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:smi-s_provider:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:clustered_data_ontap_antivirus_connector:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:snapdrive:-:*:*:*:*:windows:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:active_iq_unified_manager:*:*:*:*:*:windows:*:*", "cpe_name": [], "versionStartIncluding": "7.3", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:active_iq_unified_manager:*:*:*:*:*:vmware_vsphere:*:*", "cpe_name": [], "versionStartIncluding": "9.5", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:solidfire:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:hci_management_node:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:snapprotect:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:h:netapp:hci_compute_node:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:oncommand_unified_manager_core_package:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:active_iq_unified_manager:-:*:*:*:*:windows:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:altavault:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:f5:traffix_signaling_delivery_controller:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "5.1.0", "versionStartIncluding": "5.0.0", "vulnerable": true 
}, { "cpe23Uri": "cpe:2.3:a:f5:traffix_signaling_delivery_controller:4.4.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:f5:big-iq_centralized_management:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "6.1.0", "versionStartIncluding": "6.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:f5:big-ip_local_traffic_manager:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "12.1.5", "versionStartIncluding": "12.1.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:f5:big-ip_local_traffic_manager:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "13.1.3", "versionStartIncluding": "13.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:f5:big-ip_local_traffic_manager:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "14.1.2", "versionStartIncluding": "14.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:f5:big-ip_advanced_firewall_manager:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "12.1.5", "versionStartIncluding": "12.1.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:f5:big-ip_advanced_firewall_manager:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "13.1.3", "versionStartIncluding": "13.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:f5:big-ip_advanced_firewall_manager:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "14.1.2", "versionStartIncluding": "14.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:f5:big-ip_application_acceleration_manager:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "12.1.5", "versionStartIncluding": "12.1.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:f5:big-ip_application_acceleration_manager:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "13.1.3", "versionStartIncluding": "13.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:f5:big-ip_application_acceleration_manager:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "14.1.2", "versionStartIncluding": "14.0.0", "vulnerable": true }, { "cpe23Uri": 
"cpe:2.3:a:f5:big-ip_analytics:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "12.1.5", "versionStartIncluding": "12.1.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:f5:big-ip_analytics:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "13.1.3", "versionStartIncluding": "13.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:f5:big-ip_analytics:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "14.1.2", "versionStartIncluding": "14.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:f5:big-ip_access_policy_manager:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "12.1.5", "versionStartIncluding": "12.1.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:f5:big-ip_access_policy_manager:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "13.1.3", "versionStartIncluding": "13.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:f5:big-ip_access_policy_manager:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "14.1.2", "versionStartIncluding": "14.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:f5:big-ip_application_security_manager:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "12.1.5", "versionStartIncluding": "12.1.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:f5:big-ip_application_security_manager:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "13.1.3", "versionStartIncluding": "13.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:f5:big-ip_application_security_manager:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "14.1.2", "versionStartIncluding": "14.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:f5:big-ip_edge_gateway:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "12.1.5", "versionStartIncluding": "12.1.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:f5:big-ip_edge_gateway:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "13.1.3", "versionStartIncluding": "13.0.0", "vulnerable": true }, { "cpe23Uri": 
"cpe:2.3:a:f5:big-ip_edge_gateway:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "14.1.2", "versionStartIncluding": "14.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:f5:big-ip_fraud_protection_service:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "12.1.5", "versionStartIncluding": "12.1.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:f5:big-ip_fraud_protection_service:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "13.1.3", "versionStartIncluding": "13.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:f5:big-ip_fraud_protection_service:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "14.1.2", "versionStartIncluding": "14.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:f5:big-ip_global_traffic_manager:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "12.1.5", "versionStartIncluding": "12.1.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:f5:big-ip_global_traffic_manager:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "13.1.3", "versionStartIncluding": "13.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:f5:big-ip_global_traffic_manager:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "14.1.2", "versionStartIncluding": "14.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:f5:big-ip_link_controller:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "12.1.5", "versionStartIncluding": "12.1.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:f5:big-ip_link_controller:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "13.1.3", "versionStartIncluding": "13.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:f5:big-ip_link_controller:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "14.1.2", "versionStartIncluding": "14.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:f5:big-ip_policy_enforcement_manager:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "12.1.5", "versionStartIncluding": "12.1.0", "vulnerable": true }, { "cpe23Uri": 
"cpe:2.3:a:f5:big-ip_policy_enforcement_manager:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "13.1.3", "versionStartIncluding": "13.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:f5:big-ip_policy_enforcement_manager:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "14.1.2", "versionStartIncluding": "14.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:f5:big-ip_webaccelerator:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "12.1.5", "versionStartIncluding": "12.1.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:f5:big-ip_webaccelerator:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "13.1.3", "versionStartIncluding": "13.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:f5:big-ip_webaccelerator:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "14.1.2", "versionStartIncluding": "14.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:f5:big-ip_domain_name_system:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "12.1.5", "versionStartIncluding": "12.1.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:f5:big-ip_domain_name_system:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "13.1.3", "versionStartIncluding": "13.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:f5:big-ip_domain_name_system:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "14.1.2", "versionStartIncluding": "14.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:f5:big-ip_access_policy_manager:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "15.1.0", "versionStartIncluding": "15.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:f5:big-ip_advanced_firewall_manager:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "15.1.0", "versionStartIncluding": "15.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:f5:big-ip_analytics:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "15.1.0", "versionStartIncluding": "15.0.0", "vulnerable": true }, { "cpe23Uri": 
"cpe:2.3:a:f5:big-ip_application_acceleration_manager:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "15.1.0", "versionStartIncluding": "15.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:f5:big-ip_application_security_manager:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "15.1.0", "versionStartIncluding": "15.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:f5:big-ip_domain_name_system:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "15.1.0", "versionStartIncluding": "15.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:f5:big-ip_edge_gateway:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "15.1.0", "versionStartIncluding": "15.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:f5:big-ip_fraud_protection_service:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "15.1.0", "versionStartIncluding": "15.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:f5:big-ip_global_traffic_manager:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "15.1.0", "versionStartIncluding": "15.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:f5:big-ip_link_controller:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "15.1.0", "versionStartIncluding": "15.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:f5:big-ip_local_traffic_manager:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "15.1.0", "versionStartIncluding": "15.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:f5:big-ip_policy_enforcement_manager:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "15.1.0", "versionStartIncluding": "15.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:f5:big-ip_webaccelerator:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "15.1.0", "versionStartIncluding": "15.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:f5:big-iq_centralized_management:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "7.1.0", "versionStartIncluding": "7.0.0", "vulnerable": true } ], "operator": "OR" }, 
{ "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:tenable:nessus:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "8.2.3", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:opensuse:leap:42.3:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:opensuse:leap:15.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:opensuse:leap:15.1:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:cn1610_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:cn1610:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:a320_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:a320:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:c190_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:c190:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:a220_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:a220:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ 
{ "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:fas2720_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:fas2720:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:fas2750_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:fas2750:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:a800_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:a800:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:29:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:30:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:31:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:mcafee:data_exchange_layer:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "6.0.0", "versionStartIncluding": "4.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:mcafee:agent:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "5.6.4", "versionStartIncluding": "5.6.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:mcafee:threat_intelligence_exchange_server:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "3.0.0", "versionStartIncluding": "2.0.0", "vulnerable": true 
}, { "cpe23Uri": "cpe:2.3:a:mcafee:web_gateway:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "9.0.0", "versionStartIncluding": "7.0.0", "vulnerable": true } ], "operator": "OR" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:redhat:jboss_enterprise_web_server:5.0.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux:6.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false }, { "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux:7.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false }, { "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux:8.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:redhat:virtualization:4.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:redhat:virtualization_host:4.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux:7.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_desktop:7.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_workstation:7.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_server:7.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_desktop:6.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_server:6.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux_workstation:6.0:*:*:*:*:*:*:*", "cpe_name": [], 
"vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:oracle:jd_edwards_enterpriseone_tools:9.2:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:api_gateway:11.1.2.4.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:business_intelligence:11.1.1.9.0:*:*:*:enterprise:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:peoplesoft_enterprise_peopletools:8.55:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:peoplesoft_enterprise_peopletools:8.56:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:business_intelligence:12.2.1.3.0:*:*:*:enterprise:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:enterprise_manager_ops_center:12.3.3:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:peoplesoft_enterprise_peopletools:8.57:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:secure_global_desktop:5.4:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:mysql:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "8.0.15", "versionStartIncluding": "8.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:mysql:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "5.7.25", "versionStartIncluding": "5.7.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:communications_session_border_controller:8.1.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:communications_session_border_controller:8.0.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:jd_edwards_world_security:a9.3:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:jd_edwards_world_security:a9.4:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { 
"cpe23Uri": "cpe:2.3:a:oracle:business_intelligence:12.2.1.4.0:*:*:*:enterprise:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:mysql:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "5.6.43", "versionStartIncluding": "5.6.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:enterprise_manager_base_platform:13.2.0.0.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:enterprise_manager_base_platform:12.1.0.5.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:enterprise_manager_base_platform:13.3.0.0.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:enterprise_manager_ops_center:12.4.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:services_tools_bundle:19.2:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:communications_diameter_signaling_router:8.0.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:communications_diameter_signaling_router:8.1:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:communications_diameter_signaling_router:8.2:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:communications_diameter_signaling_router:8.3:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:communications_session_border_controller:8.3:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:communications_performance_intelligence_center:10.4.0.2:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:communications_session_border_controller:8.2:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:mysql_enterprise_monitor:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "8.0.14", "versionStartIncluding": 
"8.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:mysql_enterprise_monitor:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "4.0.8", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:jd_edwards_world_security:a9.3.1:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:communications_session_router:7.4:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:communications_session_router:8.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:communications_session_router:8.1:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:endeca_server:7.7.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:mysql_workbench:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "8.0.16", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:communications_session_router:8.2:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:communications_session_router:8.3:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:communications_session_border_controller:7.4:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:communications_diameter_signaling_router:8.4:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:communications_unified_session_manager:7.3.5:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:communications_unified_session_manager:8.2.5:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:paloaltonetworks:pan-os:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "8.0.20", "versionStartIncluding": "8.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:paloaltonetworks:pan-os:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": 
"8.1.8", "versionStartIncluding": "8.1.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:paloaltonetworks:pan-os:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "9.0.2", "versionStartIncluding": "9.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:paloaltonetworks:pan-os:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "7.1.15", "versionStartIncluding": "7.1.0", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:nodejs:node.js:*:*:*:*:-:*:*:*", "cpe_name": [], "versionEndIncluding": "6.8.1", "versionStartIncluding": "6.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:nodejs:node.js:*:*:*:*:-:*:*:*", "cpe_name": [], "versionEndIncluding": "8.8.1", "versionStartIncluding": "8.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:nodejs:node.js:*:*:*:*:lts:*:*:*", "cpe_name": [], "versionEndExcluding": "6.17.0", "versionStartIncluding": "6.9.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:nodejs:node.js:*:*:*:*:lts:*:*:*", "cpe_name": [], "versionEndExcluding": "8.15.1", "versionStartIncluding": "8.9.0", "vulnerable": true } ], "operator": "OR" } ] } ], "sources": [ { "db": "NVD", "id": "CVE-2019-1559" } ] }, "credits": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/credits#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Robert Merget and Nimrod Aviram, with additional investigation by Steven Collison and Andrew Hourselt,Red Hat,Slackware Security Team,Juraj Somorovsky", "sources": [ { "db": "CNNVD", "id": "CNNVD-201902-956" } ], "trust": 0.6 }, "cve": "CVE-2019-1559", "cvss": { "@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2" }, "cvssV3": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, 
"severity": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": "https://www.variotdbs.pl/ref/cvss/severity" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "cvssV2": [ { "acInsufInfo": false, "accessComplexity": "MEDIUM", "accessVector": "NETWORK", "authentication": "NONE", "author": "NVD", "availabilityImpact": "NONE", "baseScore": 4.3, "confidentialityImpact": "PARTIAL", "exploitabilityScore": 8.6, "impactScore": 2.9, "integrityImpact": "NONE", "obtainAllPrivilege": false, "obtainOtherPrivilege": false, "obtainUserPrivilege": false, "severity": "MEDIUM", "trust": 1.0, "userInteractionRequired": false, "vectorString": "AV:N/AC:M/Au:N/C:P/I:N/A:N", "version": "2.0" }, { "acInsufInfo": null, "accessComplexity": "Medium", "accessVector": "Network", "authentication": "None", "author": "NVD", "availabilityImpact": "None", "baseScore": 4.3, "confidentialityImpact": "Partial", "exploitabilityScore": null, "id": "CVE-2019-1559", "impactScore": null, "integrityImpact": "None", "obtainAllPrivilege": null, "obtainOtherPrivilege": null, "obtainUserPrivilege": null, "severity": "Medium", "trust": 0.9, "userInteractionRequired": null, "vectorString": "AV:N/AC:M/Au:N/C:P/I:N/A:N", "version": "2.0" }, { "accessComplexity": "MEDIUM", "accessVector": "NETWORK", "authentication": "NONE", "author": "VULHUB", "availabilityImpact": "NONE", "baseScore": 4.3, "confidentialityImpact": "PARTIAL", "exploitabilityScore": 8.6, "id": "VHN-147651", "impactScore": 2.9, "integrityImpact": "NONE", "severity": "MEDIUM", "trust": 0.1, "vectorString": "AV:N/AC:M/AU:N/C:P/I:N/A:N", "version": "2.0" } ], "cvssV3": [ { "attackComplexity": "HIGH", "attackVector": "NETWORK", "author": "NVD", "availabilityImpact": "NONE", "baseScore": 5.9, "baseSeverity": "MEDIUM", "confidentialityImpact": "HIGH", "exploitabilityScore": 2.2, 
"impactScore": 3.6, "integrityImpact": "NONE", "privilegesRequired": "NONE", "scope": "UNCHANGED", "trust": 1.0, "userInteraction": "NONE", "vectorString": "CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:H/I:N/A:N", "version": "3.1" }, { "attackComplexity": "High", "attackVector": "Network", "author": "NVD", "availabilityImpact": "None", "baseScore": 5.9, "baseSeverity": "Medium", "confidentialityImpact": "High", "exploitabilityScore": null, "id": "CVE-2019-1559", "impactScore": null, "integrityImpact": "None", "privilegesRequired": "None", "scope": "Unchanged", "trust": 0.8, "userInteraction": "None", "vectorString": "CVSS:3.0/AV:N/AC:H/PR:N/UI:N/S:U/C:H/I:N/A:N", "version": "3.0" } ], "severity": [ { "author": "NVD", "id": "CVE-2019-1559", "trust": 1.8, "value": "MEDIUM" }, { "author": "CNNVD", "id": "CNNVD-201902-956", "trust": 0.6, "value": "MEDIUM" }, { "author": "VULHUB", "id": "VHN-147651", "trust": 0.1, "value": "MEDIUM" }, { "author": "VULMON", "id": "CVE-2019-1559", "trust": 0.1, "value": "MEDIUM" } ] } ], "sources": [ { "db": "VULHUB", "id": "VHN-147651" }, { "db": "VULMON", "id": "CVE-2019-1559" }, { "db": "JVNDB", "id": "JVNDB-2019-002098" }, { "db": "CNNVD", "id": "CNNVD-201902-956" }, { "db": "NVD", "id": "CVE-2019-1559" } ] }, "description": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "If an application encounters a fatal protocol error and then calls SSL_shutdown() twice (once to send a close_notify, and once to receive one) then OpenSSL can respond differently to the calling application if a 0 byte record is received with invalid padding compared to if a 0 byte record is received with an invalid MAC. If the application then behaves differently based on that in a way that is detectable to the remote peer, then this amounts to a padding oracle that could be used to decrypt data. 
In order for this to be exploitable \"non-stitched\" ciphersuites must be in use. Stitched ciphersuites are optimised implementations of certain commonly used ciphersuites. Also the application must call SSL_shutdown() twice even if a protocol error has occurred (applications should not do this but some do anyway). Fixed in OpenSSL 1.0.2r (Affected 1.0.2-1.0.2q). OpenSSL Contains an information disclosure vulnerability.Information may be obtained. The product supports a variety of encryption algorithms, including symmetric ciphers, hash algorithms, secure hash algorithms, etc. A vulnerability in OpenSSL could allow an unauthenticated, remote malicious user to access sensitive information on a targeted system. An attacker who is able to perform man-in-the-middle attacks could exploit the vulnerability by persuading a user to access a link that submits malicious input to the affected software. A successful exploit could allow the malicious user to intercept and modify the browser requests and then observe the server behavior in order to conduct a padding oracle attack and decrypt sensitive information. The appliance is available\nto download as an OVA file from the Customer Portal. ==========================================================================\nUbuntu Security Notice USN-4376-2\nJuly 09, 2020\n\nopenssl vulnerabilities\n==========================================================================\n\nA security issue affects these releases of Ubuntu and its derivatives:\n\n- Ubuntu 14.04 ESM\n- Ubuntu 12.04 ESM\n\nSummary:\n\nSeveral security issues were fixed in OpenSSL. This update provides\nthe corresponding update for Ubuntu 12.04 ESM and Ubuntu 14.04 ESM. \n\nOriginal advisory details:\n\n Cesar Pereida Garc\\xeda, Sohaib ul Hassan, Nicola Tuveri, Iaroslav Gridin,\n Alejandro Cabrera Aldaya, and Billy Brumley discovered that OpenSSL\n incorrectly handled ECDSA signatures. 
An attacker could possibly use this\n issue to perform a timing side-channel attack and recover private ECDSA\n keys. A remote attacker could possibly use this issue to decrypt\n data. (CVE-2019-1559)\n\n Bernd Edlinger discovered that OpenSSL incorrectly handled certain\n decryption functions. (CVE-2019-1563)\n\nUpdate instructions:\n\nThe problem can be corrected by updating your system to the following\npackage versions:\n\nUbuntu 14.04 ESM:\n libssl1.0.0 1.0.1f-1ubuntu2.27+esm1\n\nUbuntu 12.04 ESM:\n libssl1.0.0 1.0.1-4ubuntu5.44\n\nAfter a standard system update you need to reboot your computer to make\nall the necessary changes. Description:\n\nRed Hat JBoss Web Server is a fully integrated and certified set of\ncomponents for hosting Java web applications. It is comprised of the Apache\nTomcat Servlet container, JBoss HTTP Connector (mod_cluster), the\nPicketLink Vault extension for Apache Tomcat, and the Tomcat Native\nlibrary. Solution:\n\nBefore applying this update, make sure all previously released errata\nrelevant to your system have been applied. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n==================================================================== \nRed Hat Security Advisory\n\nSynopsis: Moderate: openssl security and bug fix update\nAdvisory ID: RHSA-2019:2304-01\nProduct: Red Hat Enterprise Linux\nAdvisory URL: https://access.redhat.com/errata/RHSA-2019:2304\nIssue date: 2019-08-06\nCVE Names: CVE-2018-0734 CVE-2019-1559\n====================================================================\n1. Summary:\n\nAn update for openssl is now available for Red Hat Enterprise Linux 7. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRed Hat Enterprise Linux Client (v. 
7) - x86_64\nRed Hat Enterprise Linux Client Optional (v. 7) - x86_64\nRed Hat Enterprise Linux ComputeNode (v. 7) - x86_64\nRed Hat Enterprise Linux ComputeNode Optional (v. 7) - x86_64\nRed Hat Enterprise Linux Server (v. 7) - ppc64, ppc64le, s390x, x86_64\nRed Hat Enterprise Linux Server Optional (v. 7) - ppc64, ppc64le, s390x, x86_64\nRed Hat Enterprise Linux Workstation (v. 7) - x86_64\nRed Hat Enterprise Linux Workstation Optional (v. 7) - x86_64\n\n3. Description:\n\nOpenSSL is a toolkit that implements the Secure Sockets Layer (SSL) and\nTransport Layer Security (TLS) protocols, as well as a full-strength\ngeneral-purpose cryptography library. \n\nSecurity Fix(es):\n\n* openssl: 0-byte record padding oracle (CVE-2019-1559)\n\n* openssl: timing side channel attack in the DSA signature algorithm\n(CVE-2018-0734)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\nAdditional Changes:\n\nFor detailed information on changes in this release, see the Red Hat\nEnterprise Linux 7.7 Release Notes linked from the References section. \n\n4. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\nFor the update to take effect, all services linked to the OpenSSL library\nmust be restarted, or the system rebooted. \n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n1644364 - CVE-2018-0734 openssl: timing side channel attack in the DSA signature algorithm\n1649568 - openssl: microarchitectural and timing side channel padding oracle attack against RSA\n1683804 - CVE-2019-1559 openssl: 0-byte record padding oracle\n\n6. Package List:\n\nRed Hat Enterprise Linux Client (v. 
7):\n\nSource:\nopenssl-1.0.2k-19.el7.src.rpm\n\nx86_64:\nopenssl-1.0.2k-19.el7.x86_64.rpm\nopenssl-debuginfo-1.0.2k-19.el7.i686.rpm\nopenssl-debuginfo-1.0.2k-19.el7.x86_64.rpm\nopenssl-libs-1.0.2k-19.el7.i686.rpm\nopenssl-libs-1.0.2k-19.el7.x86_64.rpm\n\nRed Hat Enterprise Linux Client Optional (v. 7):\n\nx86_64:\nopenssl-debuginfo-1.0.2k-19.el7.i686.rpm\nopenssl-debuginfo-1.0.2k-19.el7.x86_64.rpm\nopenssl-devel-1.0.2k-19.el7.i686.rpm\nopenssl-devel-1.0.2k-19.el7.x86_64.rpm\nopenssl-perl-1.0.2k-19.el7.x86_64.rpm\nopenssl-static-1.0.2k-19.el7.i686.rpm\nopenssl-static-1.0.2k-19.el7.x86_64.rpm\n\nRed Hat Enterprise Linux ComputeNode (v. 7):\n\nSource:\nopenssl-1.0.2k-19.el7.src.rpm\n\nx86_64:\nopenssl-1.0.2k-19.el7.x86_64.rpm\nopenssl-debuginfo-1.0.2k-19.el7.i686.rpm\nopenssl-debuginfo-1.0.2k-19.el7.x86_64.rpm\nopenssl-libs-1.0.2k-19.el7.i686.rpm\nopenssl-libs-1.0.2k-19.el7.x86_64.rpm\n\nRed Hat Enterprise Linux ComputeNode Optional (v. 7):\n\nx86_64:\nopenssl-debuginfo-1.0.2k-19.el7.i686.rpm\nopenssl-debuginfo-1.0.2k-19.el7.x86_64.rpm\nopenssl-devel-1.0.2k-19.el7.i686.rpm\nopenssl-devel-1.0.2k-19.el7.x86_64.rpm\nopenssl-perl-1.0.2k-19.el7.x86_64.rpm\nopenssl-static-1.0.2k-19.el7.i686.rpm\nopenssl-static-1.0.2k-19.el7.x86_64.rpm\n\nRed Hat Enterprise Linux Server (v. 
7):\n\nSource:\nopenssl-1.0.2k-19.el7.src.rpm\n\nppc64:\nopenssl-1.0.2k-19.el7.ppc64.rpm\nopenssl-debuginfo-1.0.2k-19.el7.ppc.rpm\nopenssl-debuginfo-1.0.2k-19.el7.ppc64.rpm\nopenssl-devel-1.0.2k-19.el7.ppc.rpm\nopenssl-devel-1.0.2k-19.el7.ppc64.rpm\nopenssl-libs-1.0.2k-19.el7.ppc.rpm\nopenssl-libs-1.0.2k-19.el7.ppc64.rpm\n\nppc64le:\nopenssl-1.0.2k-19.el7.ppc64le.rpm\nopenssl-debuginfo-1.0.2k-19.el7.ppc64le.rpm\nopenssl-devel-1.0.2k-19.el7.ppc64le.rpm\nopenssl-libs-1.0.2k-19.el7.ppc64le.rpm\n\ns390x:\nopenssl-1.0.2k-19.el7.s390x.rpm\nopenssl-debuginfo-1.0.2k-19.el7.s390.rpm\nopenssl-debuginfo-1.0.2k-19.el7.s390x.rpm\nopenssl-devel-1.0.2k-19.el7.s390.rpm\nopenssl-devel-1.0.2k-19.el7.s390x.rpm\nopenssl-libs-1.0.2k-19.el7.s390.rpm\nopenssl-libs-1.0.2k-19.el7.s390x.rpm\n\nx86_64:\nopenssl-1.0.2k-19.el7.x86_64.rpm\nopenssl-debuginfo-1.0.2k-19.el7.i686.rpm\nopenssl-debuginfo-1.0.2k-19.el7.x86_64.rpm\nopenssl-devel-1.0.2k-19.el7.i686.rpm\nopenssl-devel-1.0.2k-19.el7.x86_64.rpm\nopenssl-libs-1.0.2k-19.el7.i686.rpm\nopenssl-libs-1.0.2k-19.el7.x86_64.rpm\n\nRed Hat Enterprise Linux Server Optional (v. 7):\n\nppc64:\nopenssl-debuginfo-1.0.2k-19.el7.ppc.rpm\nopenssl-debuginfo-1.0.2k-19.el7.ppc64.rpm\nopenssl-perl-1.0.2k-19.el7.ppc64.rpm\nopenssl-static-1.0.2k-19.el7.ppc.rpm\nopenssl-static-1.0.2k-19.el7.ppc64.rpm\n\nppc64le:\nopenssl-debuginfo-1.0.2k-19.el7.ppc64le.rpm\nopenssl-perl-1.0.2k-19.el7.ppc64le.rpm\nopenssl-static-1.0.2k-19.el7.ppc64le.rpm\n\ns390x:\nopenssl-debuginfo-1.0.2k-19.el7.s390.rpm\nopenssl-debuginfo-1.0.2k-19.el7.s390x.rpm\nopenssl-perl-1.0.2k-19.el7.s390x.rpm\nopenssl-static-1.0.2k-19.el7.s390.rpm\nopenssl-static-1.0.2k-19.el7.s390x.rpm\n\nx86_64:\nopenssl-debuginfo-1.0.2k-19.el7.i686.rpm\nopenssl-debuginfo-1.0.2k-19.el7.x86_64.rpm\nopenssl-perl-1.0.2k-19.el7.x86_64.rpm\nopenssl-static-1.0.2k-19.el7.i686.rpm\nopenssl-static-1.0.2k-19.el7.x86_64.rpm\n\nRed Hat Enterprise Linux Workstation (v. 
7):\n\nSource:\nopenssl-1.0.2k-19.el7.src.rpm\n\nx86_64:\nopenssl-1.0.2k-19.el7.x86_64.rpm\nopenssl-debuginfo-1.0.2k-19.el7.i686.rpm\nopenssl-debuginfo-1.0.2k-19.el7.x86_64.rpm\nopenssl-devel-1.0.2k-19.el7.i686.rpm\nopenssl-devel-1.0.2k-19.el7.x86_64.rpm\nopenssl-libs-1.0.2k-19.el7.i686.rpm\nopenssl-libs-1.0.2k-19.el7.x86_64.rpm\n\nRed Hat Enterprise Linux Workstation Optional (v. 7):\n\nx86_64:\nopenssl-debuginfo-1.0.2k-19.el7.i686.rpm\nopenssl-debuginfo-1.0.2k-19.el7.x86_64.rpm\nopenssl-perl-1.0.2k-19.el7.x86_64.rpm\nopenssl-static-1.0.2k-19.el7.i686.rpm\nopenssl-static-1.0.2k-19.el7.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. References:\n\nhttps://access.redhat.com/security/cve/CVE-2018-0734\nhttps://access.redhat.com/security/cve/CVE-2019-1559\nhttps://access.redhat.com/security/updates/classification/#moderate\nhttps://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/7.7_release_notes/index\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2019 Red Hat, Inc. 
\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBXUl3otzjgjWX9erEAQgZQQ//XNcjRJGLVmjAzbVGiwxEqfFUvDVNiu97\nfW0vLXuV9TnQTveOVqOAWmmMv2iShkVIRPDvzlOfUsYrrDEYHKr0N38R/fhDEZsM\nWQrJh54WK9IjEGNevLTCePKMhVuII1WnHrLDwZ6hxYGdcap/sJrf+N428b5LvHbM\nB39vWl3vqJYXoiI5dmIYL8ko2SfLms5Cg+dR0hLrNohf9gK2La+jhWb/j2xw6X6q\n/LXw5+hi/G+USbnNFfjt9G0fNjMMZRX2bukUvY6UWJRYTOXpIUOFqqp5w9zgM7tZ\nuX7TMTC9xe6te4mBCAFDdt+kYYLYSHfSkFlFq+S7V0MY8DmnIzqBJE4lJIDTVp9F\nJbrMIPs9G5jdnzPUKZw/gH9WLgka8Q8AYI+KA2xSxFX9VZ20Z+EDDC9/4uwj3i0A\ngLeIB68OwD70jn4sjuQqizr7TCviQhTUoKVd/mTBAxSEFZLcE8Sy/BEYxLPm81z0\nveL16l6pmfg9uLac4V576ImfYNWlBEnJspA5E9K5CqQRPuZpCQFov7/D17Qm8v/x\nIcVKUaXiGquBwzHmIsD5lTCpl7CrGoU1PfNJ6Y/4xrVFOh1DLA4y6nnfysyO9eZx\nzBfuYS2VmfIq/tp1CjagI/DmJC4ezXeE4Phq9jm0EBASXtnLzVmc5j7kkqWjCcfm\nBtpJTAdr1kE=7kKR\n-----END PGP SIGNATURE-----\n\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://www.redhat.com/mailman/listinfo/rhsa-announce\n. These packages include redhat-release-virtualization-host,\novirt-node, and rhev-hypervisor. RHVH features a Cockpit user\ninterface for monitoring the host\u0027s resources and performing administrative\ntasks. \n\nThe following packages have been upgraded to a later upstream version:\nimgbased (1.1.9), ovirt-node-ng (4.3.5), redhat-release-virtualization-host\n(4.3.5), redhat-virtualization-host (4.3.5). 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1640820 - CVE-2018-16838 sssd: improper implementation of GPOs due to too restrictive permissions\n1658366 - CVE-2018-16881 rsyslog: imptcp: integer overflow when Octet-Counted TCP Framing is enabled\n1683804 - CVE-2019-1559 openssl: 0-byte record padding oracle\n1687920 - RHVH fails to reinstall if required size is exceeding the available disk space due to anaconda bug\n1694065 - CVE-2019-0161 edk2: stack overflow in XHCI causing denial of service\n1702223 - Rebase RHV-H on RHEL 7.7\n1709829 - CVE-2019-10139 cockpit-ovirt: admin and appliance passwords saved in plain text variable file during HE deployment\n1718388 - CVE-2019-10160 python: regression of CVE-2019-9636 due to functional fix to allow port numbers in netloc\n1720156 - RHVH 4.3.4 version info is incorrect in plymouth and \"/etc/os-release\"\n1720160 - RHVH 4.3.4: Incorrect info in /etc/system-release-cpe\n1720310 - RHV-H post-installation scripts failing, due to existing tags\n1720434 - RHVH 7.7 brand is wrong in Anaconda GUI. \n1720435 - Failed to install RHVH 7.7\n1720436 - RHVH 7.7 should based on RHEL 7.7 server but not workstation. \n1724044 - Failed dependencies occur during install systemtap package. \n1726534 - dhclient fails to load libdns-export.so.1102 after upgrade if the user installed library is not persisted on the new layer\n1727007 - Update RHVH 7.7 branding with new Red Hat logo\n1727859 - Failed to boot after upgrading a host with a custom kernel\n1728998 - \"nodectl info\" displays error after RHVH installation\n1729023 - The error message is inappropriate when run `imgbase layout --init` on current layout\n\n6. \n\nThis issue was discovered by Juraj Somorovsky, Robert Merget and Nimrod Aviram,\nwith additional investigation by Steven Collison and Andrew Hourselt. It was\nreported to OpenSSL on 10th December 2018. \n\nNote: Advisory updated to make it clearer that AEAD ciphersuites are not impacted. 
\n\nNote\n====\n\nOpenSSL 1.0.2 and 1.1.0 are currently only receiving security updates. Support\nfor 1.0.2 will end on 31st December 2019. Support for 1.1.0 will end on 11th\nSeptember 2019. Users of these versions should upgrade to OpenSSL 1.1.1. \n\nReferences\n==========\n\nURL for this Security Advisory:\nhttps://www.openssl.org/news/secadv/20190226.txt\n\nNote: the online version of the advisory may be updated with additional details\nover time. \n\nFor details of OpenSSL severity classifications please see:\nhttps://www.openssl.org/policies/secpolicy.html\n", "sources": [ { "db": "NVD", "id": "CVE-2019-1559" }, { "db": "JVNDB", "id": "JVNDB-2019-002098" }, { "db": "VULHUB", "id": "VHN-147651" }, { "db": "VULMON", "id": "CVE-2019-1559" }, { "db": "PACKETSTORM", "id": "154009" }, { "db": "PACKETSTORM", "id": "158377" }, { "db": "PACKETSTORM", "id": "155413" }, { "db": "PACKETSTORM", "id": "151885" }, { "db": "PACKETSTORM", "id": "155415" }, { "db": "PACKETSTORM", "id": "153932" }, { "db": "PACKETSTORM", "id": "154008" }, { "db": "PACKETSTORM", "id": "169635" } ], "trust": 2.52 }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "NVD", "id": "CVE-2019-1559", "trust": 3.4 }, { "db": "TENABLE", "id": "TNS-2019-03", "trust": 1.8 }, { "db": "TENABLE", "id": "TNS-2019-02", "trust": 1.8 }, { "db": "MCAFEE", "id": "SB10282", "trust": 1.8 }, { "db": "BID", "id": "107174", "trust": 1.8 }, { "db": "JVNDB", "id": "JVNDB-2019-002098", "trust": 0.8 }, { "db": "CNNVD", "id": "CNNVD-201902-956", "trust": 0.7 }, { "db": "PACKETSTORM", "id": "151886", "trust": 0.7 }, { "db": "PACKETSTORM", "id": "158377", "trust": 0.7 }, { "db": "PACKETSTORM", "id": "155415", "trust": 0.7 }, { "db": "AUSCERT", "id": "ESB-2019.4479.2", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2020.3729", 
"trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2020.0102", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2020.2383", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2020.3462", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2020.0487", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.4083", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2019.0620", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2019.0751.2", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2019.4558", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.0696", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2020.0192", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2019.4479", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2020.0032", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2020.4255", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2019.4297", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2019.0666", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2019.4405", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2019.3390.4", "trust": 0.6 }, { "db": "PULSESECURE", "id": "SA44019", "trust": 0.6 }, { "db": "PACKETSTORM", "id": "151885", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "151918", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "154042", "trust": 0.1 }, { "db": "VULHUB", "id": "VHN-147651", "trust": 0.1 }, { "db": "VULMON", "id": "CVE-2019-1559", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "154009", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "155413", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "153932", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "154008", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "169635", "trust": 0.1 } ], "sources": [ { "db": "VULHUB", "id": "VHN-147651" }, { "db": "VULMON", "id": "CVE-2019-1559" }, { "db": "JVNDB", "id": "JVNDB-2019-002098" }, { "db": "PACKETSTORM", "id": "154009" }, { "db": "PACKETSTORM", "id": "158377" }, { "db": "PACKETSTORM", "id": "155413" }, { "db": "PACKETSTORM", "id": "151885" }, { "db": "PACKETSTORM", "id": "155415" }, { "db": "PACKETSTORM", "id": "153932" 
}, { "db": "PACKETSTORM", "id": "154008" }, { "db": "PACKETSTORM", "id": "169635" }, { "db": "CNNVD", "id": "CNNVD-201902-956" }, { "db": "NVD", "id": "CVE-2019-1559" } ] }, "id": "VAR-201902-0192", "iot": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": true, "sources": [ { "db": "VULHUB", "id": "VHN-147651" } ], "trust": 0.36447732666666666 }, "last_update_date": "2024-07-23T20:34:36.580000Z", "patch": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/patch#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "title": "hitachi-sec-2019-132 Software product security information", "trust": 0.8, "url": "https://usn.ubuntu.com/3899-1/" }, { "title": "OpenSSL Security vulnerabilities", "trust": 0.6, "url": "http://www.cnnvd.org.cn/web/xxk/bdxqbyid.tag?id=89673" }, { "title": "Red Hat: Moderate: openssl security and bug fix update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20192304 - security advisory" }, { "title": "Red Hat: Moderate: openssl security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20192471 - security advisory" }, { "title": "Ubuntu Security Notice: openssl, openssl1.0 vulnerability", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ubuntu_security_notice\u0026qid=usn-3899-1" }, { "title": "Debian Security Advisories: DSA-4400-1 openssl1.0 -- security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=debian_security_advisories\u0026qid=675a6469b3fad3c9a56addc922ae8d9d" }, { "title": "Red Hat: Moderate: rhvm-appliance security, bug fix, and enhancement update", "trust": 0.1, "url": 
"https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20192439 - security advisory" }, { "title": "Red Hat: Moderate: Red Hat JBoss Web Server 5.2 security release", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20193929 - security advisory" }, { "title": "Red Hat: Moderate: Red Hat JBoss Web Server 5.2 security release", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20193931 - security advisory" }, { "title": "Red Hat: Important: Red Hat Virtualization security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20192437 - security advisory" }, { "title": "Red Hat: CVE-2019-1559", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_cve_database\u0026qid=cve-2019-1559" }, { "title": "Arch Linux Advisories: [ASA-201903-2] openssl-1.0: information disclosure", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_advisories\u0026qid=asa-201903-2" }, { "title": "Arch Linux Advisories: [ASA-201903-6] lib32-openssl-1.0: information disclosure", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_advisories\u0026qid=asa-201903-6" }, { "title": "Arch Linux Issues: ", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_issues\u0026qid=cve-2019-1559" }, { "title": "Amazon Linux AMI: ALAS-2019-1188", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux_ami\u0026qid=alas-2019-1188" }, { "title": "Amazon Linux 2: ALAS2-2019-1362", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=alas2-2019-1362" }, { "title": "Amazon Linux 2: ALAS2-2019-1188", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=alas2-2019-1188" }, { "title": "IBM: IBM Security Bulletin: Vulnerability in OpenSSL affects IBM Spectrum Protect 
Backup-Archive Client NetApp Services (CVE-2019-1559)", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=884ffe1be805ead0a804f06f7c14072c" }, { "title": "IBM: IBM Security Bulletin: IBM Security Proventia Network Active Bypass is affected by openssl vulnerabilities (CVE-2019-1559)", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=1092f7b64100b0110232688947fb97ed" }, { "title": "IBM: IBM Security Bulletin: Guardium StealthBits Integration is affected by an OpenSSL vulnerability", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=6b4ff04f16b62df96980d37251dc9ae0" }, { "title": "IBM: IBM Security Bulletin: IBM InfoSphere Master Data Management Standard and Advanced Editions are affected by vulnerabilities in OpenSSL (CVE-2019-1559)", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=7856a174f729c96cf2ba970cfef5f604" }, { "title": "IBM: IBM Security Bulletin: OpenSSL vulnerability affects IBM Spectrum Control (formerly Tivoli Storage Productivity Center) (CVE-2019-1559)", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=04a72ac59f1cc3a5b02c155d941c5cfd" }, { "title": "IBM: IBM Security Bulletin: IBM DataPower Gateway is affected by a padding oracle vulnerability (CVE-2019-1559)", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=9c55c211aa2410823d4d568143afa117" }, { "title": "IBM: Security Bulletin: OpenSSL vulnerabilites impacting Aspera High-Speed Transfer Server, Aspera Desktop Client 3.9.1 and earlier (CVE-2019-1559)", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=c233af3070d7248dcbafadb6b367e2a1" }, { "title": "IBM: IBM Security Bulletin: IBM QRadar Network Security is affected by openssl vulnerabilities (CVE-2019-1559, CVE-2018-0734)", "trust": 0.1, "url": 
"https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=7ceb7cf440b088f91358d1c597d5a414" }, { "title": "IBM: IBM Security Bulletin: Vulnerability in OpenSSL affects IBM Rational ClearCase (CVE-2019-1559)", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=c0b11f80d1ecd798a97f3bda2b68f830" }, { "title": "IBM: IBM Security Bulletin: Vulnerability CVE-2019-1559 in OpenSSL affects IBM i", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=12860155d0bf31ea6e2e3ffcef7ea7e0" }, { "title": "IBM: IBM Security Bulletin: Vulnerability in OpenSSL affects AIX (CVE-2019-1559) Security Bulletin", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=2709308a62e1e2fafc2e4989ef440aa3" }, { "title": "IBM: IBM Security Bulletin: Multiple Vulnerabilities in OpenSSL affect IBM Worklight and IBM MobileFirst Platform Foundation", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=1b873a45dce8bb56ff011908a9402b67" }, { "title": "IBM: IBM Security Bulletin: Node.js as used in IBM QRadar Packet Capture is vulnerable to the following CVE\u2019s (CVE-2019-1559, CVE-2019-5737, CVE-2019-5739)", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=aae1f2192c5cf9375ed61f7a27d08f64" }, { "title": "IBM: IBM Security Bulletin: Multiple Security Vulnerabilities affect IBM Cloud Private (CVE-2019-5739 CVE-2019-5737 CVE-2019-1559)", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=8b00742d4b57e0eaab4fd3f9a2125634" }, { "title": "IBM: IBM Security Bulletin: Vulnerabilities in OpenSSL affect GCM16 \u0026 GCM32 and LCM8 \u0026 LCM16 KVM Switch Firmware (CVE-2018-0732 CVE-2019-1559)", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=ca67e77b9edd2ad304d2f2da1853223f" }, { "title": "IBM: IBM Security Bulletin: Vulnerabilities in GNU OpenSSL (1.0.2 series) 
affect IBM Netezza Analytics", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=ac5ccbde4e4ddbcabd10cacf82487a11" }, { "title": "IBM: Security Bulletin: Vulnerabities in SSL in IBM DataPower Gateway", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=5fc1433ca504461e3bbb1d30e408592c" }, { "title": "Hitachi Security Advisories: Vulnerability in Cosminexus HTTP Server", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=hitachi_security_advisories\u0026qid=hitachi-sec-2019-112" }, { "title": "Hitachi Security Advisories: Vulnerability in JP1", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=hitachi_security_advisories\u0026qid=hitachi-sec-2019-132" }, { "title": "IBM: IBM Security Bulletin: Security vulnerabilities identified in OpenSSL affect Rational Build Forge (CVE-2018-0734, CVE-2018-5407 and CVE-2019-1559)", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=e59d7f075c856823d6f7370dea35e662" }, { "title": "Debian CVElist Bug Report Logs: mysql-5.7: Security fixes from the April 2019 CPU", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=debian_cvelist_bugreportlogs\u0026qid=5f1bd0287d0770973261ab8500c6982b" }, { "title": "IBM: IBM Security Bulletin: Vulnerability in Node.js affects IBM Integration Bus \u0026 IBM App Connect Enterprise V11", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=1a7cb34592ef045ece1d2b32c150f2a2" }, { "title": "IBM: IBM Security Bulletin: Secure Gateway is affected by multiple vulnerabilities", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=28830011b173eee360fbb2a55c68c9d3" }, { "title": "IBM: IBM Security Bulletin: Multiple vulnerabilities affect IBM\u00ae SDK for Node.js\u2122 in IBM Cloud", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=8db7a9036f52f1664d12ac73d7a3506f" }, { 
"title": "IBM: IBM Security Bulletin: Security vulnerabilities in IBM SDK for Node.js might affect the configuration editor used by IBM Business Automation Workflow and IBM Business Process Manager (BPM)", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=6b74f45222d8029af7ffef49314f6056" }, { "title": "Oracle Solaris Third Party Bulletins: Oracle Solaris Third Party Bulletin - April 2019", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=oracle_solaris_third_party_bulletins\u0026qid=4ee609eeae78bbbd0d0c827f33a7f87f" }, { "title": "Tenable Security Advisories: [R1] Nessus Agent 7.4.0 Fixes One Third-party Vulnerability", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=tenable_security_advisories\u0026qid=tns-2019-03" }, { "title": "Forcepoint Security Advisories: CVE-2018-0734 and CVE-2019-1559 (OpenSSL)", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=forcepoint_security_advisories\u0026qid=b508c983da563a8786bf80c360afb887" }, { "title": "Hitachi Security Advisories: Multiple Vulnerabilities in JP1/Automatic Job Management System 3 - Web Operation Assistant", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=hitachi_security_advisories\u0026qid=hitachi-sec-2021-121" }, { "title": "Palo Alto Networks Security Advisory: ", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=palo_alto_networks_security_advisory\u0026qid=217c2f4028735d91500e325e8ba1cbba" }, { "title": "Palo Alto Networks Security Advisory: CVE-2019-1559 OpenSSL vulnerability CVE-2019-1559 has been resolved in PAN-OS", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=palo_alto_networks_security_advisory\u0026qid=a16107c1f899993837417057168db200" }, { "title": "IBM: IBM Security Bulletin:IBM Security Identity Adapters has released a fix in response to the OpenSSL vulnerabilities", "trust": 0.1, "url": 
"https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=00b8bc7d11e5484e8721f3f62ec2ce87" }, { "title": "IBM: Security Bulletin: Vulnerabilities have been identified in OpenSSL and the Kernel shipped with the DS8000 Hardware Management Console (HMC)", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=423d1da688755122eb2591196e4cc160" }, { "title": "IBM: IBM Security Bulletin: Multiple vulnerabilities affect IBM Watson Assistant for IBM Cloud Pak for Data", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=1e6142e07a3e9637110bdfa17e331459" }, { "title": "IBM: IBM Security Bulletin: Multiple Vulnerabilities in Watson Openscale (Liberty, Java, node.js)", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=a47e10150b300f15d2fd55b9cdaed12d" }, { "title": "Tenable Security Advisories: [R1] Nessus 8.3.0 Fixes Multiple Third-party Vulnerabilities", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=tenable_security_advisories\u0026qid=tns-2019-02" }, { "title": "IBM: IBM Security Bulletin: BigFix Platform 9.5.x / 9.2.x affected by multiple vulnerabilities (CVE-2018-16839, CVE-2018-16842, CVE-2018-16840, CVE-2019-3823, CVE-2019-3822, CVE-2018-16890, CVE-2019-4011, CVE-2018-2005, CVE-2019-4058, CVE-2019-1559)", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=0b05dc856c1be71db871bcea94f6fa8d" }, { "title": "IBM: IBM Security Bulletin: Multiple Security Vulnerabilities have been addressed in IBM Security Access Manager Appliance", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=800337bc69aa7ad92ac88a2adcc7d426" }, { "title": "IBM: IBM Security Bulletin: Vyatta 5600 vRouter Software Patches \u2013 Releases 1801-w and 1801-y", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=bf3f2299a8658b7cd3984c40e7060666" }, { "title": "Siemens Security 
Advisories: Siemens Security Advisory", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=siemens_security_advisories\u0026qid=ec6577109e640dac19a6ddb978afe82d" }, { "title": "", "trust": 0.1, "url": "https://github.com/live-hack-cve/cve-2019-1559 " }, { "title": "Centos-6-openssl-1.0.1e-58.pd1trfir", "trust": 0.1, "url": "https://github.com/datourist/centos-6-openssl-1.0.1e-58.pd1trfir " }, { "title": "", "trust": 0.1, "url": "https://github.com/tls-attacker/tls-padding-oracles " }, { "title": "TLS-Padding-Oracles", "trust": 0.1, "url": "https://github.com/rub-nds/tls-padding-oracles " }, { "title": "vyger", "trust": 0.1, "url": "https://github.com/mrodden/vyger " }, { "title": "", "trust": 0.1, "url": "https://github.com/vincent-deng/veracode-container-security-finding-parser " } ], "sources": [ { "db": "VULMON", "id": "CVE-2019-1559" }, { "db": "JVNDB", "id": "JVNDB-2019-002098" }, { "db": "CNNVD", "id": "CNNVD-201902-956" } ] }, "problemtype_data": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "problemtype": "CWE-203", "trust": 1.1 }, { "problemtype": "information leak (CWE-200) [NVD Evaluation ]", "trust": 0.8 }, { "problemtype": "CWE-200", "trust": 0.1 } ], "sources": [ { "db": "VULHUB", "id": "VHN-147651" }, { "db": "JVNDB", "id": "JVNDB-2019-002098" }, { "db": "NVD", "id": "CVE-2019-1559" } ] }, "references": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/references#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 3.6, "url": "http://www.securityfocus.com/bid/107174" }, { "trust": 2.5, "url": "https://access.redhat.com/errata/rhsa-2019:3929" }, { "trust": 2.5, "url": "https://access.redhat.com/errata/rhsa-2019:3931" }, { "trust": 2.4, "url": 
"https://www.oracle.com/security-alerts/cpujan2021.html" }, { "trust": 2.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-1559" }, { "trust": 2.0, "url": "https://access.redhat.com/errata/rhsa-2019:2304" }, { "trust": 1.9, "url": "https://www.openssl.org/news/secadv/20190226.txt" }, { "trust": 1.9, "url": "https://access.redhat.com/errata/rhsa-2019:2437" }, { "trust": 1.9, "url": "https://access.redhat.com/errata/rhsa-2019:2439" }, { "trust": 1.9, "url": "https://usn.ubuntu.com/3899-1/" }, { "trust": 1.8, "url": "https://security.netapp.com/advisory/ntap-20190301-0001/" }, { "trust": 1.8, "url": "https://security.netapp.com/advisory/ntap-20190301-0002/" }, { "trust": 1.8, "url": "https://security.netapp.com/advisory/ntap-20190423-0002/" }, { "trust": 1.8, "url": "https://www.tenable.com/security/tns-2019-02" }, { "trust": 1.8, "url": "https://www.tenable.com/security/tns-2019-03" }, { "trust": 1.8, "url": "https://www.debian.org/security/2019/dsa-4400" }, { "trust": 1.8, "url": "https://security.gentoo.org/glsa/201903-10" }, { "trust": 1.8, "url": "https://www.oracle.com/security-alerts/cpujan2020.html" }, { "trust": 1.8, "url": "https://www.oracle.com/technetwork/security-advisory/cpuapr2019-5072813.html" }, { "trust": 1.8, "url": "https://www.oracle.com/technetwork/security-advisory/cpujul2019-5072835.html" }, { "trust": 1.8, "url": "https://www.oracle.com/technetwork/security-advisory/cpuoct2019-5072832.html" }, { "trust": 1.8, "url": "https://lists.debian.org/debian-lts-announce/2019/03/msg00003.html" }, { "trust": 1.8, "url": "https://access.redhat.com/errata/rhsa-2019:2471" }, { "trust": 1.8, "url": "http://lists.opensuse.org/opensuse-security-announce/2019-03/msg00041.html" }, { "trust": 1.8, "url": "http://lists.opensuse.org/opensuse-security-announce/2019-04/msg00019.html" }, { "trust": 1.8, "url": "http://lists.opensuse.org/opensuse-security-announce/2019-04/msg00046.html" }, { "trust": 1.8, "url": 
"http://lists.opensuse.org/opensuse-security-announce/2019-04/msg00047.html" }, { "trust": 1.8, "url": "http://lists.opensuse.org/opensuse-security-announce/2019-05/msg00049.html" }, { "trust": 1.8, "url": "http://lists.opensuse.org/opensuse-security-announce/2019-06/msg00080.html" }, { "trust": 1.8, "url": "https://usn.ubuntu.com/4376-2/" }, { "trust": 1.7, "url": "https://kc.mcafee.com/corporate/index?page=content\u0026id=sb10282" }, { "trust": 1.2, "url": "https://support.f5.com/csp/article/k18549143" }, { "trust": 1.1, "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/ewc42uxl5ghtu5g77vkbf6jyuungshom/" }, { "trust": 1.1, "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/zbev5qgdrfuzdmnecfxusn5fmyozde4v/" }, { "trust": 1.1, "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/y3ivfgserazlnjck35tem2r4726xih3z/" }, { "trust": 1.1, "url": "https://git.openssl.org/gitweb/?p=openssl.git%3ba=commitdiff%3bh=e9bbefbf0f24c57645e7ad6a5a71ae649d18ac8e" }, { "trust": 1.1, "url": "https://support.f5.com/csp/article/k18549143?utm_source=f5support\u0026amp%3butm_medium=rss" }, { "trust": 0.7, "url": "https://git.openssl.org/gitweb/?p=openssl.git;a=commitdiff;h=e9bbefbf0f24c57645e7ad6a5a71ae649d18ac8e" }, { "trust": 0.7, "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/zbev5qgdrfuzdmnecfxusn5fmyozde4v/" }, { "trust": 0.7, "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/y3ivfgserazlnjck35tem2r4726xih3z/" }, { "trust": 0.7, "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/ewc42uxl5ghtu5g77vkbf6jyuungshom/" }, { "trust": 0.6, "url": "http://aix.software.ibm.com/aix/efixes/security/openssl_advisory30.asc" }, { "trust": 0.6, "url": 
"https://kb.pulsesecure.net/articles/pulse_security_advisories/sa44019/?l=en_us\u0026atype=sa\u0026fs=search\u0026pn=1\u0026atype=sa" }, { "trust": 0.6, "url": "https://www.oracle.com/technetwork/topics/security/bulletinapr2019-5462008.html" }, { "trust": 0.6, "url": "https://github.com/rub-nds/tls-padding-oracles" }, { "trust": 0.6, "url": "http://openssl.org/" }, { "trust": 0.6, "url": "https://support.f5.com/csp/article/k18549143?utm_source=f5support\u0026utm_medium=rss" }, { "trust": 0.6, "url": "https://support.symantec.com/us/en/article.symsa1490.html" }, { "trust": 0.6, "url": "https://www.ibm.com/support/pages/node/1170328" }, { "trust": 0.6, "url": "https://www.ibm.com/support/pages/node/1170340" }, { "trust": 0.6, "url": "https://www.ibm.com/support/pages/node/1170334" }, { "trust": 0.6, "url": "https://www.ibm.com/support/pages/node/1170322" }, { "trust": 0.6, "url": "https://www.ibm.com/support/pages/node/1170352" }, { "trust": 0.6, "url": "https://www.ibm.com/support/pages/node/1170346" }, { "trust": 0.6, "url": "https://nodejs.org/en/blog/vulnerability/february-2019-security-releases/" }, { "trust": 0.6, "url": "https://www.suse.com/support/update/announcement/2019/suse-su-20190572-1/" }, { "trust": 0.6, "url": "https://usn.ubuntu.com/4212-1/" }, { "trust": 0.6, "url": "https://www.ibm.com/support/pages/node/1115655" }, { "trust": 0.6, "url": "https://www.ibm.com/support/pages/node/1115649" }, { "trust": 0.6, "url": "https://www.hitachi.co.jp/prod/comp/soft1/global/security/info/vuls/hitachi-sec-2019-132/index.html" }, { "trust": 0.6, "url": "https://www.ibm.com/support/pages/node/2016771" }, { "trust": 0.6, "url": "https://www.ibm.com/support/pages/node/2020677" }, { "trust": 0.6, "url": "https://www.ibm.com/support/pages/node/2027745" }, { "trust": 0.6, "url": "https://www.ibm.com/support/pages/node/1126581" }, { "trust": 0.6, "url": "http://www.hitachi.co.jp/prod/comp/soft1/global/security/info/vuls/hitachi-sec-2019-132/index.html" }, { "trust": 
0.6, "url": "http://www.ubuntu.com/usn/usn-3899-1" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/76438" }, { "trust": 0.6, "url": "https://www.ibm.com/blogs/psirt/security-bulletin-multiple-vulnerabilities-in-openssl-affect-ibm-tivoli-netcool-system-service-monitors-application-service-monitors-cve-2018-5407cve-2020-1967cve-2018-0734cve-2019-1563cve-2019/" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2019.4405/" }, { "trust": 0.6, "url": "https://www.ibm.com/support/pages/node/1116357" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2019.4558/" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2019.4479/" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2020.3729/" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/76230" }, { "trust": 0.6, "url": "https://www.oracle.com/security-alerts/cpujan2020verbose.html" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2020.0032/" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2020.0487/" }, { "trust": 0.6, "url": "https://www.ibm.com/support/pages/node/1115643" }, { "trust": 0.6, "url": "https://vigilance.fr/vulnerability/openssl-1-0-2-information-disclosure-via-0-byte-record-padding-oracle-28600" }, { "trust": 0.6, "url": "https://www.ibm.com/support/pages/node/3517185" }, { "trust": 0.6, "url": "https://www.ibm.com/support/pages/node/1167202" }, { "trust": 0.6, "url": "https://www.ibm.com/blogs/psirt/security-bulletin-openssl-as-used-by-ibm-qradar-siem-is-missing-a-required-cryptographic-step-cve-2019-1559/" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2020.0192/" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2019.3390.4/" }, { "trust": 0.6, "url": "https://www.ibm.com/blogs/psirt/security-bulletin-vulnerability-in-openssl-affects-ibm-integrated-analytics-system/" }, { "trust": 0.6, "url": 
"https://www.auscert.org.au/bulletins/esb-2019.4479.2/" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2020.3462/" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.4083" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/155415/red-hat-security-advisory-2019-3929-01.html" }, { "trust": 0.6, "url": "https://www.ibm.com/support/pages/node/6520674" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.0696" }, { "trust": 0.6, "url": "https://www.ibm.com/blogs/psirt/security-bulletin-vulnerabilities-have-been-identified-in-openssl-and-the-kernel-shipped-with-the-ds8000-hardware-management-console-hmc/" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/76782" }, { "trust": 0.6, "url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-rackswitch-firmware-products-are-affected-by-the-following-opensll-vulnerability/" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2020.2383/" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2020.4255/" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2019.4297/" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2020.0102/" }, { "trust": 0.6, "url": "https://www.ibm.com/support/pages/node/1143442" }, { "trust": 0.6, "url": "https://www.ibm.com/blogs/psirt/security-bulletin-security-vulnerabilities-in-openssh-and-openssl-shipped-with-ibm-security-access-manager-appliance-cve-2018-15473-cve-2019-1559/" }, { "trust": 0.6, "url": "https://www.ibm.com/support/pages/node/1105965" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/158377/ubuntu-security-notice-usn-4376-2.html" }, { "trust": 0.6, "url": "https://www.ibm.com/support/pages/node/1106553" }, { "trust": 0.6, "url": "https://www.ibm.com/blogs/psirt/security-bulletin-public-disclosed-vulnerability-from-openssl-affect-ibm-netezza-host-management/" }, { "trust": 0.6, "url": 
"https://packetstormsecurity.com/files/151886/slackware-security-advisory-openssl-updates.html" }, { "trust": 0.5, "url": "https://www.redhat.com/mailman/listinfo/rhsa-announce" }, { "trust": 0.5, "url": "https://bugzilla.redhat.com/):" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2019-1559" }, { "trust": 0.5, "url": "https://access.redhat.com/security/team/contact/" }, { "trust": 0.4, "url": "https://access.redhat.com/security/team/key/" }, { "trust": 0.4, "url": "https://access.redhat.com/security/updates/classification/#moderate" }, { "trust": 0.3, "url": "https://access.redhat.com/articles/11258" }, { "trust": 0.2, "url": "https://access.redhat.com/articles/2974891" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2018-16881" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16881" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-10072" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-0221" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2018-5407" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-5407" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-0221" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-10072" }, { "trust": 0.1, "url": "https://kc.mcafee.com/corporate/index?page=content\u0026amp;id=sb10282" }, { "trust": 0.1, "url": "https://support.f5.com/csp/article/k18549143?utm_source=f5support\u0026amp;amp;utm_medium=rss" }, { "trust": 0.1, "url": "https://cwe.mitre.org/data/definitions/203.html" }, { "trust": 0.1, "url": "https://github.com/live-hack-cve/cve-2019-1559" }, { "trust": 0.1, "url": "https://tools.cisco.com/security/center/viewalert.x?alertid=59697" }, { "trust": 0.1, "url": "https://nvd.nist.gov" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-3888" }, { "trust": 0.1, "url": 
"https://nvd.nist.gov/vuln/detail/cve-2019-3888" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-1547" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-1563" }, { "trust": 0.1, "url": "https://usn.ubuntu.com/4376-1" }, { "trust": 0.1, "url": "https://usn.ubuntu.com/4376-2" }, { "trust": 0.1, "url": "https://usn.ubuntu.com/usn/usn-3899-1" }, { "trust": 0.1, "url": "https://launchpad.net/ubuntu/+source/openssl/1.0.2g-1ubuntu4.15" }, { "trust": 0.1, "url": "https://launchpad.net/ubuntu/+source/openssl1.0/1.0.2n-1ubuntu6.2" }, { "trust": 0.1, "url": "https://launchpad.net/ubuntu/+source/openssl1.0/1.0.2n-1ubuntu5.3" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_jboss_web_server/5.2/" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/7.7_release_notes/index" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-0734" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-0734" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-10160" }, { "trust": 0.1, "url": "https://access.redhat.com/security/updates/classification/#important" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-0161" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-16838" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-10160" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16838" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-0161" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-10139" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-10139" }, { "trust": 0.1, "url": "https://www.openssl.org/policies/secpolicy.html" } ], "sources": [ { "db": "VULHUB", "id": "VHN-147651" }, { "db": "VULMON", "id": "CVE-2019-1559" }, { "db": "JVNDB", "id": "JVNDB-2019-002098" 
}, { "db": "PACKETSTORM", "id": "154009" }, { "db": "PACKETSTORM", "id": "158377" }, { "db": "PACKETSTORM", "id": "155413" }, { "db": "PACKETSTORM", "id": "151885" }, { "db": "PACKETSTORM", "id": "155415" }, { "db": "PACKETSTORM", "id": "153932" }, { "db": "PACKETSTORM", "id": "154008" }, { "db": "PACKETSTORM", "id": "169635" }, { "db": "CNNVD", "id": "CNNVD-201902-956" }, { "db": "NVD", "id": "CVE-2019-1559" } ] }, "sources": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "VULHUB", "id": "VHN-147651" }, { "db": "VULMON", "id": "CVE-2019-1559" }, { "db": "JVNDB", "id": "JVNDB-2019-002098" }, { "db": "PACKETSTORM", "id": "154009" }, { "db": "PACKETSTORM", "id": "158377" }, { "db": "PACKETSTORM", "id": "155413" }, { "db": "PACKETSTORM", "id": "151885" }, { "db": "PACKETSTORM", "id": "155415" }, { "db": "PACKETSTORM", "id": "153932" }, { "db": "PACKETSTORM", "id": "154008" }, { "db": "PACKETSTORM", "id": "169635" }, { "db": "CNNVD", "id": "CNNVD-201902-956" }, { "db": "NVD", "id": "CVE-2019-1559" } ] }, "sources_release_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2019-02-27T00:00:00", "db": "VULHUB", "id": "VHN-147651" }, { "date": "2019-02-27T00:00:00", "db": "VULMON", "id": "CVE-2019-1559" }, { "date": "2019-04-02T00:00:00", "db": "JVNDB", "id": "JVNDB-2019-002098" }, { "date": "2019-08-12T17:13:13", "db": "PACKETSTORM", "id": "154009" }, { "date": "2020-07-09T18:42:27", "db": "PACKETSTORM", "id": "158377" }, { "date": "2019-11-20T20:32:22", "db": "PACKETSTORM", "id": "155413" }, { "date": "2019-02-27T19:19:00", "db": "PACKETSTORM", "id": "151885" }, { "date": "2019-11-20T20:44:44", "db": "PACKETSTORM", "id": "155415" }, { "date": "2019-08-06T21:09:19", "db": "PACKETSTORM", "id": "153932" }, { "date": "2019-08-12T17:13:02", "db": "PACKETSTORM", "id": "154008" }, { "date": 
"2019-02-26T12:12:12", "db": "PACKETSTORM", "id": "169635" }, { "date": "2019-02-26T00:00:00", "db": "CNNVD", "id": "CNNVD-201902-956" }, { "date": "2019-02-27T23:29:00.277000", "db": "NVD", "id": "CVE-2019-1559" } ] }, "sources_update_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2022-08-19T00:00:00", "db": "VULHUB", "id": "VHN-147651" }, { "date": "2023-11-07T00:00:00", "db": "VULMON", "id": "CVE-2019-1559" }, { "date": "2021-07-15T06:04:00", "db": "JVNDB", "id": "JVNDB-2019-002098" }, { "date": "2022-03-25T00:00:00", "db": "CNNVD", "id": "CNNVD-201902-956" }, { "date": "2023-11-07T03:08:30.953000", "db": "NVD", "id": "CVE-2019-1559" } ] }, "threat_type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/threat_type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "remote", "sources": [ { "db": "PACKETSTORM", "id": "151885" }, { "db": "PACKETSTORM", "id": "169635" }, { "db": "CNNVD", "id": "CNNVD-201902-956" } ], "trust": 0.8 }, "title": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/title#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "OpenSSL\u00a0 Information Disclosure Vulnerability", "sources": [ { "db": "JVNDB", "id": "JVNDB-2019-002098" } ], "trust": 0.8 }, "type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "information disclosure", "sources": [ { "db": "CNNVD", "id": "CNNVD-201902-956" } ], "trust": 0.6 } }
var-202206-1428
Vulnerability from variot
In addition to the c_rehash shell command injection identified in CVE-2022-1292, further circumstances where the c_rehash script does not properly sanitise shell metacharacters to prevent command injection were found by code review. When the CVE-2022-1292 was fixed it was not discovered that there are other places in the script where the file names of certificates being hashed were possibly passed to a command executed through the shell. This script is distributed by some operating systems in a manner where it is automatically executed. On such operating systems, an attacker could execute arbitrary commands with the privileges of the script. Use of the c_rehash script is considered obsolete and should be replaced by the OpenSSL rehash command line tool. Fixed in OpenSSL 3.0.4 (Affected 3.0.0,3.0.1,3.0.2,3.0.3). Fixed in OpenSSL 1.1.1p (Affected 1.1.1-1.1.1o). Fixed in OpenSSL 1.0.2zf (Affected 1.0.2-1.0.2ze). (CVE-2022-2068). Description:
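The injection class described above can be illustrated outside of Perl. The following Python snippet is a hedged sketch (the filename and the `echo` payload are hypothetical, not taken from c_rehash itself) of why splicing untrusted certificate filenames into a shell command line is dangerous, and why the `openssl rehash` replacement recommended above avoids the problem by never routing names through a shell:

```python
import subprocess

# Hypothetical certificate filename of the kind CVE-2022-2068 abuses:
# it embeds shell metacharacters, so any script that splices it into a
# shell command line will execute the trailing payload.
filename = "cert.pem; echo INJECTED"

# Vulnerable pattern (what c_rehash effectively did): build one string
# and hand it to a shell. The `;` ends the intended command and starts
# the attacker's command. Executed here with a harmless payload.
unsafe = subprocess.run(f"echo hashing {filename}", shell=True,
                        capture_output=True, text=True)

# Safe pattern: pass the filename as a single argv element. No shell
# parses it, so the metacharacters are inert data.
safe = subprocess.run(["echo", "hashing", filename],
                      capture_output=True, text=True)

print(unsafe.stdout)  # two lines: the payload ran
print(safe.stdout)    # one line: the filename stayed a literal string
```

Here the payload is a harmless `echo`; in the c_rehash case, the attacker-chosen filename would run with whatever privileges the script was invoked with, which is why distributions that execute it automatically are the high-risk case.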
Submariner enables direct networking between pods and services on different Kubernetes clusters that are either on-premises or in the cloud.
For more information about Submariner, see the Submariner open source community website at: https://submariner.io/.
This advisory contains bug fixes and enhancements to the Submariner container images. Description:
Red Hat Ceph Storage is a scalable, open, software-defined storage platform that combines the most stable version of the Ceph storage system with a Ceph management platform, deployment utilities, and support services.
Space precludes documenting all of these changes in this advisory. Users are directed to the Red Hat Ceph Storage Release Notes for information on the most significant of these changes:
https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/5.2/html-single/release_notes/index
All users of Red Hat Ceph Storage are advised to pull these new images from the Red Hat Ecosystem catalog, which provides numerous enhancements and bug fixes. Bugs fixed (https://bugzilla.redhat.com/):
2031228 - CVE-2021-43813 grafana: directory traversal vulnerability 2044628 - CVE-2022-21673 grafana: Forward OAuth Identity Token can allow users to access some data sources 2115198 - build ceph containers for RHCS 5.2 release
- Summary:
OpenShift API for Data Protection (OADP) 1.1.0 is now available. Description:
OpenShift API for Data Protection (OADP) enables you to back up and restore application resources, persistent volume data, and internal container images to external backup storage. OADP enables both file system-based and snapshot-based backups for persistent volumes. Bugs fixed (https://bugzilla.redhat.com/):
2045880 - CVE-2022-21698 prometheus/client_golang: Denial of service using InstrumentHandlerCounter 2077688 - CVE-2022-24675 golang: encoding/pem: fix stack overflow in Decode 2077689 - CVE-2022-28327 golang: crypto/elliptic: panic caused by oversized scalar 2092793 - CVE-2022-30629 golang: crypto/tls: session tickets lack random ticket_age_add 2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read
- JIRA issues fixed (https://issues.jboss.org/):
OADP-145 - Restic Restore stuck on InProgress status when app is deployed with DeploymentConfig OADP-154 - Ensure support for backing up resources based on different label selectors OADP-194 - Remove the registry dependency from OADP OADP-199 - Enable support for restore of existing resources OADP-224 - Restore silently ignore resources if they exist - restore log not updated OADP-225 - Restore doesn't update velero.io/backup-name when a resource is updated OADP-234 - Implementation of incremental restore OADP-324 - Add label to Expired backups failing garbage collection OADP-382 - 1.1: Update downstream OLM channels to support different x and y-stream releases OADP-422 - [GCP] An attempt of snapshoting volumes on CSI storageclass using Velero-native snapshots fails because it's unable to find the zone OADP-423 - CSI Backup is not blocked and does not wait for snapshot to complete OADP-478 - volumesnapshotcontent cannot be deleted; SnapshotDeleteError Failed to delete snapshot OADP-528 - The volumesnapshotcontent is not removed for the synced backup OADP-533 - OADP Backup via Ceph CSI snapshot hangs indefinitely on OpenShift v4.10 OADP-538 - typo on noDefaultBackupLocation error on DPA CR OADP-552 - Validate OADP with 4.11 and Pod Security Admissions OADP-558 - Empty Failed Backup CRs can't be removed OADP-585 - OADP 1.0.3: CSI functionality is broken on OCP 4.11 due to missing v1beta1 API version OADP-586 - registry deployment still exists on 1.1 build, and the registry pod gets recreated endlessly OADP-592 - OADP must-gather add support for insecure tls OADP-597 - BSL validation logs OADP-598 - Data mover performance on backup blocks backup process OADP-599 - [Data Mover] Datamover Restic secret cannot be configured per bsl OADP-600 - Operator should validate volsync installation and raise warning if data mover is enabled OADP-602 - Support GCP for openshift-velero-plugin registry OADP-605 - [OCP 4.11] CSI restore fails with admission webhook 
\"volumesnapshotclasses.snapshot.storage.k8s.io\" denied OADP-607 - DataMover: VSB is stuck on SnapshotBackupDone OADP-610 - Data mover fails if a stale volumesnapshot exists in application namespace OADP-613 - DataMover: upstream documentation refers wrong CRs OADP-637 - Restic backup fails with CA certificate OADP-643 - [Data Mover] VSB and VSR names are not unique OADP-644 - VolumeSnapshotBackup and VolumeSnapshotRestore timeouts should be configurable OADP-648 - Remove default limits for velero and restic pods OADP-652 - Data mover VolSync pod errors with Noobaa OADP-655 - DataMover: volsync-dst-vsr pod completes although not all items where restored in the namespace OADP-660 - Data mover restic secret does not support Azure OADP-698 - DataMover: volume-snapshot-mover pod points to upstream image OADP-715 - Restic restore fails: restic-wait container continuously fails with "Not found: /restores//.velero/" OADP-716 - Incremental restore: second restore of a namespace partially fails OADP-736 - Data mover VSB always fails with volsync 0.5
- Clusters and applications are all visible and managed from a single console—with security policy built in. See the following Release Notes documentation, which will be updated shortly for this release, for additional details about this release:
https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.5/html/release_notes/
Security fixes:
- moment: inefficient parsing algorithim resulting in DoS (CVE-2022-31129)
- vm2: Sandbox Escape in vm2 (CVE-2022-36067)
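The moment.js issue above (CVE-2022-31129) is an instance of algorithmic-complexity denial of service. As an illustration of the failure mode only, not moment's actual code, the same behaviour can be reproduced in Python with a regex whose nested quantifiers backtrack exponentially on near-miss input:

```python
import re
import time

# A deliberately bad pattern: nested quantifiers like (a+)+ force the
# engine to try exponentially many ways to partition the 'a's before
# it can report a failed match.
evil = re.compile(r"^(a+)+$")

t0 = time.perf_counter()
assert evil.match("a" * 20)                  # well-formed input: instant
t_good = time.perf_counter() - t0

t0 = time.perf_counter()
assert evil.match("a" * 20 + "b") is None    # near-miss input: ~2^20 backtracks
t_bad = time.perf_counter() - t0

print(f"good input: {t_good:.6f}s  near-miss input: {t_bad:.6f}s")
```

A few extra characters on the near-miss input double the work each time, which is how a short crafted string submitted to a vulnerable parser can pin a CPU core.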
Bug fixes:
- Submariner Globalnet e2e tests failed on MTU between On-Prem to Public clusters (BZ# 2074547)
- OCP 4.11 - Install fails because of: pods "management-ingress-63029-5cf6789dd6-" is forbidden: unable to validate against any security context constraint (BZ# 2082254)
- subctl gather fails to gather libreswan data if CableDriver field is missing/empty in Submariner Spec (BZ# 2083659)
- Yaml editor for creating vSphere cluster moves to next line after typing (BZ# 2086883)
- Submariner addon status doesn't track all deployment failures (BZ# 2090311)
- Unable to deploy Hypershift operator on MCE hub using ManagedClusterAddOn without including s3 secret (BZ# 2091170)
- After switching to ACM 2.5 the managed clusters log "unable to create ClusterClaim" errors (BZ# 2095481)
- Enforce failed and report the violation after modified memory value in limitrange policy (BZ# 2100036)
- Creating an application fails with "This application has no subscription match selector (spec.selector.matchExpressions)" (BZ# 2101577)
- Inconsistent cluster resource statuses between "All Subscription" topology and individual topologies (BZ# 2102273)
- managed cluster is in "unknown" state for 120 mins after OADP restore
- RHACM 2.5.2 images (BZ# 2104553)
- Subscription UI does not allow binding to label with empty value (BZ# 2104961)
- Upgrade to 2.5.1 from 2.5.0 fails due to missing Subscription CRD (BZ# 2106069)
- Region information is not available for Azure cloud in managedcluster CR (BZ# 2107134)
- cluster uninstall log points to incorrect container name (BZ# 2107359)
- ACM shows wrong path for Argo CD applicationset git generator (BZ# 2107885)
- Single node checkbox not visible for 4.11 images (BZ# 2109134)
- Unable to deploy hypershift cluster when enabling validate-cluster-security (BZ# 2109544)
- Deletion of Application (including app related resources) from the console fails to delete PlacementRule for the application (BZ# 2110026)
- After the creation by a policy of job or deployment (in case the object is missing) ACM is trying to add new containers instead of updating (BZ# 2117728)
- pods in CrashLoopBackoff on 3.11 managed cluster (BZ# 2122292)
- ArgoCD and AppSet Applications do not deploy to local-cluster (BZ# 2124707)
- Bugs fixed (https://bugzilla.redhat.com/):
2074547 - Submariner Globalnet e2e tests failed on MTU between On-Prem to Public clusters 2082254 - OCP 4.11 - Install fails because of: pods "management-ingress-63029-5cf6789dd6-" is forbidden: unable to validate against any security context constraint 2083659 - subctl gather fails to gather libreswan data if CableDriver field is missing/empty in Submariner Spec 2086883 - Yaml editor for creating vSphere cluster moves to next line after typing 2090311 - Submariner addon status doesn't track all deployment failures 2091170 - Unable to deploy Hypershift operator on MCE hub using ManagedClusterAddOn without including s3 secret 2095481 - After switching to ACM 2.5 the managed clusters log "unable to create ClusterClaim" errors 2100036 - Enforce failed and report the violation after modified memory value in limitrange policy 2101577 - Creating an application fails with "This application has no subscription match selector (spec.selector.matchExpressions)" 2102273 - Inconsistent cluster resource statuses between "All Subscription" topology and individual topologies 2103653 - managed cluster is in "unknown" state for 120 mins after OADP restore 2104553 - RHACM 2.5.2 images 2104961 - Subscription UI does not allow binding to label with empty value 2105075 - CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS 2106069 - Upgrade to 2.5.1 from 2.5.0 fails due to missing Subscription CRD 2107134 - Region information is not available for Azure cloud in managedcluster CR 2107359 - cluster uninstall log points to incorrect container name 2107885 - ACM shows wrong path for Argo CD applicationset git generator 2109134 - Single node checkbox not visible for 4.11 images 2110026 - Deletion of Application (including app related resources) from the console fails to delete PlacementRule for the application 2117728 - After the creation by a policy of job or deployment (in case the object is missing)ACM is trying to add new containers instead of updating 2122292 - pods in 
CrashLoopBackoff on 3.11 managed cluster 2124707 - ArgoCD and AppSet Applications do not deploy to local-cluster 2124794 - CVE-2022-36067 vm2: Sandbox Escape in vm2
Bug Fix(es):
- Cloning a Block DV to VM with Filesystem with not big enough size comes to endless loop - using pvc api (BZ#2033191)
- Restart of VM Pod causes SSH keys to be regenerated within VM (BZ#2087177)
- Import gzipped raw file causes image to be downloaded and uncompressed to TMPDIR (BZ#2089391)
- [4.11] VM Snapshot Restore hangs indefinitely when backed by a snapshotclass (BZ#2098225)
- Fedora version in DataImportCrons is not 'latest' (BZ#2102694)
- [4.11] Cloned VM's snapshot restore fails if the source VM disk is deleted (BZ#2109407)
- CNV introduces a compliance check fail in "ocp4-moderate" profile - routes-protected-by-tls (BZ#2110562)
- Nightly build: v4.11.0-578: index format was changed in 4.11 to file-based instead of sqlite-based (BZ#2112643)
- Unable to start windows VMs on PSI setups (BZ#2115371)
- [4.11.1]virt-launcher cannot be started on OCP 4.12 due to PodSecurity restricted:v1.24 (BZ#2128997)
- Mark Windows 11 as TechPreview (BZ#2129013)
- 4.11.1 rpms (BZ#2139453)
This advisory contains the following OpenShift Virtualization 4.11.1 images.
RHEL-8-CNV-4.11
virt-cdi-operator-container-v4.11.1-5 virt-cdi-uploadserver-container-v4.11.1-5 virt-cdi-apiserver-container-v4.11.1-5 virt-cdi-importer-container-v4.11.1-5 virt-cdi-controller-container-v4.11.1-5 virt-cdi-cloner-container-v4.11.1-5 virt-cdi-uploadproxy-container-v4.11.1-5 checkup-framework-container-v4.11.1-3 kubevirt-tekton-tasks-wait-for-vmi-status-container-v4.11.1-7 kubevirt-tekton-tasks-create-datavolume-container-v4.11.1-7 kubevirt-template-validator-container-v4.11.1-4 virt-handler-container-v4.11.1-5 hostpath-provisioner-operator-container-v4.11.1-4 virt-api-container-v4.11.1-5 vm-network-latency-checkup-container-v4.11.1-3 cluster-network-addons-operator-container-v4.11.1-5 virtio-win-container-v4.11.1-4 virt-launcher-container-v4.11.1-5 ovs-cni-marker-container-v4.11.1-5 hyperconverged-cluster-webhook-container-v4.11.1-7 virt-controller-container-v4.11.1-5 virt-artifacts-server-container-v4.11.1-5 kubevirt-tekton-tasks-modify-vm-template-container-v4.11.1-7 kubevirt-tekton-tasks-disk-virt-customize-container-v4.11.1-7 libguestfs-tools-container-v4.11.1-5 hostpath-provisioner-container-v4.11.1-4 kubevirt-tekton-tasks-disk-virt-sysprep-container-v4.11.1-7 kubevirt-tekton-tasks-copy-template-container-v4.11.1-7 cnv-containernetworking-plugins-container-v4.11.1-5 bridge-marker-container-v4.11.1-5 virt-operator-container-v4.11.1-5 hostpath-csi-driver-container-v4.11.1-4 kubevirt-tekton-tasks-create-vm-from-template-container-v4.11.1-7 kubemacpool-container-v4.11.1-5 hyperconverged-cluster-operator-container-v4.11.1-7 kubevirt-ssp-operator-container-v4.11.1-4 ovs-cni-plugin-container-v4.11.1-5 kubevirt-tekton-tasks-cleanup-vm-container-v4.11.1-7 kubevirt-tekton-tasks-operator-container-v4.11.1-2 cnv-must-gather-container-v4.11.1-8 kubevirt-console-plugin-container-v4.11.1-9 hco-bundle-registry-container-v4.11.1-49
- Bugs fixed (https://bugzilla.redhat.com/):
2033191 - Cloning a Block DV to VM with Filesystem with not big enough size comes to endless loop - using pvc api 2064857 - CVE-2022-24921 golang: regexp: stack exhaustion via a deeply nested expression 2070772 - When specifying pciAddress for several SR-IOV NIC they are not correctly propagated to libvirt XML 2077688 - CVE-2022-24675 golang: encoding/pem: fix stack overflow in Decode 2077689 - CVE-2022-28327 golang: crypto/elliptic: panic caused by oversized scalar 2087177 - Restart of VM Pod causes SSH keys to be regenerated within VM 2089391 - Import gzipped raw file causes image to be downloaded and uncompressed to TMPDIR 2091856 - ?Edit BootSource? action should have more explicit information when disabled 2092793 - CVE-2022-30629 golang: crypto/tls: session tickets lack random ticket_age_add 2098225 - [4.11] VM Snapshot Restore hangs indefinitely when backed by a snapshotclass 2100495 - CVE-2021-38561 golang: out-of-bounds read in golang.org/x/text/language leads to DoS 2102694 - Fedora version in DataImportCrons is not 'latest' 2109407 - [4.11] Cloned VM's snapshot restore fails if the source VM disk is deleted 2110562 - CNV introduces a compliance check fail in "ocp4-moderate" profile - routes-protected-by-tls 2112643 - Nightly build: v4.11.0-578: index format was changed in 4.11 to file-based instead of sqlite-based 2115371 - Unable to start windows VMs on PSI setups 2119613 - GiB changes to B in Template's Edit boot source reference modal 2128554 - The storageclass of VM disk is different from quick created and customize created after changed the default storageclass 2128872 - [4.11]Can't restore cloned VM 2128997 - [4.11.1]virt-launcher cannot be started on OCP 4.12 due to PodSecurity restricted:v1.24 2129013 - Mark Windows 11 as TechPreview 2129235 - [RFE] Add "Copy SSH command" to VM action list 2134668 - Cannot edit ssh even vm is stopped 2139453 - 4.11.1 rpms
- Bugs fixed (https://bugzilla.redhat.com/):
2064698 - CVE-2020-36518 jackson-databind: denial of service via a large depth of nested objects
2113814 - CVE-2022-32189 golang: math/big: decoding big.Float and big.Rat types can panic if the encoded message is too short, potentially allowing a denial of service
2124669 - CVE-2022-27664 golang: net/http: handle server errors after sending GOAWAY
2132867 - CVE-2022-2879 golang: archive/tar: unbounded memory consumption when reading headers
2132868 - CVE-2022-2880 golang: net/http/httputil: ReverseProxy should not forward unparseable query parameters
2132872 - CVE-2022-41715 golang: regexp/syntax: limit memory used by parsing regexps
2135244 - CVE-2022-42003 jackson-databind: deep wrapper array nesting wrt UNWRAP_SINGLE_VALUE_ARRAYS
2135247 - CVE-2022-42004 jackson-databind: use of deeply nested arrays
2140597 - CVE-2022-37603 loader-utils: Regular expression denial of service
- JIRA issues fixed (https://issues.jboss.org/):
LOG-2860 - Error on LokiStack Components when forwarding logs to Loki on proxy cluster
LOG-3131 - vector: kube API server certificate validation failure due to hostname mismatch
LOG-3222 - [release-5.5] fluentd plugin for kafka ca-bundle secret doesn't support multiple CAs
LOG-3226 - FluentdQueueLengthIncreasing rule failing to be evaluated.
LOG-3284 - [release-5.5][Vector] logs parsed into structured when json is set without structured types.
LOG-3287 - [release-5.5] Increase value of cluster-logging PriorityClass to move closer to system-cluster-critical value
LOG-3301 - [release-5.5][ClusterLogging] elasticsearchStatus in ClusterLogging instance CR is not updated when Elasticsearch status is changed
LOG-3305 - [release-5.5] Kibana Authentication Exception cookie issue
LOG-3310 - [release-5.5] Can't choose correct CA ConfigMap Key when creating lokistack in Console
LOG-3332 - [release-5.5] Reconcile error on controller when creating LokiStack with tls config
- -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
====================================================================
Red Hat Security Advisory
Synopsis:          Moderate: Red Hat JBoss Web Server 5.7.1 release and security update
Advisory ID:       RHSA-2022:8917-01
Product:           Red Hat JBoss Web Server
Advisory URL:      https://access.redhat.com/errata/RHSA-2022:8917
Issue date:        2022-12-12
CVE Names:         CVE-2022-1292 CVE-2022-2068
====================================================================

1. Summary:
An update is now available for Red Hat JBoss Web Server 5.7.1 on Red Hat Enterprise Linux versions 7, 8, and 9.
Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
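The CVSS base scores cited throughout this entry follow the published CVSS v3.1 equations. As a sanity check, a small Python sketch (the `cvss31_base` helper is written for this note, not part of any advisory tooling; it covers only the base metrics) reproduces the 9.8 score this entry records for the CVE-2022-2068 vector:

```python
# CVSS v3.1 base-score sketch. Metric weights and the Roundup function
# follow the CVSS v3.1 specification; only base metrics are handled.
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2}
AC = {"L": 0.77, "H": 0.44}
UI = {"N": 0.85, "R": 0.62}
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}
PR_UNCHANGED = {"N": 0.85, "L": 0.62, "H": 0.27}
PR_CHANGED = {"N": 0.85, "L": 0.68, "H": 0.5}

def roundup(value: float) -> float:
    """CVSS v3.1 Roundup: smallest one-decimal number >= value."""
    i = int(round(value * 100000))
    return i / 100000.0 if i % 10000 == 0 else (i // 10000 + 1) / 10.0

def cvss31_base(vector: str) -> float:
    # Parse "CVSS:3.1/AV:N/AC:L/..." into a metric -> value dict.
    m = dict(part.split(":") for part in vector.split("/")[1:])
    changed = m["S"] == "C"
    pr = (PR_CHANGED if changed else PR_UNCHANGED)[m["PR"]]
    exploitability = 8.22 * AV[m["AV"]] * AC[m["AC"]] * pr * UI[m["UI"]]
    iss = 1 - (1 - CIA[m["C"]]) * (1 - CIA[m["I"]]) * (1 - CIA[m["A"]])
    impact = (7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15) if changed \
        else 6.42 * iss
    if impact <= 0:
        return 0.0
    raw = 1.08 * (impact + exploitability) if changed else impact + exploitability
    return roundup(min(raw, 10))

# Vector recorded in this entry for CVE-2022-2068:
print(cvss31_base("CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H"))  # 9.8
```

The 9.8/Critical NVD rating and the Moderate Red Hat rating can coexist: the vector measures worst-case technical impact, while the vendor severity also weighs how the c_rehash script is actually exposed in shipped products.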
2. Relevant releases/architectures:
Red Hat JBoss Web Server 5.7 for RHEL 7 Server - x86_64
Red Hat JBoss Web Server 5.7 for RHEL 8 - x86_64
Red Hat JBoss Web Server 5.7 for RHEL 9 - x86_64
3. Description:
Red Hat JBoss Web Server is a fully integrated and certified set of components for hosting Java web applications. It is comprised of the Apache Tomcat Servlet container, JBoss HTTP Connector (mod_cluster), the PicketLink Vault extension for Apache Tomcat, and the Tomcat Native library.
This release of Red Hat JBoss Web Server 5.7.1 serves as a replacement for Red Hat JBoss Web Server 5.7.0. This release includes bug fixes, enhancements and component upgrades, which are documented in the Release Notes, linked to in the References.
Security Fix(es):
* openssl: c_rehash script allows command injection (CVE-2022-1292)

* openssl: the c_rehash script allows command injection (CVE-2022-2068)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
4. Solution:
Before applying this update, make sure all previously released errata relevant to your system have been applied.
For details on how to apply this update, refer to:
https://access.redhat.com/articles/11258
5. Package List:
Red Hat JBoss Web Server 5.7 for RHEL 7 Server:
Source: jws5-tomcat-native-1.2.31-11.redhat_11.el7jws.src.rpm
x86_64:
jws5-tomcat-native-1.2.31-11.redhat_11.el7jws.x86_64.rpm
jws5-tomcat-native-debuginfo-1.2.31-11.redhat_11.el7jws.x86_64.rpm
Red Hat JBoss Web Server 5.7 for RHEL 8:
Source: jws5-tomcat-native-1.2.31-11.redhat_11.el8jws.src.rpm
x86_64:
jws5-tomcat-native-1.2.31-11.redhat_11.el8jws.x86_64.rpm
jws5-tomcat-native-debuginfo-1.2.31-11.redhat_11.el8jws.x86_64.rpm
Red Hat JBoss Web Server 5.7 for RHEL 9:
Source: jws5-tomcat-native-1.2.31-11.redhat_11.el9jws.src.rpm
x86_64:
jws5-tomcat-native-1.2.31-11.redhat_11.el9jws.x86_64.rpm
jws5-tomcat-native-debuginfo-1.2.31-11.redhat_11.el9jws.x86_64.rpm
These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
6. References:
https://access.redhat.com/security/cve/CVE-2022-1292
https://access.redhat.com/security/cve/CVE-2022-2068
https://access.redhat.com/security/updates/classification/#moderate
7. Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2022 Red Hat, Inc.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1
iQIVAwUBY5dYDtzjgjWX9erEAQihfg/+JKRn1ponld/PXWb0JyTUZp2RsgqRlaoi
dFWK8JVr3iIzA8pVUqiy+9fYqvRLvRNv8iyPezTFvlfi70FDLXd58QjxQd2zIcI2
tvwFp3mFYfqT3iEz3PdvhiDpPx9XVeSuXgl8CglshJc4ARkLtdIJzkB6xoWl3fe0
myZzwJChpWzOYvZWZVzPRNzsuAi75pc/y8GwVh+fIlw3iySiskkspGVksXBmoBup
XIM0O9ICMJ4jUbNTEZ0AwM6yZX1603sdvW60UarBVjf48vIM8x2ef6h84xEMB/3J
eLaUlm5Gm68CQx3Sf+ImCCmYcJ2LmX3KnBMGUhBiQGh2SlEJPKijlrHAhLX7M1YG
/yvgd8plwRCAsYTlAJyhcXpBovNtP9io+S4kNy/j/HswvuUcJ+mrJNfZq6AwRnoF
cNf2h1+Nl8VlT5YXkbZ0vRW1VbY7L4G1BCiqG2VGdjuOuynXh2URHsdKgs9zHY+5
OMaV16fDbH23t04So+b4hxTsfelUUWEqyKk3qvZESNoFmWPCbaBpzDlawSGEFp5g
Ly0SN2cW39creXZ3uYioyMnHKeviSDGX8ik40c7mMYYaGnbgP1mPR8FWu9C3EoWi
0LV3EDSHyFKFxUahjGzKKmjDQtYXPAt9Ci1Vp0OQFhKtAecfmlRZJEZRL4JCgKUd
vabHaw7IH20=
=YAuF
-----END PGP SIGNATURE-----
--
RHSA-announce mailing list
RHSA-announce@redhat.com
https://listman.redhat.com/mailman/listinfo/rhsa-announce
Show details on source website{ "@context": { "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#", "affected_products": { "@id": "https://www.variotdbs.pl/ref/affected_products" }, "configurations": { "@id": "https://www.variotdbs.pl/ref/configurations" }, "credits": { "@id": "https://www.variotdbs.pl/ref/credits" }, "cvss": { "@id": "https://www.variotdbs.pl/ref/cvss/" }, "description": { "@id": "https://www.variotdbs.pl/ref/description/" }, "exploit_availability": { "@id": "https://www.variotdbs.pl/ref/exploit_availability/" }, "external_ids": { "@id": "https://www.variotdbs.pl/ref/external_ids/" }, "iot": { "@id": "https://www.variotdbs.pl/ref/iot/" }, "iot_taxonomy": { "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/" }, "patch": { "@id": "https://www.variotdbs.pl/ref/patch/" }, "problemtype_data": { "@id": "https://www.variotdbs.pl/ref/problemtype_data/" }, "references": { "@id": "https://www.variotdbs.pl/ref/references/" }, "sources": { "@id": "https://www.variotdbs.pl/ref/sources/" }, "sources_release_date": { "@id": "https://www.variotdbs.pl/ref/sources_release_date/" }, "sources_update_date": { "@id": "https://www.variotdbs.pl/ref/sources_update_date/" }, "threat_type": { "@id": "https://www.variotdbs.pl/ref/threat_type/" }, "title": { "@id": "https://www.variotdbs.pl/ref/title/" }, "type": { "@id": "https://www.variotdbs.pl/ref/type/" } }, "@id": "https://www.variotdbs.pl/vuln/VAR-202206-1428", "affected_products": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/affected_products#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "model": "sannav", "scope": "eq", "trust": 1.0, "vendor": "broadcom", "version": null }, { "model": "santricity smi-s provider", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "h700s", "scope": "eq", "trust": 1.0, "vendor": "netapp", 
"version": null }, { "model": "linux", "scope": "eq", "trust": 1.0, "vendor": "debian", "version": "10.0" }, { "model": "linux", "scope": "eq", "trust": 1.0, "vendor": "debian", "version": "11.0" }, { "model": "fas 8300", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "aff a400", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "h615c", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "snapmanager", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "openssl", "scope": "lt", "trust": 1.0, "vendor": "openssl", "version": "1.1.1p" }, { "model": "bootstrap os", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "h610c", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "openssl", "scope": "lt", "trust": 1.0, "vendor": "openssl", "version": "3.0.4" }, { "model": "openssl", "scope": "lt", "trust": 1.0, "vendor": "openssl", "version": "1.0.2zf" }, { "model": "solidfire", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "openssl", "scope": "gte", "trust": 1.0, "vendor": "openssl", "version": "1.0.2" }, { "model": "openssl", "scope": "gte", "trust": 1.0, "vendor": "openssl", "version": "3.0.0" }, { "model": "aff 8300", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "aff 8700", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "smi-s provider", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "h300s", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "ontap select deploy administration utility", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "ontap antivirus connector", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "openssl", "scope": "gte", "trust": 1.0, "vendor": "openssl", "version": 
"1.1.1" }, { "model": "fas 8700", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "hci management node", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "h410s", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "sinec ins", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "1.0" }, { "model": "h610s", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "h500s", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "fedora", "scope": "eq", "trust": 1.0, "vendor": "fedoraproject", "version": "36" }, { "model": "fas a400", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "element software", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "fedora", "scope": "eq", "trust": 1.0, "vendor": "fedoraproject", "version": "35" }, { "model": "h410c", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "sinec ins", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "1.0" } ], "sources": [ { "db": "NVD", "id": "CVE-2022-2068" } ] }, "configurations": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/configurations#", "children": { "@container": "@list" }, "cpe_match": { "@container": "@list" }, "data": { "@container": "@list" }, "nodes": { "@container": "@list" } }, "data": [ { "CVE_data_version": "4.0", "nodes": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:openssl:openssl:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "3.0.4", "versionStartIncluding": "3.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:openssl:openssl:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "1.1.1p", "versionStartIncluding": "1.1.1", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:openssl:openssl:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "1.0.2zf", "versionStartIncluding": "1.0.2", "vulnerable": 
true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:debian:debian_linux:10.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:debian:debian_linux:11.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:35:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:36:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:sp1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "1.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:-:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:sp2:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:netapp:santricity_smi-s_provider:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:element_software:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:ontap_select_deploy_administration_utility:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:smi-s_provider:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:solidfire:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:hci_management_node:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:snapmanager:-:*:*:*:*:hyper-v:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:ontap_antivirus_connector:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [ { 
"children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:bootstrap_os:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:hci_compute_node:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h615c_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:h615c:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h610s_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:h610s:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h610c_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:h610c:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h410c_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:h410c:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h300s_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": 
[], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:h300s:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h500s_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:h500s:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h700s_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:h700s:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h410s_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:h410s:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:fas_8300_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:fas_8300:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:fas_8700_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:fas_8700:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": 
"AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:fas_a400_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:fas_a400:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:aff_8300_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:aff_8300:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:aff_8700_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:aff_8700:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:aff_a400_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:aff_a400:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:broadcom:sannav:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" } ] } ], "sources": [ { "db": "NVD", "id": "CVE-2022-2068" } ] }, "credits": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/credits#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Red Hat", "sources": [ { "db": "PACKETSTORM", "id": "168265" }, { "db": "PACKETSTORM", "id": "168022" 
}, { "db": "PACKETSTORM", "id": "168351" }, { "db": "PACKETSTORM", "id": "168228" }, { "db": "PACKETSTORM", "id": "168378" }, { "db": "PACKETSTORM", "id": "168289" }, { "db": "PACKETSTORM", "id": "170083" }, { "db": "PACKETSTORM", "id": "170162" }, { "db": "PACKETSTORM", "id": "170197" } ], "trust": 0.9 }, "cve": "CVE-2022-2068", "cvss": { "@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2" }, "cvssV3": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, "severity": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": "https://www.variotdbs.pl/ref/cvss/severity" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "cvssV2": [ { "acInsufInfo": false, "accessComplexity": "LOW", "accessVector": "NETWORK", "authentication": "NONE", "author": "NVD", "availabilityImpact": "COMPLETE", "baseScore": 10.0, "confidentialityImpact": "COMPLETE", "exploitabilityScore": 10.0, "impactScore": 10.0, "integrityImpact": "COMPLETE", "obtainAllPrivilege": false, "obtainOtherPrivilege": false, "obtainUserPrivilege": false, "severity": "HIGH", "trust": 1.0, "userInteractionRequired": false, "vectorString": "AV:N/AC:L/Au:N/C:C/I:C/A:C", "version": "2.0" }, { "acInsufInfo": null, "accessComplexity": "LOW", "accessVector": "NETWORK", "authentication": "NONE", "author": "VULMON", "availabilityImpact": "COMPLETE", "baseScore": 10.0, "confidentialityImpact": "COMPLETE", "exploitabilityScore": 10.0, "id": "CVE-2022-2068", "impactScore": 10.0, "integrityImpact": "COMPLETE", "obtainAllPrivilege": null, "obtainOtherPrivilege": null, "obtainUserPrivilege": null, "severity": "HIGH", "trust": 0.1, "userInteractionRequired": 
null, "vectorString": "AV:N/AC:L/Au:N/C:C/I:C/A:C", "version": "2.0" } ], "cvssV3": [ { "attackComplexity": "LOW", "attackVector": "NETWORK", "author": "NVD", "availabilityImpact": "HIGH", "baseScore": 9.8, "baseSeverity": "CRITICAL", "confidentialityImpact": "HIGH", "exploitabilityScore": 3.9, "impactScore": 5.9, "integrityImpact": "HIGH", "privilegesRequired": "NONE", "scope": "UNCHANGED", "trust": 1.0, "userInteraction": "NONE", "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H", "version": "3.1" } ], "severity": [ { "author": "NVD", "id": "CVE-2022-2068", "trust": 1.0, "value": "CRITICAL" }, { "author": "CNNVD", "id": "CNNVD-202206-2112", "trust": 0.6, "value": "CRITICAL" }, { "author": "VULMON", "id": "CVE-2022-2068", "trust": 0.1, "value": "HIGH" } ] } ], "sources": [ { "db": "VULMON", "id": "CVE-2022-2068" }, { "db": "CNNVD", "id": "CNNVD-202206-2112" }, { "db": "NVD", "id": "CVE-2022-2068" } ] }, "description": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "In addition to the c_rehash shell command injection identified in CVE-2022-1292, further circumstances where the c_rehash script does not properly sanitise shell metacharacters to prevent command injection were found by code review. When the CVE-2022-1292 was fixed it was not discovered that there are other places in the script where the file names of certificates being hashed were possibly passed to a command executed through the shell. This script is distributed by some operating systems in a manner where it is automatically executed. On such operating systems, an attacker could execute arbitrary commands with the privileges of the script. Use of the c_rehash script is considered obsolete and should be replaced by the OpenSSL rehash command line tool. Fixed in OpenSSL 3.0.4 (Affected 3.0.0,3.0.1,3.0.2,3.0.3). 
Fixed in OpenSSL 1.1.1p (Affected 1.1.1-1.1.1o). Fixed in OpenSSL 1.0.2zf (Affected 1.0.2-1.0.2ze). (CVE-2022-2068). Description:\n\nSubmariner enables direct networking between pods and services on different\nKubernetes clusters that are either on-premises or in the cloud. \n\nFor more information about Submariner, see the Submariner open source\ncommunity website at: https://submariner.io/. \n\nThis advisory contains bug fixes and enhancements to the Submariner\ncontainer images. Description:\n\nRed Hat Ceph Storage is a scalable, open, software-defined storage platform\nthat combines the most stable version of the Ceph storage system with a\nCeph management platform, deployment utilities, and support services. \n\nSpace precludes documenting all of these changes in this advisory. Users\nare directed to the Red Hat Ceph Storage Release Notes for information on\nthe most significant of these changes:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_ceph_storage/5.2/html-single/release_notes/index\n\nAll users of Red Hat Ceph Storage are advised to pull these new images from\nthe Red Hat Ecosystem catalog, which provides numerous enhancements and bug\nfixes. Bugs fixed (https://bugzilla.redhat.com/):\n\n2031228 - CVE-2021-43813 grafana: directory traversal vulnerability\n2044628 - CVE-2022-21673 grafana: Forward OAuth Identity Token can allow users to access some data sources\n2115198 - build ceph containers for RHCS 5.2 release\n\n5. Summary:\n\nOpenShift API for Data Protection (OADP) 1.1.0 is now available. Description:\n\nOpenShift API for Data Protection (OADP) enables you to back up and restore\napplication resources, persistent volume data, and internal container\nimages to external backup storage. OADP enables both file system-based and\nsnapshot-based backups for persistent volumes. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n2045880 - CVE-2022-21698 prometheus/client_golang: Denial of service using InstrumentHandlerCounter\n2077688 - CVE-2022-24675 golang: encoding/pem: fix stack overflow in Decode\n2077689 - CVE-2022-28327 golang: crypto/elliptic: panic caused by oversized scalar\n2092793 - CVE-2022-30629 golang: crypto/tls: session tickets lack random ticket_age_add\n2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nOADP-145 - Restic Restore stuck on InProgress status when app is deployed with DeploymentConfig\nOADP-154 - Ensure support for backing up resources based on different label selectors\nOADP-194 - Remove the registry dependency from OADP\nOADP-199 - Enable support for restore of existing resources\nOADP-224 - Restore silently ignore resources if they exist - restore log not updated\nOADP-225 - Restore doesn\u0027t update velero.io/backup-name when a resource is updated\nOADP-234 - Implementation of incremental restore\nOADP-324 - Add label to Expired backups failing garbage collection\nOADP-382 - 1.1: Update downstream OLM channels to support different x and y-stream releases\nOADP-422 - [GCP] An attempt of snapshoting volumes on CSI storageclass using Velero-native snapshots fails because it\u0027s unable to find the zone\nOADP-423 - CSI Backup is not blocked and does not wait for snapshot to complete\nOADP-478 - volumesnapshotcontent cannot be deleted; SnapshotDeleteError Failed to delete snapshot\nOADP-528 - The volumesnapshotcontent is not removed for the synced backup\nOADP-533 - OADP Backup via Ceph CSI snapshot hangs indefinitely on OpenShift v4.10\nOADP-538 - typo on noDefaultBackupLocation error on DPA CR\nOADP-552 - Validate OADP with 4.11 and Pod Security Admissions\nOADP-558 - Empty Failed Backup CRs can\u0027t be removed\nOADP-585 - OADP 1.0.3: CSI functionality is broken on OCP 4.11 due to missing v1beta1 API version\nOADP-586 - 
registry deployment still exists on 1.1 build, and the registry pod gets recreated endlessly\nOADP-592 - OADP must-gather add support for insecure tls\nOADP-597 - BSL validation logs\nOADP-598 - Data mover performance on backup blocks backup process\nOADP-599 - [Data Mover] Datamover Restic secret cannot be configured per bsl\nOADP-600 - Operator should validate volsync installation and raise warning if data mover is enabled\nOADP-602 - Support GCP for openshift-velero-plugin registry\nOADP-605 - [OCP 4.11] CSI restore fails with admission webhook \\\"volumesnapshotclasses.snapshot.storage.k8s.io\\\" denied\nOADP-607 - DataMover: VSB is stuck on SnapshotBackupDone\nOADP-610 - Data mover fails if a stale volumesnapshot exists in application namespace\nOADP-613 - DataMover: upstream documentation refers wrong CRs\nOADP-637 - Restic backup fails with CA certificate\nOADP-643 - [Data Mover] VSB and VSR names are not unique\nOADP-644 - VolumeSnapshotBackup and VolumeSnapshotRestore timeouts should be configurable\nOADP-648 - Remove default limits for velero and restic pods\nOADP-652 - Data mover VolSync pod errors with Noobaa\nOADP-655 - DataMover: volsync-dst-vsr pod completes although not all items where restored in the namespace\nOADP-660 - Data mover restic secret does not support Azure\nOADP-698 - DataMover: volume-snapshot-mover pod points to upstream image\nOADP-715 - Restic restore fails: restic-wait container continuously fails with \"Not found: /restores/\u003cpod-volume\u003e/.velero/\u003crestore-UID\u003e\"\nOADP-716 - Incremental restore: second restore of a namespace partially fails\nOADP-736 - Data mover VSB always fails with volsync 0.5\n\n6. Clusters and applications are all visible and\nmanaged from a single console\u2014with security policy built in. 
See the following\nRelease Notes documentation, which will be updated shortly for this\nrelease, for additional details about this release:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.5/html/release_notes/\n\nSecurity fixes:\n\n* moment: inefficient parsing algorithim resulting in DoS (CVE-2022-31129)\n* vm2: Sandbox Escape in vm2 (CVE-2022-36067)\n\nBug fixes:\n\n* Submariner Globalnet e2e tests failed on MTU between On-Prem to Public\nclusters (BZ# 2074547)\n\n* OCP 4.11 - Install fails because of: pods\n\"management-ingress-63029-5cf6789dd6-\" is forbidden: unable to validate\nagainst any security context constrain (BZ# 2082254)\n\n* subctl gather fails to gather libreswan data if CableDriver field is\nmissing/empty in Submariner Spec (BZ# 2083659)\n\n* Yaml editor for creating vSphere cluster moves to next line after typing\n(BZ# 2086883)\n\n* Submariner addon status doesn\u0027t track all deployment failures (BZ#\n2090311)\n\n* Unable to deploy Hypershift operator on MCE hub using ManagedClusterAddOn\nwithout including s3 secret (BZ# 2091170)\n\n* After switching to ACM 2.5 the managed clusters log \"unable to create\nClusterClaim\" errors (BZ# 2095481)\n\n* Enforce failed and report the violation after modified memory value in\nlimitrange policy (BZ# 2100036)\n\n* Creating an application fails with \"This application has no subscription\nmatch selector (spec.selector.matchExpressions)\" (BZ# 2101577)\n\n* Inconsistent cluster resource statuses between \"All Subscription\"\ntopology and individual topologies (BZ# 2102273)\n\n* managed cluster is in \"unknown\" state for 120 mins after OADP restore\n\n* RHACM 2.5.2 images (BZ# 2104553)\n\n* Subscription UI does not allow binding to label with empty value (BZ#\n2104961)\n\n* Upgrade to 2.5.1 from 2.5.0 fails due to missing Subscription CRD (BZ#\n2106069)\n\n* Region information is not available for Azure cloud in managedcluster CR\n(BZ# 2107134)\n\n* 
cluster uninstall log points to incorrect container name (BZ# 2107359)\n\n* ACM shows wrong path for Argo CD applicationset git generator (BZ#\n2107885)\n\n* Single node checkbox not visible for 4.11 images (BZ# 2109134)\n\n* Unable to deploy hypershift cluster when enabling\nvalidate-cluster-security (BZ# 2109544)\n\n* Deletion of Application (including app related resources) from the\nconsole fails to delete PlacementRule for the application (BZ# 20110026)\n\n* After the creation by a policy of job or deployment (in case the object\nis missing)ACM is trying to add new containers instead of updating (BZ#\n2117728)\n\n* pods in CrashLoopBackoff on 3.11 managed cluster (BZ# 2122292)\n\n* ArgoCD and AppSet Applications do not deploy to local-cluster (BZ#\n2124707)\n\n3. Bugs fixed (https://bugzilla.redhat.com/):\n\n2074547 - Submariner Globalnet e2e tests failed on MTU between On-Prem to Public clusters\n2082254 - OCP 4.11 - Install fails because of: pods \"management-ingress-63029-5cf6789dd6-\" is forbidden: unable to validate against any security context constraint\n2083659 - subctl gather fails to gather libreswan data if CableDriver field is missing/empty in Submariner Spec\n2086883 - Yaml editor for creating vSphere cluster moves to next line after typing\n2090311 - Submariner addon status doesn\u0027t track all deployment failures\n2091170 - Unable to deploy Hypershift operator on MCE hub using ManagedClusterAddOn without including s3 secret\n2095481 - After switching to ACM 2.5 the managed clusters log \"unable to create ClusterClaim\" errors\n2100036 - Enforce failed and report the violation after modified memory value in limitrange policy\n2101577 - Creating an application fails with \"This application has no subscription match selector (spec.selector.matchExpressions)\"\n2102273 - Inconsistent cluster resource statuses between \"All Subscription\" topology and individual topologies\n2103653 - managed cluster is in \"unknown\" state for 120 mins after OADP 
restore\n2104553 - RHACM 2.5.2 images\n2104961 - Subscription UI does not allow binding to label with empty value\n2105075 - CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS\n2106069 - Upgrade to 2.5.1 from 2.5.0 fails due to missing Subscription CRD\n2107134 - Region information is not available for Azure cloud in managedcluster CR\n2107359 - cluster uninstall log points to incorrect container name\n2107885 - ACM shows wrong path for Argo CD applicationset git generator\n2109134 - Single node checkbox not visible for 4.11 images\n2110026 - Deletion of Application (including app related resources) from the console fails to delete PlacementRule for the application\n2117728 - After the creation by a policy of job or deployment (in case the object is missing) ACM is trying to add new containers instead of updating\n2122292 - pods in CrashLoopBackoff on 3.11 managed cluster\n2124707 - ArgoCD and AppSet Applications do not deploy to local-cluster\n2124794 - CVE-2022-36067 vm2: Sandbox Escape in vm2\n\n5. 
\n\nBug Fix(es):\n\n* Cloning a Block DV to VM with Filesystem with not big enough size comes\nto endless loop - using pvc api (BZ#2033191)\n\n* Restart of VM Pod causes SSH keys to be regenerated within VM\n(BZ#2087177)\n\n* Import gzipped raw file causes image to be downloaded and uncompressed to\nTMPDIR (BZ#2089391)\n\n* [4.11] VM Snapshot Restore hangs indefinitely when backed by a\nsnapshotclass (BZ#2098225)\n\n* Fedora version in DataImportCrons is not \u0027latest\u0027 (BZ#2102694)\n\n* [4.11] Cloned VM\u0027s snapshot restore fails if the source VM disk is\ndeleted (BZ#2109407)\n\n* CNV introduces a compliance check fail in \"ocp4-moderate\" profile -\nroutes-protected-by-tls (BZ#2110562)\n\n* Nightly build: v4.11.0-578: index format was changed in 4.11 to\nfile-based instead of sqlite-based (BZ#2112643)\n\n* Unable to start windows VMs on PSI setups (BZ#2115371)\n\n* [4.11.1]virt-launcher cannot be started on OCP 4.12 due to PodSecurity\nrestricted:v1.24 (BZ#2128997)\n\n* Mark Windows 11 as TechPreview (BZ#2129013)\n\n* 4.11.1 rpms (BZ#2139453)\n\nThis advisory contains the following OpenShift Virtualization 4.11.1\nimages. 
\n\nRHEL-8-CNV-4.11\n\nvirt-cdi-operator-container-v4.11.1-5\nvirt-cdi-uploadserver-container-v4.11.1-5\nvirt-cdi-apiserver-container-v4.11.1-5\nvirt-cdi-importer-container-v4.11.1-5\nvirt-cdi-controller-container-v4.11.1-5\nvirt-cdi-cloner-container-v4.11.1-5\nvirt-cdi-uploadproxy-container-v4.11.1-5\ncheckup-framework-container-v4.11.1-3\nkubevirt-tekton-tasks-wait-for-vmi-status-container-v4.11.1-7\nkubevirt-tekton-tasks-create-datavolume-container-v4.11.1-7\nkubevirt-template-validator-container-v4.11.1-4\nvirt-handler-container-v4.11.1-5\nhostpath-provisioner-operator-container-v4.11.1-4\nvirt-api-container-v4.11.1-5\nvm-network-latency-checkup-container-v4.11.1-3\ncluster-network-addons-operator-container-v4.11.1-5\nvirtio-win-container-v4.11.1-4\nvirt-launcher-container-v4.11.1-5\novs-cni-marker-container-v4.11.1-5\nhyperconverged-cluster-webhook-container-v4.11.1-7\nvirt-controller-container-v4.11.1-5\nvirt-artifacts-server-container-v4.11.1-5\nkubevirt-tekton-tasks-modify-vm-template-container-v4.11.1-7\nkubevirt-tekton-tasks-disk-virt-customize-container-v4.11.1-7\nlibguestfs-tools-container-v4.11.1-5\nhostpath-provisioner-container-v4.11.1-4\nkubevirt-tekton-tasks-disk-virt-sysprep-container-v4.11.1-7\nkubevirt-tekton-tasks-copy-template-container-v4.11.1-7\ncnv-containernetworking-plugins-container-v4.11.1-5\nbridge-marker-container-v4.11.1-5\nvirt-operator-container-v4.11.1-5\nhostpath-csi-driver-container-v4.11.1-4\nkubevirt-tekton-tasks-create-vm-from-template-container-v4.11.1-7\nkubemacpool-container-v4.11.1-5\nhyperconverged-cluster-operator-container-v4.11.1-7\nkubevirt-ssp-operator-container-v4.11.1-4\novs-cni-plugin-container-v4.11.1-5\nkubevirt-tekton-tasks-cleanup-vm-container-v4.11.1-7\nkubevirt-tekton-tasks-operator-container-v4.11.1-2\ncnv-must-gather-container-v4.11.1-8\nkubevirt-console-plugin-container-v4.11.1-9\nhco-bundle-registry-container-v4.11.1-49\n\n3. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n2033191 - Cloning a Block DV to VM with Filesystem with not big enough size comes to endless loop - using pvc api\n2064857 - CVE-2022-24921 golang: regexp: stack exhaustion via a deeply nested expression\n2070772 - When specifying pciAddress for several SR-IOV NIC they are not correctly propagated to libvirt XML\n2077688 - CVE-2022-24675 golang: encoding/pem: fix stack overflow in Decode\n2077689 - CVE-2022-28327 golang: crypto/elliptic: panic caused by oversized scalar\n2087177 - Restart of VM Pod causes SSH keys to be regenerated within VM\n2089391 - Import gzipped raw file causes image to be downloaded and uncompressed to TMPDIR\n2091856 - ?Edit BootSource? action should have more explicit information when disabled\n2092793 - CVE-2022-30629 golang: crypto/tls: session tickets lack random ticket_age_add\n2098225 - [4.11] VM Snapshot Restore hangs indefinitely when backed by a snapshotclass\n2100495 - CVE-2021-38561 golang: out-of-bounds read in golang.org/x/text/language leads to DoS\n2102694 - Fedora version in DataImportCrons is not \u0027latest\u0027\n2109407 - [4.11] Cloned VM\u0027s snapshot restore fails if the source VM disk is deleted\n2110562 - CNV introduces a compliance check fail in \"ocp4-moderate\" profile - routes-protected-by-tls\n2112643 - Nightly build: v4.11.0-578: index format was changed in 4.11 to file-based instead of sqlite-based\n2115371 - Unable to start windows VMs on PSI setups\n2119613 - GiB changes to B in Template\u0027s Edit boot source reference modal\n2128554 - The storageclass of VM disk is different from quick created and customize created after changed the default storageclass\n2128872 - [4.11]Can\u0027t restore cloned VM\n2128997 - [4.11.1]virt-launcher cannot be started on OCP 4.12 due to PodSecurity restricted:v1.24\n2129013 - Mark Windows 11 as TechPreview\n2129235 - [RFE] Add \"Copy SSH command\" to VM action list\n2134668 - Cannot edit ssh even vm is stopped\n2139453 - 4.11.1 
rpms\n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n2064698 - CVE-2020-36518 jackson-databind: denial of service via a large depth of nested objects\n2113814 - CVE-2022-32189 golang: math/big: decoding big.Float and big.Rat types can panic if the encoded message is too short, potentially allowing a denial of service\n2124669 - CVE-2022-27664 golang: net/http: handle server errors after sending GOAWAY\n2132867 - CVE-2022-2879 golang: archive/tar: unbounded memory consumption when reading headers\n2132868 - CVE-2022-2880 golang: net/http/httputil: ReverseProxy should not forward unparseable query parameters\n2132872 - CVE-2022-41715 golang: regexp/syntax: limit memory used by parsing regexps\n2135244 - CVE-2022-42003 jackson-databind: deep wrapper array nesting wrt UNWRAP_SINGLE_VALUE_ARRAYS\n2135247 - CVE-2022-42004 jackson-databind: use of deeply nested arrays\n2140597 - CVE-2022-37603 loader-utils:Regular expression denial of service\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nLOG-2860 - Error on LokiStack Components when forwarding logs to Loki on proxy cluster\nLOG-3131 - vector: kube API server certificate validation failure due to hostname mismatch\nLOG-3222 - [release-5.5] fluentd plugin for kafka ca-bundle secret doesn\u0027t support multiple CAs\nLOG-3226 - FluentdQueueLengthIncreasing rule failing to be evaluated. \nLOG-3284 - [release-5.5][Vector] logs parsed into structured when json is set without structured types. 
\nLOG-3287 - [release-5.5] Increase value of cluster-logging PriorityClass to move closer to system-cluster-critical value\nLOG-3301 - [release-5.5][ClusterLogging] elasticsearchStatus in ClusterLogging instance CR is not updated when Elasticsearch status is changed\nLOG-3305 - [release-5.5] Kibana Authentication Exception cookie issue\nLOG-3310 - [release-5.5] Can\u0027t choose correct CA ConfigMap Key when creating lokistack in Console\nLOG-3332 - [release-5.5] Reconcile error on controller when creating LokiStack with tls config\n\n6. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n==================================================================== \nRed Hat Security Advisory\n\nSynopsis: Moderate: Red Hat JBoss Web Server 5.7.1 release and security update\nAdvisory ID: RHSA-2022:8917-01\nProduct: Red Hat JBoss Web Server\nAdvisory URL: https://access.redhat.com/errata/RHSA-2022:8917\nIssue date: 2022-12-12\nCVE Names: CVE-2022-1292 CVE-2022-2068\n====================================================================\n1. Summary:\n\nAn update is now available for Red Hat JBoss Web Server 5.7.1 on Red Hat\nEnterprise Linux versions 7, 8, and 9. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRed Hat JBoss Web Server 5.7 for RHEL 7 Server - x86_64\nRed Hat JBoss Web Server 5.7 for RHEL 8 - x86_64\nRed Hat JBoss Web Server 5.7 for RHEL 9 - x86_64\n\n3. Description:\n\nRed Hat JBoss Web Server is a fully integrated and certified set of\ncomponents for hosting Java web applications. It is comprised of the Apache\nTomcat Servlet container, JBoss HTTP Connector (mod_cluster), the\nPicketLink Vault extension for Apache Tomcat, and the Tomcat Native\nlibrary. 
\n\nThis release of Red Hat JBoss Web Server 5.7.1 serves as a replacement for\nRed Hat JBoss Web Server 5.7.0. This release includes bug fixes,\nenhancements and component upgrades, which are documented in the Release\nNotes, linked to in the References. \n\nSecurity Fix(es):\n\n* openssl: c_rehash script allows command injection (CVE-2022-1292)\n\n* openssl: the c_rehash script allows command injection (CVE-2022-2068)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\n4. Solution:\n\nBefore applying this update, make sure all previously released errata\nrelevant to your system have been applied. \n\nFor details on how to apply this update, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n5. Package List:\n\nRed Hat JBoss Web Server 5.7 for RHEL 7 Server:\n\nSource:\njws5-tomcat-native-1.2.31-11.redhat_11.el7jws.src.rpm\n\nx86_64:\njws5-tomcat-native-1.2.31-11.redhat_11.el7jws.x86_64.rpm\njws5-tomcat-native-debuginfo-1.2.31-11.redhat_11.el7jws.x86_64.rpm\n\nRed Hat JBoss Web Server 5.7 for RHEL 8:\n\nSource:\njws5-tomcat-native-1.2.31-11.redhat_11.el8jws.src.rpm\n\nx86_64:\njws5-tomcat-native-1.2.31-11.redhat_11.el8jws.x86_64.rpm\njws5-tomcat-native-debuginfo-1.2.31-11.redhat_11.el8jws.x86_64.rpm\n\nRed Hat JBoss Web Server 5.7 for RHEL 9:\n\nSource:\njws5-tomcat-native-1.2.31-11.redhat_11.el9jws.src.rpm\n\nx86_64:\njws5-tomcat-native-1.2.31-11.redhat_11.el9jws.x86_64.rpm\njws5-tomcat-native-debuginfo-1.2.31-11.redhat_11.el9jws.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. References:\n\nhttps://access.redhat.com/security/cve/CVE-2022-1292\nhttps://access.redhat.com/security/cve/CVE-2022-2068\nhttps://access.redhat.com/security/updates/classification/#moderate\n\n8. 
Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBY5dYDtzjgjWX9erEAQihfg/+JKRn1ponld/PXWb0JyTUZp2RsgqRlaoi\ndFWK8JVr3iIzA8pVUqiy+9fYqvRLvRNv8iyPezTFvlfi70FDLXd58QjxQd2zIcI2\ntvwFp3mFYfqT3iEz3PdvhiDpPx9XVeSuXgl8CglshJc4ARkLtdIJzkB6xoWl3fe0\nmyZzwJChpWzOYvZWZVzPRNzsuAi75pc/y8GwVh+fIlw3iySiskkspGVksXBmoBup\nXIM0O9ICMJ4jUbNTEZ0AwM6yZX1603sdvW60UarBVjf48vIM8x2ef6h84xEMB/3J\neLaUlm5Gm68CQx3Sf+ImCCmYcJ2LmX3KnBMGUhBiQGh2SlEJPKijlrHAhLX7M1YG\n/yvgd8plwRCAsYTlAJyhcXpBovNtP9io+S4kNy/j/HswvuUcJ+mrJNfZq6AwRnoF\ncNf2h1+Nl8VlT5YXkbZ0vRW1VbY7L4G1BCiqG2VGdjuOuynXh2URHsdKgs9zHY+5\nOMaV16fDbH23t04So+b4hxTsfelUUWEqyKk3qvZESNoFmWPCbaBpzDlawSGEFp5g\nLy0SN2cW39creXZ3uYioyMnHKeviSDGX8ik40c7mMYYaGnbgP1mPR8FWu9C3EoWi\n0LV3EDSHyFKFxUahjGzKKmjDQtYXPAt9Ci1Vp0OQFhKtAecfmlRZJEZRL4JCgKUd\nvabHaw7IH20=YAuF\n-----END PGP SIGNATURE-----\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n", "sources": [ { "db": "NVD", "id": "CVE-2022-2068" }, { "db": "VULMON", "id": "CVE-2022-2068" }, { "db": "PACKETSTORM", "id": "168265" }, { "db": "PACKETSTORM", "id": "168022" }, { "db": "PACKETSTORM", "id": "168351" }, { "db": "PACKETSTORM", "id": "168228" }, { "db": "PACKETSTORM", "id": "168378" }, { "db": "PACKETSTORM", "id": "168289" }, { "db": "PACKETSTORM", "id": "170083" }, { "db": "PACKETSTORM", "id": "170162" }, { "db": "PACKETSTORM", "id": "170197" } ], "trust": 1.8 }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "NVD", "id": "CVE-2022-2068", "trust": 2.6 }, { "db": "SIEMENS", "id": "SSA-332410", "trust": 1.7 }, { "db": "ICS CERT", "id": 
"ICSA-22-319-01", "trust": 0.7 }, { "db": "PACKETSTORM", "id": "168022", "trust": 0.7 }, { "db": "PACKETSTORM", "id": "168351", "trust": 0.7 }, { "db": "PACKETSTORM", "id": "168378", "trust": 0.7 }, { "db": "PACKETSTORM", "id": "170197", "trust": 0.7 }, { "db": "PACKETSTORM", "id": "167713", "trust": 0.6 }, { "db": "PACKETSTORM", "id": "168204", "trust": 0.6 }, { "db": "PACKETSTORM", "id": "167948", "trust": 0.6 }, { "db": "PACKETSTORM", "id": "168284", "trust": 0.6 }, { "db": "PACKETSTORM", "id": "168538", "trust": 0.6 }, { "db": "PACKETSTORM", "id": "168112", "trust": 0.6 }, { "db": "PACKETSTORM", "id": "168222", "trust": 0.6 }, { "db": "PACKETSTORM", "id": "168182", "trust": 0.6 }, { "db": "PACKETSTORM", "id": "167564", "trust": 0.6 }, { "db": "PACKETSTORM", "id": "168187", "trust": 0.6 }, { "db": "PACKETSTORM", "id": "168387", "trust": 0.6 }, { "db": "PACKETSTORM", "id": "169443", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2023.1430", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.3269", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.3109", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.5961", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.3355", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.6290", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.4296", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.4122", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.4568", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.4099", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.4747", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.3145", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.4167", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.4233", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.4669", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.6434", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.4323", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.3034", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.3977", 
"trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.3814", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.4525", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.4601", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.5247", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2022070615", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2022070209", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2022062906", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2022070434", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2022071151", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2022070712", "trust": 0.6 }, { "db": "CNNVD", "id": "CNNVD-202206-2112", "trust": 0.6 }, { "db": "VULMON", "id": "CVE-2022-2068", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "168265", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "168228", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "168289", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "170083", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "170162", "trust": 0.1 } ], "sources": [ { "db": "VULMON", "id": "CVE-2022-2068" }, { "db": "PACKETSTORM", "id": "168265" }, { "db": "PACKETSTORM", "id": "168022" }, { "db": "PACKETSTORM", "id": "168351" }, { "db": "PACKETSTORM", "id": "168228" }, { "db": "PACKETSTORM", "id": "168378" }, { "db": "PACKETSTORM", "id": "168289" }, { "db": "PACKETSTORM", "id": "170083" }, { "db": "PACKETSTORM", "id": "170162" }, { "db": "PACKETSTORM", "id": "170197" }, { "db": "CNNVD", "id": "CNNVD-202206-2112" }, { "db": "NVD", "id": "CVE-2022-2068" } ] }, "id": "VAR-202206-1428", "iot": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": true, "sources": [ { "db": "VARIoT devices database", "id": null } ], "trust": 0.416330645 }, "last_update_date": "2024-07-23T19:47:22.503000Z", "patch": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/patch#", "data": { "@container": "@list" }, "sources": { "@container": "@list", 
"@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "title": "OpenSSL Fixes for operating system command injection vulnerabilities", "trust": 0.6, "url": "http://123.124.177.30/web/xxk/bdxqbyid.tag?id=197983" }, { "title": "Debian Security Advisories: DSA-5169-1 openssl -- security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=debian_security_advisories\u0026qid=6b57464ee127384d3d853e9cc99cf350" }, { "title": "Amazon Linux AMI: ALAS-2022-1626", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux_ami\u0026qid=alas-2022-1626" }, { "title": "Debian CVElist Bug Report Logs: openssl: CVE-2022-2097", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=debian_cvelist_bugreportlogs\u0026qid=740b837c53d462fc86f3cb0849b86ca0" }, { "title": "Arch Linux Issues: ", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_issues\u0026qid=cve-2022-2068" }, { "title": "Amazon Linux 2: ALAS2-2022-1832", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=alas2-2022-1832" }, { "title": "Amazon Linux 2: ALAS2-2022-1831", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=alas2-2022-1831" }, { "title": "Amazon Linux 2: ALASOPENSSL-SNAPSAFE-2023-001", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=alasopenssl-snapsafe-2023-001" }, { "title": "Red Hat: ", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_cve_database\u0026qid=cve-2022-2068" }, { "title": "Red Hat: Moderate: Red Hat JBoss Web Server 5.7.1 release and security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20228917 - security advisory" }, { "title": "Red Hat: Moderate: Red Hat JBoss Web Server 5.7.1 release and security update", "trust": 0.1, "url": 
"https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20228913 - security advisory" }, { "title": "Red Hat: Moderate: openssl security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20225818 - security advisory" }, { "title": "Red Hat: Important: Red Hat Satellite Client security and bug fix update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20235982 - security advisory" }, { "title": "Red Hat: Moderate: openssl security and bug fix update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226224 - security advisory" }, { "title": "Red Hat: Important: Release of containers for OSP 16.2.z director operator tech preview", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226517 - security advisory" }, { "title": "Red Hat: Important: Self Node Remediation Operator 0.4.1 security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226184 - security advisory" }, { "title": "Red Hat: Important: Satellite 6.11.5.6 async security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20235980 - security advisory" }, { "title": "Amazon Linux 2022: ALAS2022-2022-123", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2022\u0026qid=alas2022-2022-123" }, { "title": "Red Hat: Important: Satellite 6.12.5.2 Async Security Update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20235979 - security advisory" }, { "title": "Red Hat: Critical: Multicluster Engine for Kubernetes 2.0.2 security and bug fixes", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226422 - 
security advisory" }, { "title": "Brocade Security Advisories: Access Denied", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=brocade_security_advisories\u0026qid=8efbc4133194fcddd0bca99df112b683" }, { "title": "Red Hat: Moderate: OpenShift Container Platform 4.11.1 bug fix and security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226103 - security advisory" }, { "title": "Amazon Linux 2022: ALAS2022-2022-195", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2022\u0026qid=alas2022-2022-195" }, { "title": "Red Hat: Important: Node Maintenance Operator 4.11.1 security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226188 - security advisory" }, { "title": "Red Hat: Moderate: Openshift Logging Security and Bug Fix update (5.3.11)", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226182 - security advisory" }, { "title": "Red Hat: Important: Logging Subsystem 5.5.0 - Red Hat OpenShift security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226051 - security advisory" }, { "title": "Red Hat: Moderate: Red Hat OpenShift Service Mesh 2.2.2 Containers security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226283 - security advisory" }, { "title": "Red Hat: Moderate: Logging Subsystem 5.4.5 Security and Bug Fix Update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226183 - security advisory" }, { "title": "Red Hat: Critical: Red Hat Advanced Cluster Management 2.5.2 security fixes and bug fixes", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226507 - security advisory" }, { "title": "Red 
Hat: Moderate: RHOSDT 2.6.0 operator/operand containers Security Update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20227055 - security advisory" }, { "title": "Red Hat: Moderate: OpenShift sandboxed containers 1.3.1 security fix and bug fix update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20227058 - security advisory" }, { "title": "Red Hat: Moderate: Red Hat JBoss Core Services Apache HTTP Server 2.4.51 SP1 security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20228840 - security advisory" }, { "title": "Red Hat: Moderate: New container image for Red Hat Ceph Storage 5.2 Security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226024 - security advisory" }, { "title": "Red Hat: Moderate: RHACS 3.72 enhancement and security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226714 - security advisory" }, { "title": "Red Hat: Moderate: OpenShift API for Data Protection (OADP) 1.1.0 security and bug fix update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226290 - security advisory" }, { "title": "Red Hat: Moderate: Gatekeeper Operator v0.2 security and container updates", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226348 - security advisory" }, { "title": "Red Hat: Moderate: Multicluster Engine for Kubernetes 2.1 security updates and bug fixes", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226345 - security advisory" }, { "title": "Red Hat: Important: Red Hat JBoss Core Services Apache HTTP Server 2.4.51 SP1 security update", "trust": 0.1, "url": 
"https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20228841 - security advisory" }, { "title": "Red Hat: Moderate: RHSA: Submariner 0.13 - security and enhancement update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226346 - security advisory" }, { "title": "Red Hat: Moderate: OpenShift API for Data Protection (OADP) 1.0.4 security and bug fix update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226430 - security advisory" }, { "title": "Red Hat: Moderate: Red Hat Advanced Cluster Management 2.6.0 security updates and bug fixes", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226370 - security advisory" }, { "title": "Red Hat: Moderate: Red Hat Advanced Cluster Management 2.3.12 security updates and bug fixes", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226271 - security advisory" }, { "title": "Red Hat: Critical: Red Hat Advanced Cluster Management 2.4.6 security update and bug fixes", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226696 - security advisory" }, { "title": "Red Hat: Important: Red Hat OpenShift Data Foundation 4.11.0 security, enhancement, \u0026 bugfix update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226156 - security advisory" }, { "title": "Red Hat: Moderate: OpenShift Virtualization 4.11.1 security and bug fix update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20228750 - security advisory" }, { "title": "Red Hat: Important: OpenShift Virtualization 4.11.0 Images security and bug fix update", "trust": 0.1, "url": 
"https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226526 - security advisory" }, { "title": "Red Hat: Important: Migration Toolkit for Containers (MTC) 1.7.4 security and bug fix update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226429 - security advisory" }, { "title": "Red Hat: Important: OpenShift Virtualization 4.12.0 Images security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20230408 - security advisory" }, { "title": "Red Hat: Moderate: Openshift Logging 5.3.14 bug fix release and security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20228889 - security advisory" }, { "title": "Red Hat: Moderate: Logging Subsystem 5.5.5 - Red Hat OpenShift security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20228781 - security advisory" }, { "title": "Red Hat: Important: OpenShift Container Platform 4.11.0 bug fix and security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20225069 - security advisory" }, { "title": "Smart Check Scan-Report", "trust": 0.1, "url": "https://github.com/mawinkler/c1-cs-scan-result " }, { "title": "Repository with scripts to verify system against CVE", "trust": 0.1, "url": "https://github.com/backloop-biz/vulnerability_checker " }, { "title": "https://github.com/jntass/TASSL-1.1.1", "trust": 0.1, "url": "https://github.com/jntass/tassl-1.1.1 " }, { "title": "Repository with scripts to verify system against CVE", "trust": 0.1, "url": "https://github.com/backloop-biz/cve_checks " }, { "title": "https://github.com/tianocore-docs/ThirdPartySecurityAdvisories", "trust": 0.1, "url": "https://github.com/tianocore-docs/thirdpartysecurityadvisories " }, { "title": "OpenSSL-CVE-lib", 
"trust": 0.1, "url": "https://github.com/chnzzh/openssl-cve-lib " }, { "title": "The Register", "trust": 0.1, "url": "https://www.theregister.co.uk/2022/06/27/openssl_304_memory_corruption_bug/" } ], "sources": [ { "db": "VULMON", "id": "CVE-2022-2068" }, { "db": "CNNVD", "id": "CNNVD-202206-2112" } ] }, "problemtype_data": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "problemtype": "CWE-78", "trust": 1.0 } ], "sources": [ { "db": "NVD", "id": "CVE-2022-2068" } ] }, "references": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/references#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 1.8, "url": "https://www.debian.org/security/2022/dsa-5169" }, { "trust": 1.7, "url": "https://www.openssl.org/news/secadv/20220621.txt" }, { "trust": 1.7, "url": "https://security.netapp.com/advisory/ntap-20220707-0008/" }, { "trust": 1.7, "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-332410.pdf" }, { "trust": 1.1, "url": "https://git.openssl.org/gitweb/?p=openssl.git%3ba=commitdiff%3bh=2c9c35870601b4a44d86ddbf512b38df38285cfa" }, { "trust": 1.1, "url": "https://git.openssl.org/gitweb/?p=openssl.git%3ba=commitdiff%3bh=9639817dac8bbbaa64d09efad7464ccc405527c7" }, { "trust": 1.1, "url": "https://git.openssl.org/gitweb/?p=openssl.git%3ba=commitdiff%3bh=7a9c027159fe9e1bbc2cd38a8a2914bff0d5abd9" }, { "trust": 1.1, "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/6wzzbkuhqfgskgnxxkicsrpl7amvw5m5/" }, { "trust": 1.1, "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/vcmnwkerpbkoebnl7clttx3zzczlh7xa/" }, { "trust": 0.9, "url": "https://access.redhat.com/security/team/contact/" }, { "trust": 0.9, 
"url": "https://access.redhat.com/security/cve/cve-2022-1292" }, { "trust": 0.9, "url": "https://access.redhat.com/security/cve/cve-2022-2068" }, { "trust": 0.9, "url": "https://bugzilla.redhat.com/):" }, { "trust": 0.9, "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce" }, { "trust": 0.8, "url": "https://access.redhat.com/security/cve/cve-2022-2097" }, { "trust": 0.8, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1292" }, { "trust": 0.8, "url": "https://access.redhat.com/security/cve/cve-2022-1586" }, { "trust": 0.8, "url": "https://access.redhat.com/security/updates/classification/#moderate" }, { "trust": 0.7, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2068" }, { "trust": 0.7, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1586" }, { "trust": 0.6, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2097" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2022-32206" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2022-32208" }, { "trust": 0.6, "url": "https://git.openssl.org/gitweb/?p=openssl.git;a=commitdiff;h=9639817dac8bbbaa64d09efad7464ccc405527c7" }, { "trust": 0.6, "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/6wzzbkuhqfgskgnxxkicsrpl7amvw5m5/" }, { "trust": 0.6, "url": "https://git.openssl.org/gitweb/?p=openssl.git;a=commitdiff;h=2c9c35870601b4a44d86ddbf512b38df38285cfa" }, { "trust": 0.6, "url": "https://git.openssl.org/gitweb/?p=openssl.git;a=commitdiff;h=7a9c027159fe9e1bbc2cd38a8a2914bff0d5abd9" }, { "trust": 0.6, "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/vcmnwkerpbkoebnl7clttx3zzczlh7xa/" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.4747" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.3977" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.4669" }, { "trust": 0.6, "url": 
"https://packetstormsecurity.com/files/170197/red-hat-security-advisory-2022-8917-01.html" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.3814" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/168538/red-hat-security-advisory-2022-6696-01.html" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/167948/red-hat-security-advisory-2022-5818-01.html" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/168222/red-hat-security-advisory-2022-6283-01.html" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022062906" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/168182/red-hat-security-advisory-2022-6184-01.html" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.6290" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/168204/red-hat-security-advisory-2022-6224-01.html" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.4099" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.4296" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.4233" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.6434" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.3145" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022070209" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/168378/red-hat-security-advisory-2022-6507-01.html" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.5247" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.5961" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.3269" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/167713/ubuntu-security-notice-usn-5488-2.html" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.3109" }, { "trust": 0.6, "url": 
"https://cxsecurity.com/cveshow/cve-2022-2068/" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/168112/red-hat-security-advisory-2022-6051-01.html" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022071151" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/168187/red-hat-security-advisory-2022-6188-01.html" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/168284/red-hat-security-advisory-2022-6183-01.html" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2023.1430" }, { "trust": 0.6, "url": "https://us-cert.cisa.gov/ics/advisories/icsa-22-319-01" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/168351/red-hat-security-advisory-2022-6430-01.html" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.4167" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/167564/ubuntu-security-notice-usn-5488-1.html" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.3034" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022070615" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/168022/red-hat-security-advisory-2022-6024-01.html" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.4122" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.4323" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.3355" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022070434" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.4525" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/169443/red-hat-security-advisory-2022-7058-01.html" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022070712" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.4568" }, { "trust": 0.6, "url": 
"https://packetstormsecurity.com/files/168387/red-hat-security-advisory-2022-6517-01.html" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.4601" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2022-1785" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2022-1897" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2022-1927" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2022-29154" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2022-25314" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2022-30629" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-40528" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2022-25313" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2526" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-25314" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-40528" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2022-30631" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-25313" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2022-2526" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2022-29824" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1897" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1785" }, { "trust": 0.4, "url": "https://access.redhat.com/articles/11258" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1927" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-24675" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-29154" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-38561" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-32148" }, { 
"trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1962" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-30630" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-1705" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1705" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-38561" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-1962" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3634" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-21698" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1271" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-26691" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3634" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-24675" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-21698" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-1271" }, { "trust": 0.2, "url": "https://issues.jboss.org/):" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-28327" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32206" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32208" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2016-3709" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-1304" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-26700" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-26716" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-26710" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-2509" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-22629" }, { "trust": 0.2, "url": 
"https://access.redhat.com/security/cve/cve-2022-26719" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-26717" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-22662" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-27404" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2016-3709" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-34903" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-22624" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-3515" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35525" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-37434" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-27406" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-35525" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35527" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-26709" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-22628" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-27405" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-35527" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-30293" }, { "trust": 0.1, "url": "https://cwe.mitre.org/data/definitions/78.html" }, { "trust": 0.1, "url": "https://nvd.nist.gov" }, { "trust": 0.1, "url": "https://github.com/backloop-biz/vulnerability_checker" }, { "trust": 0.1, "url": "https://www.cisa.gov/uscert/ics/advisories/icsa-22-319-01" }, { "trust": 0.1, "url": "https://alas.aws.amazon.com/alas-2022-1626.html" }, { "trust": 0.1, "url": "https://submariner.io/getting-started/" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:6346" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2022-30635" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-29824" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-28131" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-28131" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-30633" }, { "trust": 0.1, "url": "https://submariner.io/." }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-30632" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-30629" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.6/html/add-ons/submariner#submariner-deploy-console" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-43813" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-27782" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22576" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-27776" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0670" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-22576" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/5.2/html-single/release_notes/index" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-43813" }, { "trust": 0.1, "url": "https://access.redhat.com/articles/1548993" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-27774" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0670" }, { "trust": 0.1, "url": "https://access.redhat.com/articles/2789521" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-21673" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-21673" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:6024" }, { "trust": 
0.1, "url": "https://access.redhat.com/errata/rhsa-2022:6430" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26691" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:6290" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-28327" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.5/html-single/install/index#installing" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:6507" }, { "trust": 0.1, "url": "https://access.redhat.com/security/updates/classification/#critical" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-32250" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-31129" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-36067" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1012" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1012" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32250" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.5/html/release_notes/" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-31129" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.9/logging/cluster-logging-release-notes.html" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.9/logging/cluster-logging-upgrading.html" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:6182" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-30631" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-0308" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-38177" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-0308" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2022-25309" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-30698" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-30699" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-24921" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-0256" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2015-20107" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1304" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-0256" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-25310" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2015-20107" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0391" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-40674" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-24795" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:8750" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-38178" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-25308" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0934" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0391" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0934" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-22844" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-28390" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30002" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-21619" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-24448" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-27950" }, { "trust": 0.1, 
"url": "https://access.redhat.com/security/cve/cve-2021-3640" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.11/release_notes/ocp-4-11-release-notes.html" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36558" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0168" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0854" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-20368" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0617" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0865" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0562" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-2586" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:8781" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-25255" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-41715" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-21624" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0168" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-30002" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0865" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36516" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1016" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-28893" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0854" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3640" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-21618" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-2879" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2022-2078" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0891" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0617" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.11/logging/cluster-logging-upgrading.html" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-21626" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-39399" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1852" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-36946" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0562" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-42003" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1055" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-26373" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-2938" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1355" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-32189" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0909" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1048" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-36516" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0561" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0924" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-2880" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-23960" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-36518" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-36558" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2022-0908" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-29581" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0561" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1184" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36518" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-21499" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-2639" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-21628" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-42004" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-27664" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-37603" }, { "trust": 0.1, "url": "https://access.redhat.com/security/team/key/" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:8917" } ], "sources": [ { "db": "VULMON", "id": "CVE-2022-2068" }, { "db": "PACKETSTORM", "id": "168265" }, { "db": "PACKETSTORM", "id": "168022" }, { "db": "PACKETSTORM", "id": "168351" }, { "db": "PACKETSTORM", "id": "168228" }, { "db": "PACKETSTORM", "id": "168378" }, { "db": "PACKETSTORM", "id": "168289" }, { "db": "PACKETSTORM", "id": "170083" }, { "db": "PACKETSTORM", "id": "170162" }, { "db": "PACKETSTORM", "id": "170197" }, { "db": "CNNVD", "id": "CNNVD-202206-2112" }, { "db": "NVD", "id": "CVE-2022-2068" } ] }, "sources": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "VULMON", "id": "CVE-2022-2068" }, { "db": "PACKETSTORM", "id": "168265" }, { "db": "PACKETSTORM", "id": "168022" }, { "db": "PACKETSTORM", "id": "168351" }, { "db": "PACKETSTORM", "id": "168228" }, { "db": "PACKETSTORM", "id": "168378" }, { "db": "PACKETSTORM", "id": "168289" }, { "db": "PACKETSTORM", "id": "170083" }, { "db": 
"PACKETSTORM", "id": "170162" }, { "db": "PACKETSTORM", "id": "170197" }, { "db": "CNNVD", "id": "CNNVD-202206-2112" }, { "db": "NVD", "id": "CVE-2022-2068" } ] }, "sources_release_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2022-06-21T00:00:00", "db": "VULMON", "id": "CVE-2022-2068" }, { "date": "2022-09-07T16:37:33", "db": "PACKETSTORM", "id": "168265" }, { "date": "2022-08-10T15:50:41", "db": "PACKETSTORM", "id": "168022" }, { "date": "2022-09-13T15:41:58", "db": "PACKETSTORM", "id": "168351" }, { "date": "2022-09-01T16:34:06", "db": "PACKETSTORM", "id": "168228" }, { "date": "2022-09-14T15:08:07", "db": "PACKETSTORM", "id": "168378" }, { "date": "2022-09-07T17:09:04", "db": "PACKETSTORM", "id": "168289" }, { "date": "2022-12-02T15:57:08", "db": "PACKETSTORM", "id": "170083" }, { "date": "2022-12-08T16:34:22", "db": "PACKETSTORM", "id": "170162" }, { "date": "2022-12-12T23:02:33", "db": "PACKETSTORM", "id": "170197" }, { "date": "2022-06-21T00:00:00", "db": "CNNVD", "id": "CNNVD-202206-2112" }, { "date": "2022-06-21T15:15:09.060000", "db": "NVD", "id": "CVE-2022-2068" } ] }, "sources_update_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2023-11-07T00:00:00", "db": "VULMON", "id": "CVE-2022-2068" }, { "date": "2023-03-09T00:00:00", "db": "CNNVD", "id": "CNNVD-202206-2112" }, { "date": "2023-11-07T03:46:11.177000", "db": "NVD", "id": "CVE-2022-2068" } ] }, "threat_type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/threat_type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "remote", "sources": [ { "db": "CNNVD", "id": "CNNVD-202206-2112" } ], "trust": 0.6 }, "title": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/title#", "sources": { "@container": "@list", "@context": { "@vocab": 
"https://www.variotdbs.pl/ref/sources#" } } }, "data": "OpenSSL Operating system command injection vulnerability", "sources": [ { "db": "CNNVD", "id": "CNNVD-202206-2112" } ], "trust": 0.6 }, "type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "operating system command injection", "sources": [ { "db": "CNNVD", "id": "CNNVD-202206-2112" } ], "trust": 0.6 } }
var-202203-1690
Vulnerability from variot
zlib before 1.2.12 allows memory corruption when deflating (i.e., when compressing) if the input has many distant matches.

==========================================================================
Ubuntu Security Notice USN-5359-2
June 13, 2022
rsync vulnerability
A security issue affects these releases of Ubuntu and its derivatives:
- Ubuntu 16.04 ESM
Summary:
rsync could be made to crash or run programs if it received specially crafted network traffic.
Software Description:
- rsync: fast, versatile, remote (and local) file-copying tool
Details:
USN-5359-1 fixed vulnerabilities in rsync.
Original advisory details:
Danilo Ramos discovered that rsync incorrectly handled memory when performing certain zlib deflating operations. An attacker could use this issue to cause rsync to crash, resulting in a denial of service, or possibly execute arbitrary code.
Update instructions:
The problem can be corrected by updating your system to the following package versions:
Ubuntu 16.04 ESM: rsync 3.1.1-3ubuntu1.3+esm1
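Whether an installed rsync already carries the fixed version listed above can be sketched with a simplified Debian-style version comparison. The helper names below are hypothetical, and the real dpkg algorithm also handles epochs and tilde ordering, so treat this as an illustration only:

```python
import re

def split_version(v):
    """Split a version string into alternating digit/non-digit chunks
    so that numeric parts compare numerically (simplified dpkg-style)."""
    return [int(t) if t.isdigit() else t for t in re.findall(r"\d+|\D+", v)]

def is_patched(installed, fixed="3.1.1-3ubuntu1.3+esm1"):
    """True if `installed` sorts at or above the fixed version."""
    return split_version(installed) >= split_version(fixed)

assert is_patched("3.1.1-3ubuntu1.3+esm1")
assert not is_patched("3.1.1-3ubuntu1.2")
```

For an authoritative answer on a real system, `dpkg --compare-versions` implements the full ordering rules that this sketch omits.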
In general, a standard system update will make all the necessary changes.

Bugs fixed (https://bugzilla.redhat.com/):
2004133 - CVE-2021-37136 netty-codec: Bzip2Decoder doesn't allow setting size restrictions for decompressed data
2004135 - CVE-2021-37137 netty-codec: SnappyFrameDecoder doesn't restrict chunk length and may buffer skippable chunks in an unnecessary way
2031958 - CVE-2021-43797 netty: control chars in header names may lead to HTTP request smuggling
2045880 - CVE-2022-21698 prometheus/client_golang: Denial of service using InstrumentHandlerCounter
2058404 - CVE-2022-0759 kubeclient: kubeconfig parsing error can lead to MITM attacks
- JIRA issues fixed (https://issues.jboss.org/):
LOG-2334 - [release-5.3] Events listing out of order in Kibana 6.8.1
LOG-2450 - http.max_header_size set to 128kb causes communication with elasticsearch to stop working
LOG-2481 - EO shouldn't grant cluster-wide permission to system:serviceaccount:openshift-monitoring:prometheus-k8s when ES cluster is deployed. [openshift-logging 5.3]
- This update provides security fixes, bug fixes, and updates container images.

Description:
Red Hat Advanced Cluster Management for Kubernetes 2.4.4 images
Red Hat Advanced Cluster Management for Kubernetes provides the capabilities to address common challenges that administrators and site reliability engineers face as they work across a range of public and private cloud environments. Clusters and applications are all visible and managed from a single console—with security policy built in.
This advisory contains the container images for Red Hat Advanced Cluster Management for Kubernetes, which fix several bugs. See the following Release Notes documentation, which will be updated shortly for this release, for additional details about this release:
https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.4/html/release_notes/
Security fixes:
- Vm2: vulnerable to Sandbox Bypass (CVE-2021-23555)
- Golang.org/x/crypto: empty plaintext packet causes panic (CVE-2021-43565)
- Follow-redirects: Exposure of Private Personal Information to an Unauthorized Actor (CVE-2022-0155)
- Node-fetch: exposure of sensitive information to an unauthorized actor (CVE-2022-0235)
- Follow-redirects: Exposure of Sensitive Information via Authorization Header leak (CVE-2022-0536)
- Urijs: Authorization Bypass Through User-Controlled Key (CVE-2022-0613)
- Nconf: Prototype pollution in memory store (CVE-2022-21803)
- Nats-server: misusing the "dynamically provisioned sandbox accounts" feature authenticated user can obtain the privileges of the System account (CVE-2022-24450)
- Urijs: Leading white space bypasses protocol validation (CVE-2022-24723)
- Node-forge: Signature verification leniency in checking digestAlgorithm structure can lead to signature forgery (CVE-2022-24771)
- Node-forge: Signature verification failing to check tailing garbage bytes can lead to signature forgery (CVE-2022-24772)
- Node-forge: Signature verification leniency in checking DigestInfo structure (CVE-2022-24773)
- Cross-fetch: Exposure of Private Personal Information to an Unauthorized Actor (CVE-2022-1365)
- Moment.js: Path traversal in moment.locale (CVE-2022-24785)
Bug fixes:
- Failed ClusterDeployment validation errors do not surface through the ClusterPool UI (Bugzilla #1995380)
- Agents wrong validation failure on failing to fetch image needed for installation (Bugzilla #2008583)
- Fix catalogsource name (Bugzilla #2038250)
- When the ocp console operator is disabled on the managed cluster, the cluster claims failed to update (Bugzilla #2057761)
- Multicluster-operators-hub-subscription OOMKilled (Bugzilla #2053308)
- RHACM 2.4.1 Console becomes unstable and refuses login after one hour (Bugzilla #2061958)
- RHACM 2.4.4 images (Bugzilla #2077548)
Bugs fixed (https://bugzilla.redhat.com/):
1995380 - failed ClusterDeployment validation errors do not surface through the ClusterPool UI
2008583 - Agents wrong validation failure on failing to fetch image needed for installation
2030787 - CVE-2021-43565 golang.org/x/crypto: empty plaintext packet causes panic
2038250 - Fix catalogsource name
2044556 - CVE-2022-0155 follow-redirects: Exposure of Private Personal Information to an Unauthorized Actor
2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor
2052573 - CVE-2022-24450 nats-server: misusing the "dynamically provisioned sandbox accounts" feature authenticated user can obtain the privileges of the System account
2053259 - CVE-2022-0536 follow-redirects: Exposure of Sensitive Information via Authorization Header leak
2053308 - multicluster-operators-hub-subscription OOMKilled
2054114 - CVE-2021-23555 vm2: vulnerable to Sandbox Bypass
2055496 - CVE-2022-0613 urijs: Authorization Bypass Through User-Controlled Key
2057761 - When the ocp console operator is disabled on the managed cluster, the cluster claims failed to update
2058295 - ACM doesn't accept secret type opaque for cluster api certificate
2061958 - RHACM 2.4.1 Console becomes unstable and refuses login after one hour
2062370 - CVE-2022-24723 urijs: Leading white space bypasses protocol validation
2067387 - CVE-2022-24771 node-forge: Signature verification leniency in checking digestAlgorithm structure can lead to signature forgery
2067458 - CVE-2022-24772 node-forge: Signature verification failing to check tailing garbage bytes can lead to signature forgery
2067461 - CVE-2022-24773 node-forge: Signature verification leniency in checking DigestInfo structure
2072009 - CVE-2022-24785 Moment.js: Path traversal in moment.locale
2074689 - CVE-2022-21803 nconf: Prototype pollution in memory store
2076133 - CVE-2022-1365 cross-fetch: Exposure of Private Personal Information to an Unauthorized Actor
2077548 - RHACM 2.4.4 images
- Bugs fixed (https://bugzilla.redhat.com/):
2081686 - CVE-2022-29165 argocd: ArgoCD will blindly trust JWT claims if anonymous access is enabled
2081689 - CVE-2022-24905 argocd: Login screen allows message spoofing if SSO is enabled
2081691 - CVE-2022-24904 argocd: Symlink following allows leaking out-of-bound manifests and JSON files from Argo CD repo-server
-
Additional Changes:
For detailed information on changes in this release, see the Red Hat Enterprise Linux 8.7 Release Notes linked from the References section. Description:
Red Hat Openshift GitOps is a declarative way to implement continuous deployment for cloud native applications.
Security Fix(es):
- argocd: vulnerable to a variety of attacks when an SSO login is initiated from the Argo CD CLI or the UI.

Bugs fixed (https://bugzilla.redhat.com/):

2096278 - CVE-2022-31035 argocd: cross-site scripting (XSS) allow a malicious user to inject a javascript link in the UI
2096282 - CVE-2022-31034 argocd: vulnerable to a variety of attacks when an SSO login is initiated from the Argo CD CLI or the UI.
2096283 - CVE-2022-31016 argocd: vulnerable to an uncontrolled memory consumption bug
2096291 - CVE-2022-31036 argocd: vulnerable to a symlink following bug allowing a malicious user with repository write access
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
====================================================================
Red Hat Security Advisory
Synopsis: Important: zlib security update
Advisory ID: RHSA-2022:2213-01
Product: Red Hat Enterprise Linux
Advisory URL: https://access.redhat.com/errata/RHSA-2022:2213
Issue date: 2022-05-11
CVE Names: CVE-2018-25032
====================================================================
1. Summary:
An update for zlib is now available for Red Hat Enterprise Linux 7.
Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Relevant releases/architectures:
Red Hat Enterprise Linux Client (v. 7) - x86_64
Red Hat Enterprise Linux Client Optional (v. 7) - x86_64
Red Hat Enterprise Linux ComputeNode (v. 7) - x86_64
Red Hat Enterprise Linux ComputeNode Optional (v. 7) - x86_64
Red Hat Enterprise Linux Server (v. 7) - ppc64, ppc64le, s390x, x86_64
Red Hat Enterprise Linux Server Optional (v. 7) - ppc64, ppc64le, s390x, x86_64
Red Hat Enterprise Linux Workstation (v. 7) - x86_64
Red Hat Enterprise Linux Workstation Optional (v. 7) - x86_64
- Description:
The zlib packages provide a general-purpose lossless data compression library that is used by many different programs.
Security Fix(es):
- zlib: A flaw found in zlib when compressing (not decompressing) certain inputs (CVE-2018-25032)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
- Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/11258
- Bugs fixed (https://bugzilla.redhat.com/):
2067945 - CVE-2018-25032 zlib: A flaw found in zlib when compressing (not decompressing) certain inputs
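The fix for CVE-2018-25032 landed upstream in zlib 1.2.12, and the bug was on the compression (deflate) side only. A quick sanity check from Python, which links against the system zlib, is sketched below. Note this version check is only a heuristic: enterprise builds such as the zlib-1.2.7-20.el7_9 packages in this advisory backport the fix without bumping the upstream version string, so querying the package manager against the advisory's package list remains the authoritative check on RHEL.

```python
import zlib

def zlib_runtime_at_least(required=(1, 2, 12)):
    """Compare the zlib version Python actually linked at runtime.

    Heuristic only: distributions (e.g. RHEL 7's 1.2.7-20.el7_9) backport
    the CVE-2018-25032 fix without changing the upstream version string.
    """
    parts = zlib.ZLIB_RUNTIME_VERSION.split(".")
    nums = tuple(int(p) for p in parts if p.isdigit())
    return nums >= required

# The flaw affected deflate (compressing); a normal compress/decompress
# round trip must in any case remain lossless.
payload = b"example data " * 100
assert zlib.decompress(zlib.compress(payload, 9)) == payload
```

`zlib.ZLIB_RUNTIME_VERSION` reports the library loaded at run time, as opposed to `zlib.ZLIB_VERSION`, the version Python was built against; the two can differ on patched systems.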
- Package List:
Red Hat Enterprise Linux Client (v. 7):
Source: zlib-1.2.7-20.el7_9.src.rpm
x86_64:
zlib-1.2.7-20.el7_9.i686.rpm
zlib-1.2.7-20.el7_9.x86_64.rpm
zlib-debuginfo-1.2.7-20.el7_9.i686.rpm
zlib-debuginfo-1.2.7-20.el7_9.x86_64.rpm
Red Hat Enterprise Linux Client Optional (v. 7):
x86_64:
minizip-1.2.7-20.el7_9.i686.rpm
minizip-1.2.7-20.el7_9.x86_64.rpm
minizip-devel-1.2.7-20.el7_9.i686.rpm
minizip-devel-1.2.7-20.el7_9.x86_64.rpm
zlib-debuginfo-1.2.7-20.el7_9.i686.rpm
zlib-debuginfo-1.2.7-20.el7_9.x86_64.rpm
zlib-devel-1.2.7-20.el7_9.i686.rpm
zlib-devel-1.2.7-20.el7_9.x86_64.rpm
zlib-static-1.2.7-20.el7_9.i686.rpm
zlib-static-1.2.7-20.el7_9.x86_64.rpm
Red Hat Enterprise Linux ComputeNode (v. 7):
Source: zlib-1.2.7-20.el7_9.src.rpm
x86_64:
zlib-1.2.7-20.el7_9.i686.rpm
zlib-1.2.7-20.el7_9.x86_64.rpm
zlib-debuginfo-1.2.7-20.el7_9.i686.rpm
zlib-debuginfo-1.2.7-20.el7_9.x86_64.rpm
Red Hat Enterprise Linux ComputeNode Optional (v. 7):
x86_64:
minizip-1.2.7-20.el7_9.i686.rpm
minizip-1.2.7-20.el7_9.x86_64.rpm
minizip-devel-1.2.7-20.el7_9.i686.rpm
minizip-devel-1.2.7-20.el7_9.x86_64.rpm
zlib-debuginfo-1.2.7-20.el7_9.i686.rpm
zlib-debuginfo-1.2.7-20.el7_9.x86_64.rpm
zlib-devel-1.2.7-20.el7_9.i686.rpm
zlib-devel-1.2.7-20.el7_9.x86_64.rpm
zlib-static-1.2.7-20.el7_9.i686.rpm
zlib-static-1.2.7-20.el7_9.x86_64.rpm
Red Hat Enterprise Linux Server (v. 7):
Source: zlib-1.2.7-20.el7_9.src.rpm
ppc64:
zlib-1.2.7-20.el7_9.ppc.rpm
zlib-1.2.7-20.el7_9.ppc64.rpm
zlib-debuginfo-1.2.7-20.el7_9.ppc.rpm
zlib-debuginfo-1.2.7-20.el7_9.ppc64.rpm
zlib-devel-1.2.7-20.el7_9.ppc.rpm
zlib-devel-1.2.7-20.el7_9.ppc64.rpm

ppc64le:
zlib-1.2.7-20.el7_9.ppc64le.rpm
zlib-debuginfo-1.2.7-20.el7_9.ppc64le.rpm
zlib-devel-1.2.7-20.el7_9.ppc64le.rpm

s390x:
zlib-1.2.7-20.el7_9.s390.rpm
zlib-1.2.7-20.el7_9.s390x.rpm
zlib-debuginfo-1.2.7-20.el7_9.s390.rpm
zlib-debuginfo-1.2.7-20.el7_9.s390x.rpm
zlib-devel-1.2.7-20.el7_9.s390.rpm
zlib-devel-1.2.7-20.el7_9.s390x.rpm

x86_64:
zlib-1.2.7-20.el7_9.i686.rpm
zlib-1.2.7-20.el7_9.x86_64.rpm
zlib-debuginfo-1.2.7-20.el7_9.i686.rpm
zlib-debuginfo-1.2.7-20.el7_9.x86_64.rpm
zlib-devel-1.2.7-20.el7_9.i686.rpm
zlib-devel-1.2.7-20.el7_9.x86_64.rpm
Red Hat Enterprise Linux Server Optional (v. 7):
ppc64:
minizip-1.2.7-20.el7_9.ppc.rpm
minizip-1.2.7-20.el7_9.ppc64.rpm
minizip-devel-1.2.7-20.el7_9.ppc.rpm
minizip-devel-1.2.7-20.el7_9.ppc64.rpm
zlib-debuginfo-1.2.7-20.el7_9.ppc.rpm
zlib-debuginfo-1.2.7-20.el7_9.ppc64.rpm
zlib-static-1.2.7-20.el7_9.ppc.rpm
zlib-static-1.2.7-20.el7_9.ppc64.rpm

ppc64le:
minizip-1.2.7-20.el7_9.ppc64le.rpm
minizip-devel-1.2.7-20.el7_9.ppc64le.rpm
zlib-debuginfo-1.2.7-20.el7_9.ppc64le.rpm
zlib-static-1.2.7-20.el7_9.ppc64le.rpm

s390x:
minizip-1.2.7-20.el7_9.s390.rpm
minizip-1.2.7-20.el7_9.s390x.rpm
minizip-devel-1.2.7-20.el7_9.s390.rpm
minizip-devel-1.2.7-20.el7_9.s390x.rpm
zlib-debuginfo-1.2.7-20.el7_9.s390.rpm
zlib-debuginfo-1.2.7-20.el7_9.s390x.rpm
zlib-static-1.2.7-20.el7_9.s390.rpm
zlib-static-1.2.7-20.el7_9.s390x.rpm

x86_64:
minizip-1.2.7-20.el7_9.i686.rpm
minizip-1.2.7-20.el7_9.x86_64.rpm
minizip-devel-1.2.7-20.el7_9.i686.rpm
minizip-devel-1.2.7-20.el7_9.x86_64.rpm
zlib-debuginfo-1.2.7-20.el7_9.i686.rpm
zlib-debuginfo-1.2.7-20.el7_9.x86_64.rpm
zlib-static-1.2.7-20.el7_9.i686.rpm
zlib-static-1.2.7-20.el7_9.x86_64.rpm
Red Hat Enterprise Linux Workstation (v. 7):
Source: zlib-1.2.7-20.el7_9.src.rpm
x86_64:
zlib-1.2.7-20.el7_9.i686.rpm
zlib-1.2.7-20.el7_9.x86_64.rpm
zlib-debuginfo-1.2.7-20.el7_9.i686.rpm
zlib-debuginfo-1.2.7-20.el7_9.x86_64.rpm
zlib-devel-1.2.7-20.el7_9.i686.rpm
zlib-devel-1.2.7-20.el7_9.x86_64.rpm
Red Hat Enterprise Linux Workstation Optional (v. 7):
x86_64:
minizip-1.2.7-20.el7_9.i686.rpm
minizip-1.2.7-20.el7_9.x86_64.rpm
minizip-devel-1.2.7-20.el7_9.i686.rpm
minizip-devel-1.2.7-20.el7_9.x86_64.rpm
zlib-debuginfo-1.2.7-20.el7_9.i686.rpm
zlib-debuginfo-1.2.7-20.el7_9.x86_64.rpm
zlib-static-1.2.7-20.el7_9.i686.rpm
zlib-static-1.2.7-20.el7_9.x86_64.rpm
These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
- References:
https://access.redhat.com/security/cve/CVE-2018-25032
https://access.redhat.com/security/updates/classification/#important
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2022 Red Hat, Inc.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1
iQIVAwUBYnw1+9zjgjWX9erEAQhePQ//UtM5hhHSzE0ZKC4Z9/u34cRNcqIc5nmT
opYgZo/hPWp5kkh0R9/tAMWAEa7olBzfzsxulOkm2I65R6k/+fLKaXeQOcwMAkSH
gyKBU2TG3+ziT1BrsXBDWAse9mqU+zX7t9rDUZ8u9g30qr/9xrDtrVb0b4Sypslf
K5CEMHoskqCnHdl2j+vPOyOCwq8KxLMPBAYtY/X51JwLtT8thvmCQrPWANvWjoSq
nDhdVsWpBtPNnsgBqg8Jv+9YhEHJTaa3wVPVorzgP2Bo4W8gmiiukSK9Sv3zcCTu
lJnSolqBBU7NmGdQooPrUlUoqJUKXfFXgu+mjybTym8Fdoe0lnxLFSvoEeAr9Swo
XlFeBrOR8F5SO16tYKCAtyhafmJn+8MisTPN0NmUD7VLAJ0FzhEk48dlLl5+EoAy
AlxiuqgKh+O1zFRN80RSvYkPjWKU6KyK8QJaSKdroGcMjNkjhZ3cM6bpVP6V75F3
CcLZWlP5d18qgfL/SRZo8NG23h+Fzz6FWNSQQZse27NS3BZsM4PVsHF5oaRN3Vij
AFwDmIhHL7pE8pZaWck7qevt3i/hwzwYWV5VYYRgkYQIvveE0WUM/kqm+wqlU50Y
bbpALcI5h9b83JgteVQG0hf9h5avYzgGrfbj+FOEVPPN86K37ILDvT45VcSjf1vO
4nrrtbUzAhY=
=Pgu3
-----END PGP SIGNATURE-----
--
RHSA-announce mailing list
RHSA-announce@redhat.com
https://listman.redhat.com/mailman/listinfo/rhsa-announce
Show details on source website{ "@context": { "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#", "affected_products": { "@id": "https://www.variotdbs.pl/ref/affected_products" }, "configurations": { "@id": "https://www.variotdbs.pl/ref/configurations" }, "credits": { "@id": "https://www.variotdbs.pl/ref/credits" }, "cvss": { "@id": "https://www.variotdbs.pl/ref/cvss/" }, "description": { "@id": "https://www.variotdbs.pl/ref/description/" }, "exploit_availability": { "@id": "https://www.variotdbs.pl/ref/exploit_availability/" }, "external_ids": { "@id": "https://www.variotdbs.pl/ref/external_ids/" }, "iot": { "@id": "https://www.variotdbs.pl/ref/iot/" }, "iot_taxonomy": { "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/" }, "patch": { "@id": "https://www.variotdbs.pl/ref/patch/" }, "problemtype_data": { "@id": "https://www.variotdbs.pl/ref/problemtype_data/" }, "references": { "@id": "https://www.variotdbs.pl/ref/references/" }, "sources": { "@id": "https://www.variotdbs.pl/ref/sources/" }, "sources_release_date": { "@id": "https://www.variotdbs.pl/ref/sources_release_date/" }, "sources_update_date": { "@id": "https://www.variotdbs.pl/ref/sources_update_date/" }, "threat_type": { "@id": "https://www.variotdbs.pl/ref/threat_type/" }, "title": { "@id": "https://www.variotdbs.pl/ref/title/" }, "type": { "@id": "https://www.variotdbs.pl/ref/type/" } }, "@id": "https://www.variotdbs.pl/vuln/VAR-202203-1690", "affected_products": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/affected_products#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "model": "mariadb", "scope": "lt", "trust": 1.0, "vendor": "mariadb", "version": "10.7.5" }, { "model": "zulu", "scope": "eq", "trust": 1.0, "vendor": "azul", "version": "13.46" }, { "model": "linux", "scope": "eq", "trust": 1.0, "vendor": "debian", "version": "10.0" 
}, { "model": "linux", "scope": "eq", "trust": 1.0, "vendor": "debian", "version": "11.0" }, { "model": "zulu", "scope": "eq", "trust": 1.0, "vendor": "azul", "version": "17.32" }, { "model": "hci compute node", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "management services for element software", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "mariadb", "scope": "lt", "trust": 1.0, "vendor": "mariadb", "version": "10.9.2" }, { "model": "oncommand workflow automation", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "zulu", "scope": "eq", "trust": 1.0, "vendor": "azul", "version": "11.54" }, { "model": "python", "scope": "lt", "trust": 1.0, "vendor": "python", "version": "3.9.13" }, { "model": "scalance sc632-2c", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "3.0" }, { "model": "h300s", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "mariadb", "scope": "lt", "trust": 1.0, "vendor": "mariadb", "version": "10.5.17" }, { "model": "ontap select deploy administration utility", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "scalance sc646-2c", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "3.0" }, { "model": "mariadb", "scope": "gte", "trust": 1.0, "vendor": "mariadb", "version": "10.8.0" }, { "model": "mariadb", "scope": "lt", "trust": 1.0, "vendor": "mariadb", "version": "10.3.36" }, { "model": "zulu", "scope": "eq", "trust": 1.0, "vendor": "azul", "version": "6.45" }, { "model": "gotoassist", "scope": "lt", "trust": 1.0, "vendor": "goto", "version": "11.9.18" }, { "model": "mariadb", "scope": "gte", "trust": 1.0, "vendor": "mariadb", "version": "10.4.0" }, { "model": "mariadb", "scope": "gte", "trust": 1.0, "vendor": "mariadb", "version": "10.5.0" }, { "model": "h500s", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "python", "scope": "gte", 
"trust": 1.0, "vendor": "python", "version": "3.7.0" }, { "model": "scalance sc626-2c", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "3.0" }, { "model": "python", "scope": "lt", "trust": 1.0, "vendor": "python", "version": "3.7.14" }, { "model": "fedora", "scope": "eq", "trust": 1.0, "vendor": "fedoraproject", "version": "35" }, { "model": "scalance sc636-2c", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "3.0" }, { "model": "h410c", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "macos", "scope": "gte", "trust": 1.0, "vendor": "apple", "version": "11.0" }, { "model": "macos", "scope": "lt", "trust": 1.0, "vendor": "apple", "version": "11.6.6" }, { "model": "e-series santricity os controller", "scope": "lte", "trust": 1.0, "vendor": "netapp", "version": "11.70.2" }, { "model": "python", "scope": "gte", "trust": 1.0, "vendor": "python", "version": "3.8.0" }, { "model": "mariadb", "scope": "lt", "trust": 1.0, "vendor": "mariadb", "version": "10.8.4" }, { "model": "h700s", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "mac os x", "scope": "gte", "trust": 1.0, "vendor": "apple", "version": "10.15" }, { "model": "mac os x", "scope": "lt", "trust": 1.0, "vendor": "apple", "version": "10.15.7" }, { "model": "python", "scope": "lt", "trust": 1.0, "vendor": "python", "version": "3.8.14" }, { "model": "zlib", "scope": "lt", "trust": 1.0, "vendor": "zlib", "version": "1.2.12" }, { "model": "scalance sc622-2c", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "3.0" }, { "model": "linux", "scope": "eq", "trust": 1.0, "vendor": "debian", "version": "9.0" }, { "model": "fedora", "scope": "eq", "trust": 1.0, "vendor": "fedoraproject", "version": "34" }, { "model": "macos", "scope": "gte", "trust": 1.0, "vendor": "apple", "version": "12.0.0" }, { "model": "mariadb", "scope": "gte", "trust": 1.0, "vendor": "mariadb", "version": "10.3.0" }, { "model": "mariadb", "scope": "gte", 
"trust": 1.0, "vendor": "mariadb", "version": "10.6.0" }, { "model": "zulu", "scope": "eq", "trust": 1.0, "vendor": "azul", "version": "15.38" }, { "model": "scalance sc642-2c", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "3.0" }, { "model": "active iq unified manager", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "python", "scope": "lt", "trust": 1.0, "vendor": "python", "version": "3.10.5" }, { "model": "mac os x", "scope": "eq", "trust": 1.0, "vendor": "apple", "version": "10.15.7" }, { "model": "mariadb", "scope": "lt", "trust": 1.0, "vendor": "mariadb", "version": "10.6.9" }, { "model": "zulu", "scope": "eq", "trust": 1.0, "vendor": "azul", "version": "7.52" }, { "model": "h410s", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "macos", "scope": "lt", "trust": 1.0, "vendor": "apple", "version": "12.4" }, { "model": "mariadb", "scope": "gte", "trust": 1.0, "vendor": "mariadb", "version": "10.9.0" }, { "model": "zulu", "scope": "eq", "trust": 1.0, "vendor": "azul", "version": "8.60" }, { "model": "mariadb", "scope": "gte", "trust": 1.0, "vendor": "mariadb", "version": "10.7.0" }, { "model": "fedora", "scope": "eq", "trust": 1.0, "vendor": "fedoraproject", "version": "36" }, { "model": "python", "scope": "gte", "trust": 1.0, "vendor": "python", "version": "3.10.0" }, { "model": "mariadb", "scope": "lt", "trust": 1.0, "vendor": "mariadb", "version": "10.4.26" }, { "model": "e-series santricity os controller", "scope": "gte", "trust": 1.0, "vendor": "netapp", "version": "11.0.0" }, { "model": "python", "scope": "gte", "trust": 1.0, "vendor": "python", "version": "3.9.0" } ], "sources": [ { "db": "NVD", "id": "CVE-2018-25032" } ] }, "configurations": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/configurations#", "children": { "@container": "@list" }, "cpe_match": { "@container": "@list" }, "data": { "@container": "@list" }, "nodes": { "@container": "@list" } }, "data": [ { 
"CVE_data_version": "4.0", "nodes": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:zlib:zlib:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "1.2.12", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:debian:debian_linux:9.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:debian:debian_linux:10.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:debian:debian_linux:11.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:34:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:35:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:36:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:apple:mac_os_x:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "10.15.7", "versionStartIncluding": "10.15", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.15.7:security_update_2020-005:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.15.7:security_update_2020-007:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.15.7:-:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.15.7:security_update_2020-001:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.15.7:security_update_2020:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.15.7:security_update_2021-001:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.15.7:security_update_2021-002:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { 
"cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.15.7:security_update_2021-003:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.15.7:security_update_2021-006:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.15.7:security_update_2021-008:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.15.7:security_update_2021-007:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.15.7:security_update_2022-002:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.15.7:security_update_2022-001:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:apple:macos:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "11.6.6", "versionStartIncluding": "11.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:apple:mac_os_x:10.15.7:security_update_2022-003:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:apple:macos:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "12.4", "versionStartIncluding": "12.0.0", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:python:python:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "3.10.5", "versionStartIncluding": "3.10.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:python:python:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "3.9.13", "versionStartIncluding": "3.9.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:python:python:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "3.7.14", "versionStartIncluding": "3.7.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:python:python:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "3.8.14", "versionStartIncluding": "3.8.0", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": 
"cpe:2.3:a:mariadb:mariadb:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "10.3.36", "versionStartIncluding": "10.3.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:mariadb:mariadb:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "10.4.26", "versionStartIncluding": "10.4.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:mariadb:mariadb:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "10.5.17", "versionStartIncluding": "10.5.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:mariadb:mariadb:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "10.6.9", "versionStartIncluding": "10.6.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:mariadb:mariadb:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "10.7.5", "versionStartIncluding": "10.7.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:mariadb:mariadb:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "10.8.4", "versionStartIncluding": "10.8.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:mariadb:mariadb:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "10.9.2", "versionStartIncluding": "10.9.0", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:netapp:oncommand_workflow_automation:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:ontap_select_deploy_administration_utility:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:active_iq_unified_manager:-:*:*:*:*:vmware_vsphere:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:h:netapp:hci_compute_node:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:management_services_for_element_software:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:e-series_santricity_os_controller:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "11.70.2", "versionStartIncluding": "11.0.0", "vulnerable": true } 
], "operator": "OR" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h300s_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:h300s:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h500s_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:h500s:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h700s_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:h700s:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h410s_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:h410s:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h410c_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:h410c:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:scalance_sc622-2c_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], 
"versionEndExcluding": "3.0", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:scalance_sc622-2c:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:scalance_sc626-2c_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "3.0", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:scalance_sc626-2c:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:scalance_sc632-2c_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "3.0", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:scalance_sc632-2c:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:scalance_sc636-2c_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "3.0", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:scalance_sc636-2c:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:scalance_sc642-2c_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "3.0", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:scalance_sc642-2c:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], 
"cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:scalance_sc646-2c_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "3.0", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:scalance_sc646-2c:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:azul:zulu:7.52:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:azul:zulu:8.60:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:azul:zulu:11.54:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:azul:zulu:13.46:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:azul:zulu:15.38:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:azul:zulu:17.32:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:azul:zulu:6.45:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:goto:gotoassist:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "11.9.18", "vulnerable": true } ], "operator": "OR" } ] } ], "sources": [ { "db": "NVD", "id": "CVE-2018-25032" } ] }, "credits": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/credits#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Red Hat", "sources": [ { "db": "PACKETSTORM", "id": "167381" }, { "db": "PACKETSTORM", "id": "167140" }, { "db": "PACKETSTORM", "id": "167122" }, { "db": "PACKETSTORM", "id": "166946" }, { "db": "PACKETSTORM", "id": "166970" }, { "db": "PACKETSTORM", "id": "167225" }, { "db": "PACKETSTORM", "id": "169782" }, { "db": "PACKETSTORM", "id": "167568" }, { "db": "PACKETSTORM", "id": "167133" } ], "trust": 0.9 }, "cve": "CVE-2018-25032", "cvss": { 
"@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2" }, "cvssV3": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, "severity": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": "https://www.variotdbs.pl/ref/cvss/severity" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "cvssV2": [ { "acInsufInfo": false, "accessComplexity": "LOW", "accessVector": "NETWORK", "authentication": "NONE", "author": "NVD", "availabilityImpact": "PARTIAL", "baseScore": 5.0, "confidentialityImpact": "NONE", "exploitabilityScore": 10.0, "impactScore": 2.9, "integrityImpact": "NONE", "obtainAllPrivilege": false, "obtainOtherPrivilege": false, "obtainUserPrivilege": false, "severity": "MEDIUM", "trust": 1.0, "userInteractionRequired": false, "vectorString": "AV:N/AC:L/Au:N/C:N/I:N/A:P", "version": "2.0" }, { "accessComplexity": "LOW", "accessVector": "NETWORK", "authentication": "NONE", "author": "VULHUB", "availabilityImpact": "PARTIAL", "baseScore": 5.0, "confidentialityImpact": "NONE", "exploitabilityScore": 10.0, "id": "VHN-418557", "impactScore": 2.9, "integrityImpact": "NONE", "severity": "MEDIUM", "trust": 0.1, "vectorString": "AV:N/AC:L/AU:N/C:N/I:N/A:P", "version": "2.0" } ], "cvssV3": [ { "attackComplexity": "LOW", "attackVector": "NETWORK", "author": "NVD", "availabilityImpact": "HIGH", "baseScore": 7.5, "baseSeverity": "HIGH", "confidentialityImpact": "NONE", "exploitabilityScore": 3.9, "impactScore": 3.6, "integrityImpact": "NONE", "privilegesRequired": "NONE", "scope": "UNCHANGED", "trust": 1.0, "userInteraction": "NONE", "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H", "version": 
"3.1" } ], "severity": [ { "author": "NVD", "id": "CVE-2018-25032", "trust": 1.0, "value": "HIGH" }, { "author": "VULHUB", "id": "VHN-418557", "trust": 0.1, "value": "MEDIUM" } ] } ], "sources": [ { "db": "VULHUB", "id": "VHN-418557" }, { "db": "NVD", "id": "CVE-2018-25032" } ] }, "description": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "zlib before 1.2.12 allows memory corruption when deflating (i.e., when compressing) if the input has many distant matches. ==========================================================================\nUbuntu Security Notice USN-5359-2\nJune 13, 2022\n\nrsync vulnerability\n==========================================================================\n\nA security issue affects these releases of Ubuntu and its derivatives:\n\n- Ubuntu 16.04 ESM\n\nSummary:\n\nrsync could be made to crash or run programs if it received\nspecially crafted network traffic. \n\nSoftware Description:\n- rsync: fast, versatile, remote (and local) file-copying tool\n\nDetails:\n\nUSN-5359-1 fixed vulnerabilities in rsync. \n\nOriginal advisory details:\n\n Danilo Ramos discovered that rsync incorrectly handled memory when\n performing certain zlib deflating operations. An attacker could use this\n issue to cause rsync to crash, resulting in a denial of service, or\n possibly execute arbitrary code. \n\nUpdate instructions:\n\nThe problem can be corrected by updating your system to the following\npackage versions:\n\nUbuntu 16.04 ESM:\n rsync 3.1.1-3ubuntu1.3+esm1\n\nIn general, a standard system update will make all the necessary changes. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n2004133 - CVE-2021-37136 netty-codec: Bzip2Decoder doesn\u0027t allow setting size restrictions for decompressed data\n2004135 - CVE-2021-37137 netty-codec: SnappyFrameDecoder doesn\u0027t restrict chunk length and may buffer skippable chunks in an unnecessary way\n2031958 - CVE-2021-43797 netty: control chars in header names may lead to HTTP request smuggling\n2045880 - CVE-2022-21698 prometheus/client_golang: Denial of service using InstrumentHandlerCounter\n2058404 - CVE-2022-0759 kubeclient: kubeconfig parsing error can lead to MITM attacks\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nLOG-2334 - [release-5.3] Events listing out of order in Kibana 6.8.1\nLOG-2450 - http.max_header_size set to 128kb causes communication with elasticsearch to stop working\nLOG-2481 - EO shouldn\u0027t grant cluster-wide permission to system:serviceaccount:openshift-monitoring:prometheus-k8s when ES cluster is deployed. [openshift-logging 5.3]\n\n6. This update provides security fixes, bug\nfixes, and updates container images. Description:\n\nRed Hat Advanced Cluster Management for Kubernetes 2.4.4 images\n\nRed Hat Advanced Cluster Management for Kubernetes provides the\ncapabilities to address common challenges that administrators and site\nreliability engineers face as they work across a range of public and\nprivate cloud environments. Clusters and applications are all visible and\nmanaged from a single console\u2014with security policy built in. \n\nThis advisory contains the container images for Red Hat Advanced Cluster\nManagement for Kubernetes, which fix several bugs. 
See the following\nRelease Notes documentation, which will be updated shortly for this\nrelease, for additional details about this release:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.4/html/release_notes/\n\nSecurity fixes:\n\n* Vm2: vulnerable to Sandbox Bypass (CVE-2021-23555)\n\n* Golang.org/x/crypto: empty plaintext packet causes panic (CVE-2021-43565)\n\n* Follow-redirects: Exposure of Private Personal Information to an\nUnauthorized Actor (CVE-2022-0155)\n\n* Node-fetch: exposure of sensitive information to an unauthorized actor\n(CVE-2022-0235)\n\n* Follow-redirects: Exposure of Sensitive Information via Authorization\nHeader leak (CVE-2022-0536)\n\n* Urijs: Authorization Bypass Through User-Controlled Key (CVE-2022-0613)\n\n* Nconf: Prototype pollution in memory store (CVE-2022-21803)\n\n* Nats-server: misusing the \"dynamically provisioned sandbox accounts\"\nfeature authenticated user can obtain the privileges of the System account\n(CVE-2022-24450)\n\n* Urijs: Leading white space bypasses protocol validation (CVE-2022-24723)\n\n* Node-forge: Signature verification leniency in checking `digestAlgorithm`\nstructure can lead to signature forgery (CVE-2022-24771)\n\n* Node-forge: Signature verification failing to check tailing garbage bytes\ncan lead to signature forgery (CVE-2022-24772)\n\n* Node-forge: Signature verification leniency in checking `DigestInfo`\nstructure (CVE-2022-24773)\n\n* Cross-fetch: Exposure of Private Personal Information to an Unauthorized\nActor (CVE-2022-1365)\n\n* Moment.js: Path traversal in moment.locale (CVE-2022-24785)\n\nBug fixes:\n\n* Failed ClusterDeployment validation errors do not surface through the\nClusterPool UI (Bugzilla #1995380)\n\n* Agents wrong validation failure on failing to fetch image needed for\ninstallation (Bugzilla #2008583)\n\n* Fix catalogsource name (Bugzilla #2038250)\n\n* When the ocp console operator is disable on the managed cluster, 
the\ncluster claims failed to update (Bugzilla #2057761)\n\n* Multicluster-operators-hub-subscription OOMKilled (Bugzilla #2053308)\n\n* RHACM 2.4.1 Console becomes unstable and refuses login after one hour\n(Bugzilla #2061958)\n\n* RHACM 2.4.4 images (Bugzilla #2077548)\n\n3. Bugs fixed (https://bugzilla.redhat.com/):\n\n1995380 - failed ClusterDeployment validation errors do not surface through the ClusterPool UI\n2008583 - Agents wrong validation failure on failing to fetch image needed for installation\n2030787 - CVE-2021-43565 golang.org/x/crypto: empty plaintext packet causes panic\n2038250 - Fix catalogsource name\n2044556 - CVE-2022-0155 follow-redirects: Exposure of Private Personal Information to an Unauthorized Actor\n2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor\n2052573 - CVE-2022-24450 nats-server: misusing the \"dynamically provisioned sandbox accounts\" feature authenticated user can obtain the privileges of the System account\n2053259 - CVE-2022-0536 follow-redirects: Exposure of Sensitive Information via Authorization Header leak\n2053308 - multicluster-operators-hub-subscription OOMKilled\n2054114 - CVE-2021-23555 vm2: vulnerable to Sandbox Bypass\n2055496 - CVE-2022-0613 urijs: Authorization Bypass Through User-Controlled Key\n2057761 - When the ocp console operator is disable on the managed cluster, the cluster claims failed to update\n2058295 - ACM doesn\u0027t accept secret type opaque for cluster api certificate\n2061958 - RHACM 2.4.1 Console becomes unstable and refuses login after one hour\n2062370 - CVE-2022-24723 urijs: Leading white space bypasses protocol validation\n2067387 - CVE-2022-24771 node-forge: Signature verification leniency in checking `digestAlgorithm` structure can lead to signature forgery\n2067458 - CVE-2022-24772 node-forge: Signature verification failing to check tailing garbage bytes can lead to signature forgery\n2067461 - CVE-2022-24773 node-forge: Signature 
verification leniency in checking `DigestInfo` structure\n2072009 - CVE-2022-24785 Moment.js: Path traversal in moment.locale\n2074689 - CVE-2022-21803 nconf: Prototype pollution in memory store\n2076133 - CVE-2022-1365 cross-fetch: Exposure of Private Personal Information to an Unauthorized Actor\n2077548 - RHACM 2.4.4 images\n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n2081686 - CVE-2022-29165 argocd: ArgoCD will blindly trust JWT claims if anonymous access is enabled\n2081689 - CVE-2022-24905 argocd: Login screen allows message spoofing if SSO is enabled\n2081691 - CVE-2022-24904 argocd: Symlink following allows leaking out-of-bound manifests and JSON files from Argo CD repo-server\n\n5. 8) - noarch\n\n3. \n\nAdditional Changes:\n\nFor detailed information on changes in this release, see the Red Hat\nEnterprise Linux 8.7 Release Notes linked from the References section. Description:\n\nRed Hat Openshift GitOps is a declarative way to implement continuous\ndeployment for cloud native applications. \n\nSecurity Fix(es):\n\n* argocd: vulnerable to a variety of attacks when an SSO login is initiated\nfrom the Argo CD CLI or the UI. Bugs fixed (https://bugzilla.redhat.com/):\n\n2096278 - CVE-2022-31035 argocd: cross-site scripting (XSS) allow a malicious user to inject a javascript link in the UI\n2096282 - CVE-2022-31034 argocd: vulnerable to a variety of attacks when an SSO login is initiated from the Argo CD CLI or the UI. \n2096283 - CVE-2022-31016 argocd: vulnerable to an uncontrolled memory consumption bug\n2096291 - CVE-2022-31036 argocd: vulnerable to a symlink following bug allowing a malicious user with repository write access\n\n5. 
-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n==================================================================== \nRed Hat Security Advisory\n\nSynopsis: Important: zlib security update\nAdvisory ID: RHSA-2022:2213-01\nProduct: Red Hat Enterprise Linux\nAdvisory URL: https://access.redhat.com/errata/RHSA-2022:2213\nIssue date: 2022-05-11\nCVE Names: CVE-2018-25032\n====================================================================\n1. Summary:\n\nAn update for zlib is now available for Red Hat Enterprise Linux 7. \n\nRed Hat Product Security has rated this update as having a security impact\nof Important. A Common Vulnerability Scoring System (CVSS) base score,\nwhich gives a detailed severity rating, is available for each vulnerability\nfrom the CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRed Hat Enterprise Linux Client (v. 7) - x86_64\nRed Hat Enterprise Linux Client Optional (v. 7) - x86_64\nRed Hat Enterprise Linux ComputeNode (v. 7) - x86_64\nRed Hat Enterprise Linux ComputeNode Optional (v. 7) - x86_64\nRed Hat Enterprise Linux Server (v. 7) - ppc64, ppc64le, s390x, x86_64\nRed Hat Enterprise Linux Server Optional (v. 7) - ppc64, ppc64le, s390x, x86_64\nRed Hat Enterprise Linux Workstation (v. 7) - x86_64\nRed Hat Enterprise Linux Workstation Optional (v. 7) - x86_64\n\n3. Description:\n\nThe zlib packages provide a general-purpose lossless data compression\nlibrary that is used by many different programs. \n\nSecurity Fix(es):\n\n* zlib: A flaw found in zlib when compressing (not decompressing) certain\ninputs (CVE-2018-25032)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\n4. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n5. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n2067945 - CVE-2018-25032 zlib: A flaw found in zlib when compressing (not decompressing) certain inputs\n\n6. Package List:\n\nRed Hat Enterprise Linux Client (v. 7):\n\nSource:\nzlib-1.2.7-20.el7_9.src.rpm\n\nx86_64:\nzlib-1.2.7-20.el7_9.i686.rpm\nzlib-1.2.7-20.el7_9.x86_64.rpm\nzlib-debuginfo-1.2.7-20.el7_9.i686.rpm\nzlib-debuginfo-1.2.7-20.el7_9.x86_64.rpm\n\nRed Hat Enterprise Linux Client Optional (v. 7):\n\nx86_64:\nminizip-1.2.7-20.el7_9.i686.rpm\nminizip-1.2.7-20.el7_9.x86_64.rpm\nminizip-devel-1.2.7-20.el7_9.i686.rpm\nminizip-devel-1.2.7-20.el7_9.x86_64.rpm\nzlib-debuginfo-1.2.7-20.el7_9.i686.rpm\nzlib-debuginfo-1.2.7-20.el7_9.x86_64.rpm\nzlib-devel-1.2.7-20.el7_9.i686.rpm\nzlib-devel-1.2.7-20.el7_9.x86_64.rpm\nzlib-static-1.2.7-20.el7_9.i686.rpm\nzlib-static-1.2.7-20.el7_9.x86_64.rpm\n\nRed Hat Enterprise Linux ComputeNode (v. 7):\n\nSource:\nzlib-1.2.7-20.el7_9.src.rpm\n\nx86_64:\nzlib-1.2.7-20.el7_9.i686.rpm\nzlib-1.2.7-20.el7_9.x86_64.rpm\nzlib-debuginfo-1.2.7-20.el7_9.i686.rpm\nzlib-debuginfo-1.2.7-20.el7_9.x86_64.rpm\n\nRed Hat Enterprise Linux ComputeNode Optional (v. 7):\n\nx86_64:\nminizip-1.2.7-20.el7_9.i686.rpm\nminizip-1.2.7-20.el7_9.x86_64.rpm\nminizip-devel-1.2.7-20.el7_9.i686.rpm\nminizip-devel-1.2.7-20.el7_9.x86_64.rpm\nzlib-debuginfo-1.2.7-20.el7_9.i686.rpm\nzlib-debuginfo-1.2.7-20.el7_9.x86_64.rpm\nzlib-devel-1.2.7-20.el7_9.i686.rpm\nzlib-devel-1.2.7-20.el7_9.x86_64.rpm\nzlib-static-1.2.7-20.el7_9.i686.rpm\nzlib-static-1.2.7-20.el7_9.x86_64.rpm\n\nRed Hat Enterprise Linux Server (v. 
7):\n\nSource:\nzlib-1.2.7-20.el7_9.src.rpm\n\nppc64:\nzlib-1.2.7-20.el7_9.ppc.rpm\nzlib-1.2.7-20.el7_9.ppc64.rpm\nzlib-debuginfo-1.2.7-20.el7_9.ppc.rpm\nzlib-debuginfo-1.2.7-20.el7_9.ppc64.rpm\nzlib-devel-1.2.7-20.el7_9.ppc.rpm\nzlib-devel-1.2.7-20.el7_9.ppc64.rpm\n\nppc64le:\nzlib-1.2.7-20.el7_9.ppc64le.rpm\nzlib-debuginfo-1.2.7-20.el7_9.ppc64le.rpm\nzlib-devel-1.2.7-20.el7_9.ppc64le.rpm\n\ns390x:\nzlib-1.2.7-20.el7_9.s390.rpm\nzlib-1.2.7-20.el7_9.s390x.rpm\nzlib-debuginfo-1.2.7-20.el7_9.s390.rpm\nzlib-debuginfo-1.2.7-20.el7_9.s390x.rpm\nzlib-devel-1.2.7-20.el7_9.s390.rpm\nzlib-devel-1.2.7-20.el7_9.s390x.rpm\n\nx86_64:\nzlib-1.2.7-20.el7_9.i686.rpm\nzlib-1.2.7-20.el7_9.x86_64.rpm\nzlib-debuginfo-1.2.7-20.el7_9.i686.rpm\nzlib-debuginfo-1.2.7-20.el7_9.x86_64.rpm\nzlib-devel-1.2.7-20.el7_9.i686.rpm\nzlib-devel-1.2.7-20.el7_9.x86_64.rpm\n\nRed Hat Enterprise Linux Server Optional (v. 7):\n\nppc64:\nminizip-1.2.7-20.el7_9.ppc.rpm\nminizip-1.2.7-20.el7_9.ppc64.rpm\nminizip-devel-1.2.7-20.el7_9.ppc.rpm\nminizip-devel-1.2.7-20.el7_9.ppc64.rpm\nzlib-debuginfo-1.2.7-20.el7_9.ppc.rpm\nzlib-debuginfo-1.2.7-20.el7_9.ppc64.rpm\nzlib-static-1.2.7-20.el7_9.ppc.rpm\nzlib-static-1.2.7-20.el7_9.ppc64.rpm\n\nppc64le:\nminizip-1.2.7-20.el7_9.ppc64le.rpm\nminizip-devel-1.2.7-20.el7_9.ppc64le.rpm\nzlib-debuginfo-1.2.7-20.el7_9.ppc64le.rpm\nzlib-static-1.2.7-20.el7_9.ppc64le.rpm\n\ns390x:\nminizip-1.2.7-20.el7_9.s390.rpm\nminizip-1.2.7-20.el7_9.s390x.rpm\nminizip-devel-1.2.7-20.el7_9.s390.rpm\nminizip-devel-1.2.7-20.el7_9.s390x.rpm\nzlib-debuginfo-1.2.7-20.el7_9.s390.rpm\nzlib-debuginfo-1.2.7-20.el7_9.s390x.rpm\nzlib-static-1.2.7-20.el7_9.s390.rpm\nzlib-static-1.2.7-20.el7_9.s390x.rpm\n\nx86_64:\nminizip-1.2.7-20.el7_9.i686.rpm\nminizip-1.2.7-20.el7_9.x86_64.rpm\nminizip-devel-1.2.7-20.el7_9.i686.rpm\nminizip-devel-1.2.7-20.el7_9.x86_64.rpm\nzlib-debuginfo-1.2.7-20.el7_9.i686.rpm\nzlib-debuginfo-1.2.7-20.el7_9.x86_64.rpm\nzlib-static-1.2.7-20.el7_9.i686.rpm\nzlib-static-1.2.7-20.el7_9.x8
6_64.rpm\n\nRed Hat Enterprise Linux Workstation (v. 7):\n\nSource:\nzlib-1.2.7-20.el7_9.src.rpm\n\nx86_64:\nzlib-1.2.7-20.el7_9.i686.rpm\nzlib-1.2.7-20.el7_9.x86_64.rpm\nzlib-debuginfo-1.2.7-20.el7_9.i686.rpm\nzlib-debuginfo-1.2.7-20.el7_9.x86_64.rpm\nzlib-devel-1.2.7-20.el7_9.i686.rpm\nzlib-devel-1.2.7-20.el7_9.x86_64.rpm\n\nRed Hat Enterprise Linux Workstation Optional (v. 7):\n\nx86_64:\nminizip-1.2.7-20.el7_9.i686.rpm\nminizip-1.2.7-20.el7_9.x86_64.rpm\nminizip-devel-1.2.7-20.el7_9.i686.rpm\nminizip-devel-1.2.7-20.el7_9.x86_64.rpm\nzlib-debuginfo-1.2.7-20.el7_9.i686.rpm\nzlib-debuginfo-1.2.7-20.el7_9.x86_64.rpm\nzlib-static-1.2.7-20.el7_9.i686.rpm\nzlib-static-1.2.7-20.el7_9.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. References:\n\nhttps://access.redhat.com/security/cve/CVE-2018-25032\nhttps://access.redhat.com/security/updates/classification/#important\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. 
\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYnw1+9zjgjWX9erEAQhePQ//UtM5hhHSzE0ZKC4Z9/u34cRNcqIc5nmT\nopYgZo/hPWp5kkh0R9/tAMWAEa7olBzfzsxulOkm2I65R6k/+fLKaXeQOcwMAkSH\ngyKBU2TG3+ziT1BrsXBDWAse9mqU+zX7t9rDUZ8u9g30qr/9xrDtrVb0b4Sypslf\nK5CEMHoskqCnHdl2j+vPOyOCwq8KxLMPBAYtY/X51JwLtT8thvmCQrPWANvWjoSq\nnDhdVsWpBtPNnsgBqg8Jv+9YhEHJTaa3wVPVorzgP2Bo4W8gmiiukSK9Sv3zcCTu\nlJnSolqBBU7NmGdQooPrUlUoqJUKXfFXgu+mjybTym8Fdoe0lnxLFSvoEeAr9Swo\nXlFeBrOR8F5SO16tYKCAtyhafmJn+8MisTPN0NmUD7VLAJ0FzhEk48dlLl5+EoAy\nAlxiuqgKh+O1zFRN80RSvYkPjWKU6KyK8QJaSKdroGcMjNkjhZ3cM6bpVP6V75F3\nCcLZWlP5d18qgfL/SRZo8NG23h+Fzz6FWNSQQZse27NS3BZsM4PVsHF5oaRN3Vij\nAFwDmIhHL7pE8pZaWck7qevt3i/hwzwYWV5VYYRgkYQIvveE0WUM/kqm+wqlU50Y\nbbpALcI5h9b83JgteVQG0hf9h5avYzgGrfbj+FOEVPPN86K37ILDvT45VcSjf1vO\n4nrrtbUzAhY=Pgu3\n-----END PGP SIGNATURE-----\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n", "sources": [ { "db": "NVD", "id": "CVE-2018-25032" }, { "db": "VULHUB", "id": "VHN-418557" }, { "db": "PACKETSTORM", "id": "167486" }, { "db": "PACKETSTORM", "id": "167381" }, { "db": "PACKETSTORM", "id": "167140" }, { "db": "PACKETSTORM", "id": "167122" }, { "db": "PACKETSTORM", "id": "166946" }, { "db": "PACKETSTORM", "id": "166970" }, { "db": "PACKETSTORM", "id": "167225" }, { "db": "PACKETSTORM", "id": "169782" }, { "db": "PACKETSTORM", "id": "167568" }, { "db": "PACKETSTORM", "id": "167133" } ], "trust": 1.89 }, "exploit_availability": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/exploit_availability#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "reference": "https://www.scap.org.cn/vuln/vhn-418557", "trust": 0.1, "type": "unknown" } ], "sources": [ { "db": "VULHUB", "id": "VHN-418557" } ] }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": { "@container": "@list" 
}, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "NVD", "id": "CVE-2018-25032", "trust": 2.1 }, { "db": "OPENWALL", "id": "OSS-SECURITY/2022/03/28/3", "trust": 1.1 }, { "db": "OPENWALL", "id": "OSS-SECURITY/2022/03/26/1", "trust": 1.1 }, { "db": "OPENWALL", "id": "OSS-SECURITY/2022/03/28/1", "trust": 1.1 }, { "db": "OPENWALL", "id": "OSS-SECURITY/2022/03/24/1", "trust": 1.1 }, { "db": "OPENWALL", "id": "OSS-SECURITY/2022/03/25/2", "trust": 1.1 }, { "db": "SIEMENS", "id": "SSA-333517", "trust": 1.1 }, { "db": "PACKETSTORM", "id": "167133", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "167381", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "167122", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "167225", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "167140", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "169782", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "166946", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "167568", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "166970", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "167486", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "166552", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "168352", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "168042", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "166967", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "167327", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "167391", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "167400", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "167956", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "167088", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "167142", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "167346", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "171157", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "169897", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "168696", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "167008", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "167602", 
"trust": 0.1 }, { "db": "PACKETSTORM", "id": "167277", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "167330", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "167485", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "167679", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "167334", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "167116", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "167389", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "166563", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "166555", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "167223", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "170003", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "167555", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "168036", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "167224", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "167260", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "167134", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "167364", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "167594", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "167461", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "171152", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "167188", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "167591", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "168011", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "167271", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "167936", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "167138", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "167189", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "167586", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "167186", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "167281", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "169624", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "167470", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "167265", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "168392", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "167119", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "167136", "trust": 0.1 }, { 
"db": "PACKETSTORM", "id": "167674", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "167622", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "167124", "trust": 0.1 }, { "db": "VULHUB", "id": "VHN-418557", "trust": 0.1 } ], "sources": [ { "db": "VULHUB", "id": "VHN-418557" }, { "db": "PACKETSTORM", "id": "167486" }, { "db": "PACKETSTORM", "id": "167381" }, { "db": "PACKETSTORM", "id": "167140" }, { "db": "PACKETSTORM", "id": "167122" }, { "db": "PACKETSTORM", "id": "166946" }, { "db": "PACKETSTORM", "id": "166970" }, { "db": "PACKETSTORM", "id": "167225" }, { "db": "PACKETSTORM", "id": "169782" }, { "db": "PACKETSTORM", "id": "167568" }, { "db": "PACKETSTORM", "id": "167133" }, { "db": "NVD", "id": "CVE-2018-25032" } ] }, "id": "VAR-202203-1690", "iot": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": true, "sources": [ { "db": "VULHUB", "id": "VHN-418557" } ], "trust": 0.6383838399999999 }, "last_update_date": "2024-07-23T19:43:54.586000Z", "problemtype_data": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "problemtype": "CWE-787", "trust": 1.1 } ], "sources": [ { "db": "VULHUB", "id": "VHN-418557" }, { "db": "NVD", "id": "CVE-2018-25032" } ] }, "references": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/references#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 1.1, "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-333517.pdf" }, { "trust": 1.1, "url": "https://security.netapp.com/advisory/ntap-20220729-0004/" }, { "trust": 1.1, "url": "https://github.com/madler/zlib/compare/v1.2.11...v1.2.12" }, { "trust": 1.1, "url": 
"https://security.netapp.com/advisory/ntap-20220526-0009/" }, { "trust": 1.1, "url": "https://support.apple.com/kb/ht213255" }, { "trust": 1.1, "url": "https://support.apple.com/kb/ht213256" }, { "trust": 1.1, "url": "https://support.apple.com/kb/ht213257" }, { "trust": 1.1, "url": "https://www.debian.org/security/2022/dsa-5111" }, { "trust": 1.1, "url": "http://seclists.org/fulldisclosure/2022/may/38" }, { "trust": 1.1, "url": "http://seclists.org/fulldisclosure/2022/may/35" }, { "trust": 1.1, "url": "http://seclists.org/fulldisclosure/2022/may/33" }, { "trust": 1.1, "url": "https://security.gentoo.org/glsa/202210-42" }, { "trust": 1.1, "url": "https://github.com/madler/zlib/commit/5c44459c3b28a9bd3283aaceab7c615f8020c531" }, { "trust": 1.1, "url": "https://github.com/madler/zlib/issues/605" }, { "trust": 1.1, "url": "https://www.openwall.com/lists/oss-security/2022/03/24/1" }, { "trust": 1.1, "url": "https://www.openwall.com/lists/oss-security/2022/03/28/1" }, { "trust": 1.1, "url": "https://www.openwall.com/lists/oss-security/2022/03/28/3" }, { "trust": 1.1, "url": "https://www.oracle.com/security-alerts/cpujul2022.html" }, { "trust": 1.1, "url": "https://lists.debian.org/debian-lts-announce/2022/04/msg00000.html" }, { "trust": 1.1, "url": "https://lists.debian.org/debian-lts-announce/2022/05/msg00008.html" }, { "trust": 1.1, "url": "https://lists.debian.org/debian-lts-announce/2022/09/msg00023.html" }, { "trust": 1.1, "url": "http://www.openwall.com/lists/oss-security/2022/03/25/2" }, { "trust": 1.1, "url": "http://www.openwall.com/lists/oss-security/2022/03/26/1" }, { "trust": 1.0, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25032" }, { "trust": 1.0, "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/dczfijbjtz7cl5qxbfktq22q26vinruf/" }, { "trust": 1.0, "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/df62mvmh3qugmbdcb3dy2erq6ebhtadb/" }, { 
"trust": 1.0, "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/jzzptwryqulaol3aw7rzjnvz2uonxcv4/" }, { "trust": 1.0, "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/ns2d2gfpfgojul4wq3duay7hf4vwq77f/" }, { "trust": 1.0, "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/voknp2l734ael47nrygvzikefoubqy5y/" }, { "trust": 1.0, "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/xokfmsnq5d5wgmalbnbxu3ge442v74wu/" }, { "trust": 0.9, "url": "https://access.redhat.com/security/team/contact/" }, { "trust": 0.9, "url": "https://bugzilla.redhat.com/):" }, { "trust": 0.9, "url": "https://access.redhat.com/security/cve/cve-2018-25032" }, { "trust": 0.9, "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2022-1271" }, { "trust": 0.6, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1271" }, { "trust": 0.5, "url": "https://access.redhat.com/articles/11258" }, { "trust": 0.5, "url": "https://access.redhat.com/security/updates/classification/#important" }, { "trust": 0.5, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1154" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2022-1154" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-25636" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2022-25636" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-4028" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4028" }, { "trust": 0.4, "url": "https://access.redhat.com/security/updates/classification/#moderate" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0778" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-0778" }, { "trust": 0.2, 
"url": "https://access.redhat.com/security/cve/cve-2021-3634" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-24904" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-24905" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3737" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-24904" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-41617" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-29165" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-41617" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3737" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4189" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3634" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-29165" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-4189" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-24905" }, { "trust": 0.2, "url": "https://issues.jboss.org/):" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-43797" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-0759" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-21426" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-21443" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-21476" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-37137" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-21496" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-43797" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-21698" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-21496" }, { "trust": 0.2, "url": 
"https://access.redhat.com/security/cve/cve-2021-37137" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-21434" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-21443" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-37136" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-21434" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-21426" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-37136" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-21476" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0759" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-21698" }, { "trust": 0.2, "url": "https://docs.openshift.com/container-platform/4.7/logging/cluster-logging-upgrading.html" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-21803" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-24785" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0235" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-24723" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-0235" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-24785" }, { "trust": 0.2, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html-single/install/index#installing" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0155" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-0155" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0536" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-4115" }, { "trust": 0.2, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/index" }, { "trust": 0.2, 
"url": "https://access.redhat.com/security/cve/cve-2022-24723" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4115" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-21803" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-0536" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0613" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-0613" }, { "trust": 0.2, "url": "https://access.redhat.com/security/team/key/" }, { "trust": 0.1, "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/voknp2l734ael47nrygvzikefoubqy5y/" }, { "trust": 0.1, "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/jzzptwryqulaol3aw7rzjnvz2uonxcv4/" }, { "trust": 0.1, "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/ns2d2gfpfgojul4wq3duay7hf4vwq77f/" }, { "trust": 0.1, "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/df62mvmh3qugmbdcb3dy2erq6ebhtadb/" }, { "trust": 0.1, "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/dczfijbjtz7cl5qxbfktq22q26vinruf/" }, { "trust": 0.1, "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/xokfmsnq5d5wgmalbnbxu3ge442v74wu/" }, { "trust": 0.1, "url": "https://ubuntu.com/security/notices/usn-5359-1" }, { "trust": 0.1, "url": "https://ubuntu.com/security/notices/usn-5359-2" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:4671" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:2218" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.8/release_notes/ocp-4-8-release-notes.html" }, { "trust": 0.1, "url": 
"https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:2217" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.4/html-single/install/index#installing" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:1681" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-24773" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1365" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-24772" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-24771" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.4/html/release_notes/" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1365" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-24771" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.4/html/release_notes/index" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-24772" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23555" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-24450" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-43565" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-43565" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-24450" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23555" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-24773" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4083" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-4083" }, { "trust": 0.1, 
"url": "https://access.redhat.com/security/cve/cve-2022-0711" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0711" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:1715" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3639" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:4690" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3639" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-25219" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-25219" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.7_release_notes/index" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:7813" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-31036" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-31034" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-31035" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-31034" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-31016" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-31035" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-31016" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-31036" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:5152" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:2213" } ], "sources": [ { "db": "VULHUB", "id": "VHN-418557" }, { "db": "PACKETSTORM", "id": "167486" }, { "db": "PACKETSTORM", "id": "167381" }, { "db": "PACKETSTORM", "id": "167140" }, { "db": "PACKETSTORM", "id": "167122" }, { "db": "PACKETSTORM", "id": 
"166946" }, { "db": "PACKETSTORM", "id": "166970" }, { "db": "PACKETSTORM", "id": "167225" }, { "db": "PACKETSTORM", "id": "169782" }, { "db": "PACKETSTORM", "id": "167568" }, { "db": "PACKETSTORM", "id": "167133" }, { "db": "NVD", "id": "CVE-2018-25032" } ] }, "sources": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "VULHUB", "id": "VHN-418557" }, { "db": "PACKETSTORM", "id": "167486" }, { "db": "PACKETSTORM", "id": "167381" }, { "db": "PACKETSTORM", "id": "167140" }, { "db": "PACKETSTORM", "id": "167122" }, { "db": "PACKETSTORM", "id": "166946" }, { "db": "PACKETSTORM", "id": "166970" }, { "db": "PACKETSTORM", "id": "167225" }, { "db": "PACKETSTORM", "id": "169782" }, { "db": "PACKETSTORM", "id": "167568" }, { "db": "PACKETSTORM", "id": "167133" }, { "db": "NVD", "id": "CVE-2018-25032" } ] }, "sources_release_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2022-03-25T00:00:00", "db": "VULHUB", "id": "VHN-418557" }, { "date": "2022-06-19T16:39:51", "db": "PACKETSTORM", "id": "167486" }, { "date": "2022-06-03T15:43:30", "db": "PACKETSTORM", "id": "167381" }, { "date": "2022-05-12T15:53:27", "db": "PACKETSTORM", "id": "167140" }, { "date": "2022-05-12T15:38:35", "db": "PACKETSTORM", "id": "167122" }, { "date": "2022-05-04T05:42:06", "db": "PACKETSTORM", "id": "166946" }, { "date": "2022-05-05T17:33:41", "db": "PACKETSTORM", "id": "166970" }, { "date": "2022-05-19T15:53:12", "db": "PACKETSTORM", "id": "167225" }, { "date": "2022-11-08T13:50:54", "db": "PACKETSTORM", "id": "169782" }, { "date": "2022-06-22T15:07:32", "db": "PACKETSTORM", "id": "167568" }, { "date": "2022-05-12T15:51:01", "db": "PACKETSTORM", "id": "167133" }, { "date": "2022-03-25T09:15:08.187000", "db": "NVD", "id": "CVE-2018-25032" } ] }, "sources_update_date": { "@context": { "@vocab": 
"https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2023-02-11T00:00:00", "db": "VULHUB", "id": "VHN-418557" }, { "date": "2023-11-07T02:56:26.393000", "db": "NVD", "id": "CVE-2018-25032" } ] }, "title": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/title#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Ubuntu Security Notice USN-5359-2", "sources": [ { "db": "PACKETSTORM", "id": "167486" } ], "trust": 0.1 }, "type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "spoof", "sources": [ { "db": "PACKETSTORM", "id": "167381" }, { "db": "PACKETSTORM", "id": "167225" } ], "trust": 0.2 } }
var-202101-0119
Vulnerability from variot
The iconv feature in the GNU C Library (aka glibc or libc6) through 2.32, when processing invalid multi-byte input sequences in the EUC-KR encoding, may have a buffer over-read. 8) - aarch64, ppc64le, s390x, x86_64
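The trigger class here is a truncated multi-byte sequence: a lead byte whose trailing byte never arrives, which the vulnerable glibc decoder could read past. As an illustration only — using Python's own EUC-KR codec, which is independent of glibc and not affected — a conforming decoder must reject such input rather than over-read:

```python
# Illustration of the input class behind the glibc iconv EUC-KR bug:
# 0xC1 is a plausible EUC-KR lead byte, but its trailing byte is missing.
# A correct strict-mode decoder must raise an error, not read past the buffer.
truncated = b"\xc1"

try:
    truncated.decode("euc_kr")      # strict mode: must reject incomplete input
    result = "accepted"
except UnicodeDecodeError:
    result = "rejected"

print(result)  # a safe decoder rejects the incomplete sequence
```

The same byte stream fed to an unpatched glibc `iconv` (e.g. `iconv -f EUC-KR -t UTF-8`) is the kind of input the fix hardens against.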
Additional Changes:
For detailed information on changes in this release, see the Red Hat Enterprise Linux 8.4 Release Notes linked from the References section. Bugs fixed (https://bugzilla.redhat.com/):
1428290 - CVE-2016-10228 glibc: iconv program can hang when invoked with the -c option 1684057 - CVE-2019-9169 glibc: regular-expression match via proceed_next_node in posix/regexec.c leads to heap-based buffer over-read 1704868 - CVE-2016-10228 glibc: iconv: Fix converter hangs and front end option parsing for //TRANSLIT and //IGNORE [rhel-8] 1855790 - glibc: Update Intel CET support from upstream 1856398 - glibc: Build with -moutline-atomics on aarch64 1868106 - glibc: Transaction ID collisions cause slow DNS lookups in getaddrinfo 1871385 - glibc: Improve auditing implementation (including DT_AUDIT, and DT_DEPAUDIT) 1871387 - glibc: Improve IBM POWER9 architecture performance 1871394 - glibc: Fix AVX2 off-by-one error in strncmp (swbz#25933) 1871395 - glibc: Improve IBM Z (s390x) Performance 1871396 - glibc: Improve use of static TLS surplus for optimizations. Bugs fixed (https://bugzilla.redhat.com/):
1897635 - CVE-2020-28362 golang: math/big: panic during recursive division of very large numbers 1918750 - CVE-2021-3114 golang: crypto/elliptic: incorrect operations on the P-224 curve
- JIRA issues fixed (https://issues.jboss.org/):
TRACING-1725 - Elasticsearch operator reports x509 errors communicating with ElasticSearch in OpenShift Service Mesh project
- Description:
Red Hat Advanced Cluster Management for Kubernetes 2.2.4 images
Red Hat Advanced Cluster Management for Kubernetes provides the capabilities to address common challenges that administrators and site reliability engineers face as they work across a range of public and private cloud environments. Clusters and applications are all visible and managed from a single console—with security policy built in. See the following Release Notes documentation, which will be updated shortly for this release, for additional details about this release:
https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.2/html/release_notes/
Security fixes:
- redisgraph-tls: redis: integer overflow when configurable limit for maximum supported bulk input size is too big on 32-bit platforms (CVE-2021-21309)
- console-header-container: nodejs-netmask: improper input validation of octal input data (CVE-2021-28092)
- console-container: nodejs-is-svg: ReDoS via malicious string (CVE-2021-28918)
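The redis fix above concerns a configured size limit that wraps when it exceeds what a 32-bit signed integer can hold, defeating the length check it was meant to enforce. A minimal sketch of that failure mode in pure Python (the actual redis code path is not reproduced here; `to_int32` is an illustrative helper mimicking C truncation):

```python
def to_int32(value):
    """Truncate a Python integer to a 32-bit signed int (two's complement),
    the way assignment to a C `int` would on a 32-bit platform."""
    value &= 0xFFFFFFFF
    return value - 0x1_0000_0000 if value >= 0x8000_0000 else value

# A "maximum bulk input size" configured just past 2**31 - 1 wraps negative
# on a 32-bit build, so a check like `if size > limit: reject()` passes
# every request instead of rejecting oversized ones.
configured_limit = 2**31            # 2147483648, one past INT32_MAX
effective_limit = to_int32(configured_limit)

print(effective_limit)              # -2147483648: the limit check is now useless
```

The fix is to validate the configured value against the platform's integer range before storing it.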
Bug fixes:
- RHACM 2.2.4 images (BZ# 1957254)
- Enabling observability for OpenShift Container Storage with RHACM 2.2 on OCP 4.7 (BZ# 1950832)
- ACM Operator should support using the default route TLS (BZ# 1955270)
- The scrolling bar for search filter does not work properly (BZ# 1956852)
- Limits on Length of MultiClusterObservability Resource Name (BZ# 1959426)
- The proxy setup in install-config.yaml does not work when IPI installing with RHACM (BZ# 1960181)
- Unable to make SSH connection to a Bitbucket server (BZ# 1966513)
- Observability Thanos store shard crashing - cannot unmarshal DNS message (BZ# 1967890)
- Solution:
Before applying this update, make sure all previously released errata relevant to your system have been applied. Bugs fixed (https://bugzilla.redhat.com/):
1932634 - CVE-2021-21309 redis: integer overflow when configurable limit for maximum supported bulk input size is too big on 32-bit platforms 1939103 - CVE-2021-28092 nodejs-is-svg: ReDoS via malicious string 1944827 - CVE-2021-28918 nodejs-netmask: improper input validation of octal input data 1950832 - Enabling observability for OpenShift Container Storage with RHACM 2.2 on OCP 4.7 1952150 - [DDF] It would be great to see all the options available for the bucket configuration and which attributes are mandatory 1954506 - [DDF] Table does not contain data about 20 clusters. Now it's difficult to estimate CPU usage with larger clusters 1954535 - Reinstall Submariner - No endpoints found on one cluster 1955270 - ACM Operator should support using the default route TLS 1956852 - The scrolling bar for search filter does not work properly 1957254 - RHACM 2.2.4 images 1959426 - Limits on Length of MultiClusterObservability Resource Name 1960181 - The proxy setup in install-config.yaml is not worked when IPI installing with RHACM. 1963128 - [DDF] Please rename this to "Amazon Elastic Kubernetes Service" 1966513 - Unable to make SSH connection to a Bitbucket server 1967357 - [DDF] When I clicked on this yaml, I get a HTTP 404 error. 1967890 - Observability Thanos store shard crashing - cannot unmarshal DNS message
- Relevant releases/architectures:
Red Hat Enterprise Linux Client (v. 7) - x86_64 Red Hat Enterprise Linux Client Optional (v. 7) - x86_64 Red Hat Enterprise Linux ComputeNode Optional (v. 7) - x86_64 Red Hat Enterprise Linux Server (v. 7) - ppc64, ppc64le, s390x, x86_64 Red Hat Enterprise Linux Server Optional (v. 7) - ppc64, ppc64le, s390x, x86_64 Red Hat Enterprise Linux Workstation (v. 7) - x86_64 Red Hat Enterprise Linux Workstation Optional (v. 7) - x86_64
- Description:
The glibc packages provide the standard C libraries (libc), POSIX thread libraries (libpthread), standard math libraries (libm), and the name service cache daemon (nscd) used by multiple programs on the system. Without these libraries, the Linux system cannot function correctly.
Bug Fix(es):
- glibc: 64bit_strstr_via_64bit_strstr_sse2_unaligned detection fails with large device and inode numbers (BZ#1883162)
- glibc: Performance regression in ebizzy benchmark (BZ#1889977)
- Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/11258
For the update to take effect, all services linked to the glibc library must be restarted, or the system rebooted. Package List:
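One way to find processes that still map the replaced library (and therefore need a restart) is to scan /proc for mappings marked deleted. A hedged, Linux-only sketch — on other systems the scan simply finds nothing; the name `find_stale_libc` is illustrative, not part of the advisory:

```python
import glob
import re

def find_stale_libc(pattern=r"libc[.-].*\(deleted\)"):
    """Return PIDs whose memory maps still reference a deleted libc.

    Linux-only sketch: reads /proc/<pid>/maps. Processes we cannot
    read (permission denied, exited mid-scan) are silently skipped.
    """
    stale = []
    wanted = re.compile(pattern)
    for maps_path in glob.glob("/proc/[0-9]*/maps"):
        try:
            with open(maps_path) as maps:
                if any(wanted.search(line) for line in maps):
                    stale.append(int(maps_path.split("/")[2]))
        except OSError:
            continue
    return sorted(stale)

print(find_stale_libc())  # expected to be empty after all services restart
```

An empty result after restarting the reported services (or rebooting) confirms nothing is still running against the old glibc.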
Red Hat Enterprise Linux Client (v. 7):
Source: glibc-2.17-322.el7_9.src.rpm
x86_64: glibc-2.17-322.el7_9.i686.rpm glibc-2.17-322.el7_9.x86_64.rpm glibc-common-2.17-322.el7_9.x86_64.rpm glibc-debuginfo-2.17-322.el7_9.i686.rpm glibc-debuginfo-2.17-322.el7_9.x86_64.rpm glibc-debuginfo-common-2.17-322.el7_9.i686.rpm glibc-debuginfo-common-2.17-322.el7_9.x86_64.rpm glibc-devel-2.17-322.el7_9.i686.rpm glibc-devel-2.17-322.el7_9.x86_64.rpm glibc-headers-2.17-322.el7_9.x86_64.rpm glibc-utils-2.17-322.el7_9.x86_64.rpm nscd-2.17-322.el7_9.x86_64.rpm
Red Hat Enterprise Linux Client Optional (v. 7):
Source: glibc-2.17-322.el7_9.src.rpm
x86_64: glibc-2.17-322.el7_9.i686.rpm glibc-2.17-322.el7_9.x86_64.rpm glibc-common-2.17-322.el7_9.x86_64.rpm glibc-debuginfo-2.17-322.el7_9.i686.rpm glibc-debuginfo-2.17-322.el7_9.x86_64.rpm glibc-debuginfo-common-2.17-322.el7_9.i686.rpm glibc-debuginfo-common-2.17-322.el7_9.x86_64.rpm glibc-devel-2.17-322.el7_9.i686.rpm glibc-devel-2.17-322.el7_9.x86_64.rpm glibc-headers-2.17-322.el7_9.x86_64.rpm glibc-utils-2.17-322.el7_9.x86_64.rpm nscd-2.17-322.el7_9.x86_64.rpm
Red Hat Enterprise Linux ComputeNode Optional (v. 7):
x86_64: glibc-debuginfo-2.17-322.el7_9.i686.rpm glibc-debuginfo-2.17-322.el7_9.x86_64.rpm glibc-debuginfo-common-2.17-322.el7_9.i686.rpm glibc-debuginfo-common-2.17-322.el7_9.x86_64.rpm glibc-static-2.17-322.el7_9.i686.rpm glibc-static-2.17-322.el7_9.x86_64.rpm
Red Hat Enterprise Linux Server (v. 7):
Source: glibc-2.17-322.el7_9.src.rpm
ppc64: glibc-2.17-322.el7_9.ppc.rpm glibc-2.17-322.el7_9.ppc64.rpm glibc-common-2.17-322.el7_9.ppc64.rpm glibc-debuginfo-2.17-322.el7_9.ppc.rpm glibc-debuginfo-2.17-322.el7_9.ppc64.rpm glibc-debuginfo-common-2.17-322.el7_9.ppc.rpm glibc-debuginfo-common-2.17-322.el7_9.ppc64.rpm glibc-devel-2.17-322.el7_9.ppc.rpm glibc-devel-2.17-322.el7_9.ppc64.rpm glibc-headers-2.17-322.el7_9.ppc64.rpm glibc-utils-2.17-322.el7_9.ppc64.rpm nscd-2.17-322.el7_9.ppc64.rpm
ppc64le: glibc-2.17-322.el7_9.ppc64le.rpm glibc-common-2.17-322.el7_9.ppc64le.rpm glibc-debuginfo-2.17-322.el7_9.ppc64le.rpm glibc-debuginfo-common-2.17-322.el7_9.ppc64le.rpm glibc-devel-2.17-322.el7_9.ppc64le.rpm glibc-headers-2.17-322.el7_9.ppc64le.rpm glibc-utils-2.17-322.el7_9.ppc64le.rpm nscd-2.17-322.el7_9.ppc64le.rpm
s390x: glibc-2.17-322.el7_9.s390.rpm glibc-2.17-322.el7_9.s390x.rpm glibc-common-2.17-322.el7_9.s390x.rpm glibc-debuginfo-2.17-322.el7_9.s390.rpm glibc-debuginfo-2.17-322.el7_9.s390x.rpm glibc-debuginfo-common-2.17-322.el7_9.s390.rpm glibc-debuginfo-common-2.17-322.el7_9.s390x.rpm glibc-devel-2.17-322.el7_9.s390.rpm glibc-devel-2.17-322.el7_9.s390x.rpm glibc-headers-2.17-322.el7_9.s390x.rpm glibc-utils-2.17-322.el7_9.s390x.rpm nscd-2.17-322.el7_9.s390x.rpm
x86_64: glibc-2.17-322.el7_9.i686.rpm glibc-2.17-322.el7_9.x86_64.rpm glibc-common-2.17-322.el7_9.x86_64.rpm glibc-debuginfo-2.17-322.el7_9.i686.rpm glibc-debuginfo-2.17-322.el7_9.x86_64.rpm glibc-debuginfo-common-2.17-322.el7_9.i686.rpm glibc-debuginfo-common-2.17-322.el7_9.x86_64.rpm glibc-devel-2.17-322.el7_9.i686.rpm glibc-devel-2.17-322.el7_9.x86_64.rpm glibc-headers-2.17-322.el7_9.x86_64.rpm glibc-utils-2.17-322.el7_9.x86_64.rpm nscd-2.17-322.el7_9.x86_64.rpm
Red Hat Enterprise Linux Server Optional (v. 7):
ppc64: glibc-debuginfo-2.17-322.el7_9.ppc.rpm glibc-debuginfo-2.17-322.el7_9.ppc64.rpm glibc-debuginfo-common-2.17-322.el7_9.ppc.rpm glibc-debuginfo-common-2.17-322.el7_9.ppc64.rpm glibc-static-2.17-322.el7_9.ppc.rpm glibc-static-2.17-322.el7_9.ppc64.rpm
ppc64le: glibc-debuginfo-2.17-322.el7_9.ppc64le.rpm glibc-debuginfo-common-2.17-322.el7_9.ppc64le.rpm glibc-static-2.17-322.el7_9.ppc64le.rpm
s390x: glibc-debuginfo-2.17-322.el7_9.s390.rpm glibc-debuginfo-2.17-322.el7_9.s390x.rpm glibc-debuginfo-common-2.17-322.el7_9.s390.rpm glibc-debuginfo-common-2.17-322.el7_9.s390x.rpm glibc-static-2.17-322.el7_9.s390.rpm glibc-static-2.17-322.el7_9.s390x.rpm
x86_64: glibc-debuginfo-2.17-322.el7_9.i686.rpm glibc-debuginfo-2.17-322.el7_9.x86_64.rpm glibc-debuginfo-common-2.17-322.el7_9.i686.rpm glibc-debuginfo-common-2.17-322.el7_9.x86_64.rpm glibc-static-2.17-322.el7_9.i686.rpm glibc-static-2.17-322.el7_9.x86_64.rpm
Red Hat Enterprise Linux Workstation (v. 7):
Source: glibc-2.17-322.el7_9.src.rpm
x86_64: glibc-2.17-322.el7_9.i686.rpm glibc-2.17-322.el7_9.x86_64.rpm glibc-common-2.17-322.el7_9.x86_64.rpm glibc-debuginfo-2.17-322.el7_9.i686.rpm glibc-debuginfo-2.17-322.el7_9.x86_64.rpm glibc-debuginfo-common-2.17-322.el7_9.i686.rpm glibc-debuginfo-common-2.17-322.el7_9.x86_64.rpm glibc-devel-2.17-322.el7_9.i686.rpm glibc-devel-2.17-322.el7_9.x86_64.rpm glibc-headers-2.17-322.el7_9.x86_64.rpm glibc-utils-2.17-322.el7_9.x86_64.rpm nscd-2.17-322.el7_9.x86_64.rpm
Red Hat Enterprise Linux Workstation Optional (v. 7):

Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
- -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
===================================================================== Red Hat Security Advisory
Synopsis: Moderate: OpenShift Container Platform 4.10.3 security update Advisory ID: RHSA-2022:0056-01 Product: Red Hat OpenShift Enterprise Advisory URL: https://access.redhat.com/errata/RHSA-2022:0056 Issue date: 2022-03-10 CVE Names: CVE-2014-3577 CVE-2016-10228 CVE-2017-14502 CVE-2018-20843 CVE-2018-1000858 CVE-2019-8625 CVE-2019-8710 CVE-2019-8720 CVE-2019-8743 CVE-2019-8764 CVE-2019-8766 CVE-2019-8769 CVE-2019-8771 CVE-2019-8782 CVE-2019-8783 CVE-2019-8808 CVE-2019-8811 CVE-2019-8812 CVE-2019-8813 CVE-2019-8814 CVE-2019-8815 CVE-2019-8816 CVE-2019-8819 CVE-2019-8820 CVE-2019-8823 CVE-2019-8835 CVE-2019-8844 CVE-2019-8846 CVE-2019-9169 CVE-2019-13050 CVE-2019-13627 CVE-2019-14889 CVE-2019-15903 CVE-2019-19906 CVE-2019-20454 CVE-2019-20807 CVE-2019-25013 CVE-2020-1730 CVE-2020-3862 CVE-2020-3864 CVE-2020-3865 CVE-2020-3867 CVE-2020-3868 CVE-2020-3885 CVE-2020-3894 CVE-2020-3895 CVE-2020-3897 CVE-2020-3899 CVE-2020-3900 CVE-2020-3901 CVE-2020-3902 CVE-2020-8927 CVE-2020-9802 CVE-2020-9803 CVE-2020-9805 CVE-2020-9806 CVE-2020-9807 CVE-2020-9843 CVE-2020-9850 CVE-2020-9862 CVE-2020-9893 CVE-2020-9894 CVE-2020-9895 CVE-2020-9915 CVE-2020-9925 CVE-2020-9952 CVE-2020-10018 CVE-2020-11793 CVE-2020-13434 CVE-2020-14391 CVE-2020-15358 CVE-2020-15503 CVE-2020-25660 CVE-2020-25677 CVE-2020-27618 CVE-2020-27781 CVE-2020-29361 CVE-2020-29362 CVE-2020-29363 CVE-2021-3121 CVE-2021-3326 CVE-2021-3449 CVE-2021-3450 CVE-2021-3516 CVE-2021-3517 CVE-2021-3518 CVE-2021-3520 CVE-2021-3521 CVE-2021-3537 CVE-2021-3541 CVE-2021-3733 CVE-2021-3749 CVE-2021-20305 CVE-2021-21684 CVE-2021-22946 CVE-2021-22947 CVE-2021-25215 CVE-2021-27218 CVE-2021-30666 CVE-2021-30761 CVE-2021-30762 CVE-2021-33928 CVE-2021-33929 CVE-2021-33930 CVE-2021-33938 CVE-2021-36222 CVE-2021-37750 CVE-2021-39226 CVE-2021-41190 CVE-2021-43813 CVE-2021-44716 CVE-2021-44717 CVE-2022-0532 CVE-2022-21673 CVE-2022-24407 =====================================================================
- Summary:
Red Hat OpenShift Container Platform release 4.10.3 is now available with updates to packages and images that fix several bugs and add enhancements.
Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Description:
Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.
This advisory contains the container images for Red Hat OpenShift Container Platform 4.10.3. See the following advisory for the RPM packages for this release:
https://access.redhat.com/errata/RHSA-2022:0055
Space precludes documenting all of the container images in this advisory. See the following Release Notes documentation, which will be updated shortly for this release, for details about these changes:
https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html
Security Fix(es):
- gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation (CVE-2021-3121)
- grafana: Snapshot authentication bypass (CVE-2021-39226)
- golang: net/http: limit growth of header canonicalization cache (CVE-2021-44716)
- nodejs-axios: Regular expression denial of service in trim function (CVE-2021-3749)
- golang: syscall: don't close fd 0 on ForkExec error (CVE-2021-44717)
- grafana: Forward OAuth Identity Token can allow users to access some data sources (CVE-2022-21673)
- grafana: directory traversal vulnerability (CVE-2021-43813)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
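The net/http cache-growth issue above (CVE-2021-44716) belongs to a broad class: an unbounded cache keyed by attacker-controlled input — here, header names — grows without limit under hostile traffic. A minimal sketch of the fix shape in Python (this is not the Go stdlib code; the header-canonicalization function is a stand-in):

```python
from functools import lru_cache

# Unbounded memoization keyed by client input is a memory DoS:
# every unique header name an attacker sends would stay cached forever.
# Capping the cache bounds worst-case memory regardless of traffic.
@lru_cache(maxsize=1024)
def canonical_header(name: str) -> str:
    """Title-case a header name, e.g. 'content-type' -> 'Content-Type'."""
    return "-".join(part.capitalize() for part in name.split("-"))

for i in range(10_000):                 # attacker sends unique header names
    canonical_header(f"x-attack-{i}")

print(canonical_header.cache_info().currsize)  # stays at the 1024 cap
```

The Go fix took the same approach: bounding how much the canonicalization cache may grow per connection.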
You may download the oc tool and use it to inspect release image metadata as follows:
(For x86_64 architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.3-x86_64
The image digest is sha256:7ffe4cd612be27e355a640e5eec5cd8f923c1400d969fd590f806cffdaabcc56
(For s390x architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.3-s390x
The image digest is sha256:4cf21a9399da1ce8427246f251ae5dedacfc8c746d2345f9cfe039ed9eda3e69
(For ppc64le architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.3-ppc64le
The image digest is sha256:4ee571da1edf59dfee4473aa4604aba63c224bf8e6bcf57d048305babbbde93c
All OpenShift Container Platform 4.10 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift Console or the CLI oc command. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html
- Solution:
For OpenShift Container Platform 4.10 see the following documentation, which will be updated shortly for this release, for moderate instructions on how to upgrade your cluster and fully apply this asynchronous errata update:
https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html
Details on how to access this content are available at https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html
- Bugs fixed (https://bugzilla.redhat.com/):
1808240 - Always return metrics value for pods under the user's namespace
1815189 - feature flagged UI does not always become available after operator installation
1825034 - e2e: Mock CSI tests fail on IBM ROKS clusters
1826225 - edge terminated h2 (gRPC) connections need a haproxy template change to work correctly
1860774 - csr for vSphere egress nodes were not approved automatically during cert renewal
1878106 - token inactivity timeout is not shortened after oauthclient/oauth config values are lowered
1878925 - 'oc adm upgrade --to ...' rejects versions which occur only in history, while the cluster-version operator supports history fallback
1880738 - origin e2e test deletes original worker
1882983 - oVirt csi driver should refuse to provision RWX and ROX PV
1886450 - Keepalived router id check not documented for RHV/VMware IPI
1889488 - The metrics endpoint for the Scheduler is not protected by RBAC
1894431 - Router pods fail to boot if the SSL certificate applied is missing an empty line at the bottom
1896474 - Path based routing is broken for some combinations
1897431 - CIDR support for additional network attachment with the bridge CNI plug-in
1903408 - NodePort externalTrafficPolicy does not work for ovn-kubernetes
1907433 - Excessive logging in image operator
1909906 - The router fails with PANIC error when stats port already in use
1911173 - [MSTR-998] Many charts' legend names show {{}} instead of words
1914053 - pods assigned with Multus whereabouts IP get stuck in ContainerCreating state after node rebooting.
1916169 - a reboot while MCO is applying changes leaves the node in undesirable state and MCP looks fine (UPDATED=true)
1917893 - [ovirt] install fails: due to terraform error "Cannot attach Virtual Disk: Disk is locked" on vm resource
1921627 - GCP UPI installation failed due to exceeding gcp limitation of instance group name
1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation
1926522 - oc adm catalog does not clean temporary files
1927478 - Default CatalogSources deployed by marketplace do not have toleration for tainted nodes.
1928141 - kube-storage-version-migrator constantly reporting type "Upgradeable" status Unknown
1928285 - [LSO][OCS][arbiter] OCP Console shows no results while in fact underlying setup of LSO localvolumeset and it's storageclass is not yet finished, confusing users
1931594 - [sig-cli] oc --request-timeout works as expected fails frequently on s390x
1933847 - Prometheus goes unavailable (both instances down) during 4.8 upgrade
1937085 - RHV UPI inventory playbook missing guarantee_memory
1937196 - [aws ebs csi driver] events for block volume expansion may cause confusion
1938236 - vsphere-problem-detector does not support overriding log levels via storage CR
1939401 - missed labels for CMO/openshift-state-metric/telemeter-client/thanos-querier pods
1939435 - Setting an IPv6 address in noProxy field causes error in openshift installer
1939552 - [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
1942913 - ThanosSidecarUnhealthy isn't resilient to WAL replays.
1943363 - [ovn] CNO should gracefully terminate ovn-northd
1945274 - ostree-finalize-staged.service failed while upgrading a rhcos node to 4.6.17
1948080 - authentication should not set Available=False APIServices_Error with 503s
1949262 - Prometheus Statefulsets should have 2 replicas and hard affinity set
1949672 - [GCP] Update 4.8 UPI template to match ignition version: 3.2.0
1950827 - [LSO] localvolumediscoveryresult name is not friendly to customer
1952576 - csv_succeeded metric not present in olm-operator for all successful CSVs
1953264 - "remote error: tls: bad certificate" logs in prometheus-operator container
1955300 - Machine config operator reports unavailable for 23m during upgrade
1955489 - Alertmanager Statefulsets should have 2 replicas and hard affinity set
1955490 - Thanos ruler Statefulsets should have 2 replicas and hard affinity set
1955544 - [IPI][OSP] densed master-only installation with 0 workers fails due to missing worker security group on masters
1956496 - Needs SR-IOV Docs Upstream
1956739 - Permission for authorized_keys for core user changes from core user to root when changed the pull secret
1956776 - [vSphere] Installer should do pre-check to ensure user-provided network name is valid
1956964 - upload a boot-source to OpenShift virtualization using the console
1957547 - [RFE]VM name is not auto filled in dev console
1958349 - ovn-controller doesn't release the memory after cluster-density run
1959352 - [scale] failed to get pod annotation: timed out waiting for annotations
1960378 - icsp allows mirroring of registry root - install-config imageContentSources does not
1960674 - Broken test: [sig-imageregistry][Serial][Suite:openshift/registry/serial] Image signature workflow can push a signed image to openshift registry and verify it [Suite:openshift/conformance/serial]
1961317 - storage ClusterOperator does not declare ClusterRoleBindings in relatedObjects
1961391 - String updates
1961509 - DHCP daemon pod should have CPU and memory requests set but not limits
1962066 - Edit machine/machineset specs not working
1962206 - openshift-multus/dhcp-daemon set should meet platform requirements for update strategy that have maxUnavailable update of 10 or 33 percent
1963053 - oc whoami --show-console should show the web console URL, not the server api URL
1964112 - route SimpleAllocationPlugin: host name validation errors: spec.host: Invalid value: ... must be no more than 63 characters
1964327 - Support containers with name:tag@digest
1964789 - Send keys and disconnect does not work for VNC console
1965368 - ClusterQuotaAdmission received non-meta object - message constantly reported in OpenShift Container Platform 4.7
1966445 - Unmasking a service doesn't work if it masked using MCO
1966477 - Use GA version in KAS/OAS/OauthAS to avoid: "audit.k8s.io/v1beta1" is deprecated and will be removed in a future release, use "audit.k8s.io/v1" instead
1966521 - kube-proxy's userspace implementation consumes excessive CPU
1968364 - [Azure] when using ssh type ed25519 bootstrap fails to come up
1970021 - nmstate does not persist its configuration due to overlay systemd-connections-merged mount
1970218 - MCO writes incorrect file contents if compression field is specified
1970331 - [sig-auth][Feature:SCC][Early] should not have pod creation failures during install [Suite:openshift/conformance/parallel]
1970805 - Cannot create build when docker image url contains dir structure
1972033 - [azure] PV region node affinity is failure-domain.beta.kubernetes.io instead of topology.kubernetes.io
1972827 - image registry does not remain available during upgrade
1972962 - Should set the minimum value for the --max-icsp-size flag of oc adm catalog mirror
1973447 - ovn-dbchecker peak memory spikes to ~500MiB during cluster-density run
1975826 - ovn-kubernetes host directed traffic cannot be offloaded as CT zone 64000 is not established
1976301 - [ci] e2e-azure-upi is permafailing
1976399 - During the upgrade from OpenShift 4.5 to OpenShift 4.6 the election timers for the OVN north and south databases did not change.
1976674 - CCO didn't set Upgradeable to False when cco mode is configured to Manual on azure platform
1976894 - Unidling a StatefulSet does not work as expected
1977319 - [Hive] Remove stale cruft installed by CVO in earlier releases
1977414 - Build Config timed out waiting for condition 400: Bad Request
1977929 - [RFE] Display Network Attachment Definitions from openshift-multus namespace during OCS deployment via UI using Multus
1978528 - systemd-coredump started and failed intermittently for unknown reasons
1978581 - machine-config-operator: remove runlevel from mco namespace
1979562 - Cluster operators: don't show messages when neither progressing, degraded or unavailable
1979962 - AWS SDN Network Stress tests have not passed in 4.9 release-openshift-origin-installer-e2e-aws-sdn-network-stress-4.9
1979966 - OCP builds always fail when run on RHEL7 nodes
1981396 - Deleting pool inside pool page the pool stays in Ready phase in the heading
1981549 - Machine-config daemon does not recover from broken Proxy configuration
1981867 - [sig-cli] oc explain should contain proper fields description for special types [Suite:openshift/conformance/parallel]
1981941 - Terraform upgrade required in openshift-installer to resolve multiple issues
1982063 - 'Control Plane' is not translated in Simplified Chinese language in Home->Overview page
1982498 - Default registry credential path should be adjusted to use containers/auth.json for oc commands
1982662 - Workloads - DaemonSets - Add storage: i18n misses
1982726 - kube-apiserver audit logs show a lot of 404 errors for DELETE "/secrets/encryption-config" on single node clusters
1983758 - upgrades are failing on disruptive tests
1983964 - Need Device plugin configuration for the NIC "needVhostNet" & "isRdma"
1984592 - global pull secret not working in OCP4.7.4+ for additional private registries
1985073 - new-in-4.8 ExtremelyHighIndividualControlPlaneCPU fires on some GCP update jobs
1985486 - Cluster Proxy not used during installation on OSP with Kuryr
1985724 - VM Details Page missing translations
1985838 - [OVN] CNO exportNetworkFlows does not clear collectors when deleted
1985933 - Downstream image registry recommendation
1985965 - oVirt CSI driver does not report volume stats
1986216 - [scale] SNO: Slow Pod recovery due to "timed out waiting for OVS port binding"
1986237 - "MachineNotYetDeleted" in Pending state , alert not fired
1986239 - crictl create fails with "PID namespace requested, but sandbox infra container invalid"
1986302 - console continues to fetch prometheus alert and silences for normal user
1986314 - Current MTV installation for KubeVirt import flow creates unusable Forklift UI
1986338 - error creating list of resources in Import YAML
1986502 - yaml multi file dnd duplicates previous dragged files
1986819 - fix string typos for hot-plug disks
1987044 - [OCPV48] Shutoff VM is being shown as "Starting" in WebUI when using spec.runStrategy Manual/RerunOnFailure
1987136 - Declare operatorframework.io/arch. labels for all operators
1987257 - Go-http-client user-agent being used for oc adm mirror requests
1987263 - fsSpaceFillingUpWarningThreshold not aligned to Kubernetes Garbage Collection Threshold
1987445 - MetalLB integration: All gateway routers in the cluster answer ARP requests for LoadBalancer services IP
1988406 - SSH key dropped when selecting "Customize virtual machine" in UI
1988440 - Network operator changes ovnkube-config too early causing ovnkube-master pods to crashloop during cluster upgrade
1988483 - Azure drop ICMP need to frag FRAG when using OVN: openshift-apiserver becomes False after env runs some time due to communication between one master to pods on another master fails with "Unable to connect to the server"
1988879 - Virtual media based deployment fails on Dell servers due to pending Lifecycle Controller jobs
1989438 - expected replicas is wrong
1989502 - Developer Catalog is disappearing after short time
1989843 - 'More' and 'Show Less' functions are not translated on several page
1990014 - oc debug does not work for Windows pods
1990190 - e2e testing failed with basic manifest: reason/ExternalProvisioning waiting for a volume to be created
1990193 - 'more' and 'Show Less' is not being translated on Home -> Search page
1990255 - Partial or all of the Nodes/StorageClasses don't appear back on UI after text is removed from search bar
1990489 - etcdHighNumberOfFailedGRPCRequests fires only on metal env in CI
1990506 - Missing udev rules in initramfs for /dev/disk/by-id/scsi- symlinks
1990556 - get-resources.sh doesn't honor the no_proxy settings even with no_proxy var
1990625 - Ironic agent registers with SLAAC address with privacy-stable
1990635 - CVO does not recognize the channel change if desired version and channel changed at the same time
1991067 - github.com can not be resolved inside pods where cluster is running on openstack.
1991573 - Enable typescript strictNullCheck on network-policies files
1991641 - Baremetal Cluster Operator still Available After Delete Provisioning
1991770 - The logLevel and operatorLogLevel values do not work with Cloud Credential Operator
1991819 - Misspelled word "ocurred" in oc inspect cmd
1991942 - Alignment and spacing fixes
1992414 - Two rootdisks show on storage step if 'This is a CD-ROM boot source' is checked
1992453 - The configMap failed to save on VM environment tab
1992466 - The button 'Save' and 'Reload' are not translated on vm environment tab
1992475 - The button 'Open console in New Window' and 'Disconnect' are not translated on vm console tab
1992509 - Could not customize boot source due to source PVC not found
1992541 - all the alert rules' annotations "summary" and "description" should comply with the OpenShift alerting guidelines
1992580 - storageProfile should stay with the same value by check/uncheck the apply button
1992592 - list-type missing in oauth.config.openshift.io for identityProviders breaking Server Side Apply
1992777 - [IBMCLOUD] Default "ibm_iam_authorization_policy" is not working as expected in all scenarios
1993364 - cluster destruction fails to remove router in BYON with Kuryr as primary network (even after BZ 1940159 got fixed)
1993376 - periodic-ci-openshift-release-master-ci-4.6-upgrade-from-stable-4.5-e2e-azure-upgrade is permfailing
1994094 - Some hardcodes are detected at the code level in OpenShift console components
1994142 - Missing required cloud config fields for IBM Cloud
1994733 - MetalLB: IP address is not assigned to service if there is duplicate IP address in two address pools
1995021 - resolv.conf and corefile sync slows down/stops after keepalived container restart
1995335 - [SCALE] ovnkube CNI: remove ovs flows check
1995493 - Add Secret to workload button and Actions button are not aligned on secret details page
1995531 - Create RDO-based Ironic image to be promoted to OKD
1995545 - Project drop-down amalgamates inside main screen while creating storage system for odf-operator
1995887 - [OVN]After reboot egress node, lr-policy-list was not correct, some duplicate records or missed internal IPs
1995924 - CMO should report Upgradeable: false when HA workload is incorrectly spread
1996023 - kubernetes.io/hostname values are larger than filter when create localvolumeset from webconsole
1996108 - Allow backwards compatibility of shared gateway mode to inject host-based routes into OVN
1996624 - 100% of the cco-metrics/cco-metrics targets in openshift-cloud-credential-operator namespace are down
1996630 - Fail to delete the first Authorized SSH Key input box on Advanced page
1996647 - Provide more useful degraded message in auth operator on DNS errors
1996736 - Large number of 501 lr-policies in INCI2 env
1996886 - timedout waiting for flows during pod creation and ovn-controller pegged on worker nodes
1996916 - Special Resource Operator(SRO) - Fail to deploy simple-kmod on GCP
1996928 - Enable default operator indexes on ARM
1997028 - prometheus-operator update removes env var support for thanos-sidecar
1997059 - Failed to create cluster in AWS us-east-1 region due to a local zone is used
1997226 - Ingresscontroller reconcilations failing but not shown in operator logs or status of ingresscontroller.
1997245 - "Subscription already exists in openshift-storage namespace" error message is seen while installing odf-operator via UI
1997269 - Have to refresh console to install kube-descheduler
1997478 - Storage operator is not available after reboot cluster instances
1997509 - flake: [sig-cli] oc builds new-build [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
1997967 - storageClass is not reserved from default wizard to customize wizard
1998035 - openstack IPI CI: custom var-lib-etcd.mount (ramdisk) unit is racing due to incomplete After/Before order
1998038 - [e2e][automation] add tests for UI for VM disk hot-plug
1998087 - Fix CephHealthCheck wrapping contents and add data-tests for HealthItem and SecondaryStatus
1998174 - Create storageclass gp3-csi after install ocp cluster on aws
1998183 - "r: Bad Gateway" info is improper
1998235 - Firefox warning: Cookie “csrf-token” will be soon rejected
1998377 - Filesystem table head is not full displayed in disk tab
1998378 - Virtual Machine is 'Not available' in Home -> Overview -> Cluster inventory
1998519 - Add fstype when create localvolumeset instance on web console
1998951 - Keepalived conf ingress peer on in Dual stack cluster contains both IPv6 and IPv4 addresses
1999076 - [UI] Page Not Found error when clicking on Storage link provided in Overview page
1999079 - creating pods before sriovnetworknodepolicy sync up succeed will cause node unschedulable
1999091 - Console update toast notification can appear multiple times
1999133 - removing and recreating static pod manifest leaves pod in error state
1999246 - .indexignore is not ingore when oc command load dc configuration
1999250 - ArgoCD in GitOps operator can't manage namespaces
1999255 - ovnkube-node always crashes out the first time it starts
1999261 - ovnkube-node log spam (and security token leak?)
1999309 - While installing odf-operator via UI, web console update pop-up navigates to OperatorHub -> Operator Installation page
1999314 - console-operator is slow to mark Degraded as False once console starts working
1999425 - kube-apiserver with "[SHOULD NOT HAPPEN] failed to update managedFields" err="failed to convert new object (machine.openshift.io/v1beta1, Kind=MachineHealthCheck)
1999556 - "master" pool should be updated before the CVO reports available at the new version occurred
1999578 - AWS EFS CSI tests are constantly failing
1999603 - Memory Manager allows Guaranteed QoS Pod with hugepages requested is exactly equal to the left over Hugepages
1999619 - cloudinit is malformatted if a user sets a password during VM creation flow
1999621 - Empty ssh_authorized_keys entry is added to VM's cloudinit if created from a customize flow
1999649 - MetalLB: Only one type of IP address can be assigned to service on dual stack cluster from a address pool that have both IPv4 and IPv6 addresses defined
1999668 - openshift-install destroy cluster panic's when given invalid credentials to cloud provider (Azure Stack Hub)
1999734 - IBM Cloud CIS Instance CRN missing in infrastructure manifest/resource
1999771 - revert "force cert rotation every couple days for development" in 4.10
1999784 - CVE-2021-3749 nodejs-axios: Regular expression denial of service in trim function
1999796 - Openshift Console Helm tab is not showing helm releases in a namespace when there is high number of deployments in the same namespace.
1999836 - Admin web-console inconsistent status summary of sparse ClusterOperator conditions
1999903 - Click "This is a CD-ROM boot source" ticking "Use template size PVC" on pvc upload form
1999983 - No way to clear upload error from template boot source
2000081 - [IPI baremetal] The metal3 pod failed to restart when switching from Disabled to Managed provisioning without specifying provisioningInterface parameter
2000096 - Git URL is not re-validated on edit build-config form reload
2000216 - Successfully imported ImageStreams are not resolved in DeploymentConfig
2000236 - Confusing usage message from dynkeepalived CLI
2000268 - Mark cluster unupgradable if vcenter, esxi versions or HW versions are unsupported
2000430 - bump cluster-api-provider-ovirt version in installer
2000450 - 4.10: Enable static PV multi-az test
2000490 - All critical alerts shipped by CMO should have links to a runbook
2000521 - Kube-apiserver CO degraded due to failed conditional check (ConfigObservationDegraded)
2000573 - Incorrect StorageCluster CR created and ODF cluster getting installed with 2 Zone OCP cluster
2000628 - ibm-flashsystem-storage-storagesystem got created without any warning even when the attempt was cancelled
2000651 - ImageStreamTag alias results in wrong tag and invalid link in Web Console
2000754 - IPerf2 tests should be lower
2000846 - Structure logs in the entire codebase of Local Storage Operator
2000872 - [tracker] container is not able to list on some directories within the nfs after upgrade to 4.7.24
2000877 - OCP ignores STOPSIGNAL in Dockerfile and sends SIGTERM
2000938 - CVO does not respect changes to a Deployment strategy
2000963 - 'Inline-volume (default fs)] volumes should store data' tests are failing on OKD with updated selinux-policy
2001008 - [MachineSets] CloneMode defaults to linkedClone, but I don't have snapshot and should be fullClone
2001240 - Remove response headers for downloads of binaries from OpenShift WebConsole
2001295 - Remove openshift:kubevirt-machine-controllers decleration from machine-api
2001317 - OCP Platform Quota Check - Inaccurate MissingQuota error
2001337 - Details Card in ODF Dashboard mentions OCS
2001339 - fix text content hotplug
2001413 - [e2e][automation] add/delete nic and disk to template
2001441 - Test: oc adm must-gather runs successfully for audit logs - fail due to startup log
2001442 - Empty termination.log file for the kube-apiserver has too permissive mode
2001479 - IBM Cloud DNS unable to create/update records
2001566 - Enable alerts for prometheus operator in UWM
2001575 - Clicking on the perspective switcher shows a white page with loader
2001577 - Quick search placeholder is not displayed properly when the search string is removed
2001578 - [e2e][automation] add tests for vm dashboard tab
2001605 - PVs remain in Released state for a long time after the claim is deleted
2001617 - BucketClass Creation is restricted on 1st page but enabled using side navigation options
2001620 - Cluster becomes degraded if it can't talk to Manila
2001760 - While creating 'Backing Store', 'Bucket Class', 'Namespace Store' user is navigated to 'Installed Operators' page after clicking on ODF
2001761 - Unable to apply cluster operator storage for SNO on GCP platform.
2001765 - Some error message in the log of diskmaker-manager caused confusion
2001784 - show loading page before final results instead of showing a transient message No log files exist
2001804 - Reload feature on Environment section in Build Config form does not work properly
2001810 - cluster admin unable to view BuildConfigs in all namespaces
2001817 - Failed to load RoleBindings list that will lead to ‘Role name’ is not able to be selected on Create RoleBinding page as well
2001823 - OCM controller must update operator status
2001825 - [SNO]ingress/authentication clusteroperator degraded when enable ccm from start
2001835 - Could not select image tag version when create app from dev console
2001855 - Add capacity is disabled for ocs-storagecluster
2001856 - Repeating event: MissingVersion no image found for operand pod
2001959 - Side nav list borders don't extend to edges of container
2002007 - Layout issue on "Something went wrong" page
2002010 - ovn-kube may never attempt to retry a pod creation
2002012 - Cannot change volume mode when cloning a VM from a template
2002027 - Two instances of Dotnet helm chart show as one in topology
2002075 - opm render does not automatically pulling in the image(s) used in the deployments
2002121 - [OVN] upgrades failed for IPI OSP16 OVN IPSec cluster
2002125 - Network policy details page heading should be updated to Network Policy details
2002133 - [e2e][automation] add support/virtualization and improve deleteResource
2002134 - [e2e][automation] add test to verify vm details tab
2002215 - Multipath day1 not working on s390x
2002238 - Image stream tag is not persisted when switching from yaml to form editor
2002262 - [vSphere] Incorrect user agent in vCenter sessions list
2002266 - SinkBinding create form doesn't allow to use subject name, instead of label selector
2002276 - OLM fails to upgrade operators immediately
2002300 - Altering the Schedule Profile configurations doesn't affect the placement of the pods
2002354 - Missing DU configuration "Done" status reporting during ZTP flow
2002362 - Dynamic Plugin - ConsoleRemotePlugin for webpack doesn't use commonjs
2002368 - samples should not go degraded when image allowedRegistries blocks imagestream creation
2002372 - Pod creation failed due to mismatched pod IP address in CNI and OVN
2002397 - Resources search is inconsistent
2002434 - CRI-O leaks some children PIDs
2002443 - Getting undefined error on create local volume set page
2002461 - DNS operator performs spurious updates in response to API's defaulting of service's internalTrafficPolicy
2002504 - When the openshift-cluster-storage-operator is degraded because of "VSphereProblemDetectorController_SyncError", the insights operator is not sending the logs from all pods.
2002559 - User preference for topology list view does not follow when a new namespace is created
2002567 - Upstream SR-IOV worker doc has broken links
2002588 - Change text to be sentence case to align with PF
2002657 - ovn-kube egress IP monitoring is using a random port over the node network
2002713 - CNO: OVN logs should have millisecond resolution
2002748 - [ICNI2] 'ErrorAddingLogicalPort' failed to handle external GW check: timeout waiting for namespace event
2002759 - Custom profile should not allow not including at least one required HTTP2 ciphersuite
2002763 - Two storage systems getting created with external mode RHCS
2002808 - KCM does not use web identity credentials
2002834 - Cluster-version operator does not remove unrecognized volume mounts
2002896 - Incorrect result return when user filter data by name on search page
2002950 - Why spec.containers.command is not created with "oc create deploymentconfig --image= -- "
2003096 - [e2e][automation] check bootsource URL is displaying on review step
2003113 - OpenShift Baremetal IPI installer uses first three defined nodes under hosts in install-config for master nodes instead of filtering the hosts with the master role
2003120 - CI: Uncaught error with ResizeObserver on operand details page
2003145 - Duplicate operand tab titles causes "two children with the same key" warning
2003164 - OLM, fatal error: concurrent map writes
2003178 - [FLAKE][knative] The UI doesn't show updated traffic distribution after accepting the form
2003193 - Kubelet/crio leaks netns and veth ports in the host
2003195 - OVN CNI should ensure host veths are removed
2003204 - Jenkins all new container images (openshift4/ose-jenkins) not supporting '-e JENKINS_PASSWORD=password' ENV which was working for old container images
2003206 - Namespace stuck terminating: Failed to delete all resource types, 1 remaining: unexpected items still remain in namespace
2003239 - "[sig-builds][Feature:Builds][Slow] can use private repositories as build input" tests fail outside of CI
2003244 - Revert libovsdb client code
2003251 - Patternfly components with list element has list item bullet when they should not.
2003252 - "[sig-builds][Feature:Builds][Slow] starting a build using CLI start-build test context override environment BUILD_LOGLEVEL in buildconfig" tests do not work as expected outside of CI
2003269 - Rejected pods should be filtered from admission regression
2003357 - QE- Removing the epic tags for gherkin tags related to 4.9 Release
2003426 - [e2e][automation] add test for vm details bootorder
2003496 - [e2e][automation] add test for vm resources requirment settings
2003641 - All metal ipi jobs are failing in 4.10
2003651 - ODF4.9+LSO4.8 installation via UI, StorageCluster move to error state
2003655 - [IPI ON-PREM] Keepalived chk_default_ingress track script failed even though default router pod runs on node
2003683 - Samples operator is panicking in CI
2003711 - [UI] Empty file ceph-external-cluster-details-exporter.py downloaded from external cluster "Connection Details" page
2003715 - Error on creating local volume set after selection of the volume mode
2003743 - Remove workaround keeping /boot RW for kdump support
2003775 - etcd pod on CrashLoopBackOff after master replacement procedure
2003788 - CSR reconciler report error constantly when BYOH CSR approved by other Approver
2003792 - Monitoring metrics query graph flyover panel is useless
2003808 - Add Sprint 207 translations
2003845 - Project admin cannot access image vulnerabilities view
2003859 - sdn emits events with garbage messages
2003896 - (release-4.10) ApiRequestCounts conditional gatherer
2004009 - 4.10: Fix multi-az zone scheduling e2e for 5 control plane replicas
2004051 - CMO can report as being Degraded while node-exporter is deployed on all nodes
2004059 - [e2e][automation] fix current tests for downstream
2004060 - Trying to use basic spring boot sample causes crash on Firefox
2004101 - [UI] When creating storageSystem deployment type dropdown under advanced setting doesn't close after selection
2004127 - [flake] openshift-controller-manager event reason/SuccessfulDelete occurs too frequently
2004203 - build config's created prior to 4.8 with image change triggers can result in trigger storm in OCM/openshift-apiserver
2004313 - [RHOCP 4.9.0-rc.0] Failing to deploy Azure cluster from the macOS installer - ignition_bootstrap.ign: no such file or directory
2004449 - Boot option recovery menu prevents image boot
2004451 - The backup filename displayed in the RecentBackup message is incorrect
2004459 - QE - Modified the AddFlow gherkin scripts and automation scripts
2004508 - TuneD issues with the recent ConfigParser changes.
2004510 - openshift-gitops operator hooks gets unauthorized (401) errors during jobs executions
2004542 - [osp][octavia lb] cannot create LoadBalancer type svcs
2004578 - Monitoring and node labels missing for an external storage platform
2004585 - prometheus-k8s-0 cpu usage keeps increasing for the first 3 days
2004596 - [4.10] Bootimage bump tracker
2004597 - Duplicate ramdisk log containers running
2004600 - Duplicate ramdisk log containers running
2004609 - output of "crictl inspectp" is not complete
2004625 - BMC credentials could be logged if they change
2004632 - When LE takes a large amount of time, multiple whereabouts are seen
2004721 - ptp/worker custom threshold doesn't change ptp events threshold
2004736 - [knative] Create button on new Broker form is inactive despite form being filled
2004796 - [e2e][automation] add test for vm scheduling policy
2004814 - (release-4.10) OCM controller - change type of the etc-pki-entitlement secret to opaque
2004870 - [External Mode] Insufficient spacing along y-axis in RGW Latency Performance Card
2004901 - [e2e][automation] improve kubevirt devconsole tests
2004962 - Console frontend job consuming too much CPU in CI
2005014 - state of ODF StorageSystem is misreported during installation or uninstallation
2005052 - Adding a MachineSet selector matchLabel causes orphaned Machines
2005179 - pods status filter is not taking effect
2005182 - sync list of deprecated apis about to be removed
2005282 - Storage cluster name is given as title in StorageSystem details page
2005355 - setuptools 58 makes Kuryr CI fail
2005407 - ClusterNotUpgradeable Alert should be set to Severity Info
2005415 - PTP operator with sidecar api configured throws bind: address already in use
2005507 - SNO spoke cluster failing to reach coreos.live.rootfs_url is missing url in console
2005554 - The switch status of the button "Show default project" is not revealed correctly in code
2005581 - 4.8.12 to 4.9 upgrade hung due to cluster-version-operator pod CrashLoopBackOff: error creating clients: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
2005761 - QE - Implementing crw-basic feature file
2005783 - Fix accessibility issues in the "Internal" and "Internal - Attached Mode" Installation Flow
2005811 - vSphere Problem Detector operator - ServerFaultCode: InvalidProperty
2005854 - SSH NodePort service is created for each VM
2005901 - KS, KCM and KA going Degraded during master nodes upgrade
2005902 - Current UI flow for MCG only deployment is confusing and doesn't reciprocate any message to the end-user
2005926 - PTP operator NodeOutOfPTPSync rule is using max offset from the master instead of openshift_ptp_clock_state metrics
2005971 - Change telemeter to report the Application Services product usage metrics
2005997 - SELinux domain container_logreader_t does not have a policy to follow sym links for log files
2006025 - Description to use an existing StorageClass while creating StorageSystem needs to be re-phrased
2006060 - ocs-storagecluster-storagesystem details are missing on UI for MCG Only and MCG only in LSO mode deployment types
2006101 - Power off fails for drivers that don't support Soft power off
2006243 - Metal IPI upgrade jobs are running out of disk space
2006291 - bootstrapProvisioningIP set incorrectly when provisioningNetworkCIDR doesn't use the 0th address
2006308 - Backing Store YAML tab on click displays a blank screen on UI
2006325 - Multicast is broken across nodes
2006329 - Console only allows Web Terminal Operator to be installed in OpenShift Operators
2006364 - IBM Cloud: Set resourceGroupId for resourceGroups, not simply resource
2006561 - [sig-instrumentation] Prometheus when installed on the cluster shouldn't have failing rules evaluation [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2006690 - OS boot failure "x64 Exception Type 06 - Invalid Opcode Exception"
2006714 - add retry for etcd errors in kube-apiserver
2006767 - KubePodCrashLooping may not fire
2006803 - Set CoreDNS cache entries for forwarded zones
2006861 - Add Sprint 207 part 2 translations
2006945 - race condition can cause crashlooping bootstrap kube-apiserver in cluster-bootstrap
2006947 - e2e-aws-proxy for 4.10 is permafailing with samples operator errors
2006975 - clusteroperator/etcd status condition should not change reasons frequently due to EtcdEndpointsDegraded
2007085 - Intermittent failure mounting /run/media/iso when booting live ISO from USB stick
2007136 - Creation of BackingStore, BucketClass, NamespaceStore fails
2007271 - CI Integration for Knative test cases
2007289 - kubevirt tests are failing in CI
2007322 - Devfile/Dockerfile import does not work for unsupported git host
2007328 - Updated patternfly to v4.125.3 and pf.quickstarts to v1.2.3.
2007379 - Events are not generated for master offset for ordinary clock
2007443 - [ICNI 2.0] Loadbalancer pods do not establish BFD sessions with all workers that host pods for the routed namespace
2007455 - cluster-etcd-operator: render command should fail if machineCidr contains reserved address
2007495 - Large label value for the metric kubelet_started_pods_errors_total with label message when there is a error
2007522 - No new local-storage-operator-metadata-container is build for 4.10
2007551 - No new ose-aws-efs-csi-driver-operator-bundle-container is build for 4.10
2007580 - Azure cilium installs are failing e2e tests
2007581 - Too many haproxy processes in default-router pod causing high load average after upgrade from v4.8.3 to v4.8.10
2007677 - Regression: core container io performance metrics are missing for pod, qos, and system slices on nodes
2007692 - 4.9 "old-rhcos" jobs are permafailing with storage test failures
2007710 - ci/prow/e2e-agnostic-cmd job is failing on prow
2007757 - must-gather extracts imagestreams in the "openshift" namespace, but not Templates
2007802 - AWS machine actuator get stuck if machine is completely missing
2008096 - TestAWSFinalizerDeleteS3Bucket sometimes fails to teardown operator
2008119 - The serviceAccountIssuer field on Authentication CR is reseted to “” when installation process
2008151 - Topology breaks on clicking in empty state
2008185 - Console operator go.mod should use go 1.16.version
2008201 - openstack-az job is failing on haproxy idle test
2008207 - vsphere CSI driver doesn't set resource limits
2008223 - gather_audit_logs: fix oc command line to get the current audit profile
2008235 - The Save button in the Edit DC form remains disabled
2008256 - Update Internationalization README with scope info
2008321 - Add correct documentation link for MON_DISK_LOW
2008462 - Disable PodSecurity feature gate for 4.10
2008490 - Backing store details page does not contain all the kebab actions.
2008521 - gcp-hostname service should correct invalid search entries in resolv.conf
2008532 - CreateContainerConfigError:: failed to prepare subPath for volumeMount
2008539 - Registry doesn't fall back to secondary ImageContentSourcePolicy Mirror
2008540 - HighlyAvailableWorkloadIncorrectlySpread always fires on upgrade on cluster with two workers
2008599 - Azure Stack UPI does not have Internal Load Balancer
2008612 - Plugin asset proxy does not pass through browser cache headers
2008712 - VPA webhook timeout prevents all pods from starting
2008733 - kube-scheduler: exposed /debug/pprof port
2008911 - Prometheus repeatedly scaling prometheus-operator replica set
2008926 - [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources [Serial] [Suite:openshift/conformance/serial]
2008987 - OpenShift SDN Hosted Egress IP's are not being scheduled to nodes after upgrade to 4.8.12
2009055 - Instances of OCS to be replaced with ODF on UI
2009078 - NetworkPodsCrashLooping alerts in upgrade CI jobs
2009083 - opm blocks pruning of existing bundles during add
2009111 - [IPI-on-GCP] 'Install a cluster with nested virtualization enabled' failed due to unable to launch compute instances
2009131 - [e2e][automation] add more test about vmi
2009148 - [e2e][automation] test vm nic presets and options
2009233 - ACM policy object generated by PolicyGen conflicting with OLM Operator
2009253 - [BM] [IPI] [DualStack] apiVIP and ingressVIP should be of the same primary IP family
2009298 - Service created for VM SSH access is not owned by the VM and thus is not deleted if the VM is deleted
2009384 - UI changes to support BindableKinds CRD changes
2009404 - ovnkube-node pod enters CrashLoopBackOff after OVN_IMAGE is swapped
2009424 - Deployment upgrade is failing availability check
2009454 - Change web terminal subscription permissions from get to list
2009465 - container-selinux should come from rhel8-appstream
2009514 - Bump OVS to 2.16-15
2009555 - Supermicro X11 system not booting from vMedia with AI
2009623 - Console: Observe > Metrics page: Table pagination menu shows bullet points
2009664 - Git Import: Edit of knative service doesn't work as expected for git import flow
2009699 - Failure to validate flavor RAM
2009754 - Footer is not sticky anymore in import forms
2009785 - CRI-O's version file should be pinned by MCO
2009791 - Installer: ibmcloud ignores install-config values
2009823 - [sig-arch] events should not repeat pathologically - reason/VSphereOlderVersionDetected Marking cluster un-upgradeable because one or more VMs are on hardware version vmx-13
2009840 - cannot build extensions on aarch64 because of unavailability of rhel-8-advanced-virt repo
2009859 - Large number of sessions created by vmware-vsphere-csi-driver-operator during e2e tests
2009873 - Stale Logical Router Policies and Annotations for a given node
2009879 - There should be test-suite coverage to ensure admin-acks work as expected
2009888 - SRO package name collision between official and community version
2010073 - uninstalling and then reinstalling sriov-network-operator is not working
2010174 - 2 PVs get created unexpectedly with different paths that actually refer to the same device on the node.
2010181 - Environment variables not getting reset on reload on deployment edit form
2010310 - [sig-instrumentation][Late] OpenShift alerting rules should have description and summary annotations [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2010341 - OpenShift Alerting Rules Style-Guide Compliance
2010342 - Local console builds can have out of memory errors
2010345 - OpenShift Alerting Rules Style-Guide Compliance
2010348 - Reverts PIE build mode for K8S components
2010352 - OpenShift Alerting Rules Style-Guide Compliance
2010354 - OpenShift Alerting Rules Style-Guide Compliance
2010359 - OpenShift Alerting Rules Style-Guide Compliance
2010368 - OpenShift Alerting Rules Style-Guide Compliance
2010376 - OpenShift Alerting Rules Style-Guide Compliance
2010662 - Cluster is unhealthy after image-registry-operator tests
2010663 - OpenShift Alerting Rules Style-Guide Compliance (ovn-kubernetes subcomponent)
2010665 - Bootkube tries to use oc after cluster bootstrap is done and there is no API
2010698 - [BM] [IPI] [Dual Stack] Installer must ensure ipv6 short forms too if clusterprovisioning IP is specified as ipv6 address
2010719 - etcdHighNumberOfFailedGRPCRequests runbook is missing
2010864 - Failure building EFS operator
2010910 - ptp worker events unable to identify interface for multiple interfaces
2010911 - RenderOperatingSystem() returns wrong OS version on OCP 4.7.24
2010921 - Azure Stack Hub does not handle additionalTrustBundle
2010931 - SRO CSV uses non default category "Drivers and plugins"
2010946 - concurrent CRD from ovirt-csi-driver-operator gets reconciled by CVO after deployment, changing CR as well.
2011038 - optional operator conditions are confusing
2011063 - CVE-2021-39226 grafana: Snapshot authentication bypass
2011171 - diskmaker-manager constantly redeployed by LSO when creating LV's
2011293 - Build pod are not pulling images if we are not explicitly giving the registry name with the image
2011368 - Tooltip in pipeline visualization shows misleading data
2011386 - [sig-arch] Check if alerts are firing during or after upgrade success --- alert KubePodNotReady fired for 60 seconds with labels
2011411 - Managed Service's Cluster overview page contains link to missing Storage dashboards
2011443 - Cypress tests assuming Admin Perspective could fail on shared/reference cluster
2011513 - Kubelet rejects pods that use resources that should be freed by completed pods
2011668 - Machine stuck in deleting phase in VMware "reconciler failed to Delete machine"
2011693 - (release-4.10) "insightsclient_request_recvreport_total" metric is always incremented
2011698 - After upgrading cluster to 4.8 the kube-state-metrics service doesn't export namespace labels anymore
2011733 - Repository README points to broken documentarion link
2011753 - Ironic resumes clean before raid configuration job is actually completed
2011809 - The nodes page in the openshift console doesn't work. You just get a blank page
2011822 - Obfuscation doesn't work at clusters with OVN
2011882 - SRO helm charts not synced with templates
2011893 - Validation: BMC driver ipmi is not supported for secure UEFI boot
2011896 - [4.10] ClusterVersion Upgradeable=False MultipleReasons should include all messages
2011903 - vsphere-problem-detector: session leak
2011927 - OLM should allow users to specify a proxy for GRPC connections
2011956 - [tracker] Kubelet rejects pods that use resources that should be freed by completed pods
2011960 - [tracker] Storage operator is not available after reboot cluster instances
2011971 - ICNI2 pods are stuck in ContainerCreating state
2011972 - Ingress operator not creating wildcard route for hypershift clusters
2011977 - SRO bundle references non-existent image
2012069 - Refactoring Status controller
2012177 - [OCP 4.9 + OCS 4.8.3] Overview tab is missing under Storage after successful deployment on UI
2012228 - ibmcloud: credentialsrequests invalid for machine-api-operator: resource-group
2012233 - [IBMCLOUD] IPI: "Exceeded limit of remote rules per security group (the limit is 5 remote rules per security group)"
2012235 - [IBMCLOUD] IPI: IBM cloud provider requires ResourceGroupName in cloudproviderconfig
2012317 - Dynamic Plugins: ListPageCreateDropdown items cut off
2012407 - [e2e][automation] improve vm tab console tests
2012426 - ThanosSidecarBucketOperationsFailed/ThanosSidecarUnhealthy alerts don't have namespace label
2012562 - migration condition is not detected in list view
2012770 - when using expression metric openshift_apps_deploymentconfigs_last_failed_rollout_time namespace label is re-written
2012780 - The port 50936 used by haproxy is occupied by kube-apiserver
2012838 - Setting the default maximum container root partition size for Overlay with CRI-O stop working
2012902 - Neutron Ports assigned to Completed Pods are not reused Edit
2012915 - kube_persistentvolumeclaim_labels and kube_persistentvolume_labels are missing in OCP 4.8 monitoring stack
2012971 - Disable operands deletes
2013034 - Cannot install to openshift-nmstate namespace
2013127 - OperatorHub links could not be opened in a new tabs (sharing and open a deep link works fine)
2013199 - post reboot of node SRIOV policy taking huge time
2013203 - UI breaks when trying to create block pool before storage cluster/system creation
2013222 - Full breakage for nightly payload promotion
2013273 - Nil pointer exception when phc2sys options are missing
2013321 - TuneD: high CPU utilization of the TuneD daemon.
2013416 - Multiple assets emit different content to the same filename
2013431 - Application selector dropdown has incorrect font-size and positioning
2013528 - mapi_current_pending_csr is always set to 1 on OpenShift Container Platform 4.8
2013545 - Service binding created outside topology is not visible
2013599 - Scorecard support storage is not included in ocp4.9
2013632 - Correction/Changes in Quick Start Guides for ODF 4.9 (Install ODF guide)
2013646 - fsync controller will show false positive if gaps in metrics are observed.
2013710 - ZTP Operator subscriptions for 4.9 release branch should point to 4.9 by default
2013751 - Service details page is showing wrong in-cluster hostname
2013787 - There are two titles 'Network Attachment Definition Details' on NAD details page
2013871 - Resource table headings are not aligned with their column data
2013895 - Cannot enable accelerated network via MachineSets on Azure
2013920 - "--collector.filesystem.ignored-mount-points is DEPRECATED and will be removed in 2.0.0, use --collector.filesystem.mount-points-exclude"
2013930 - Create Buttons enabled for Bucket Class, Backingstore and Namespace Store in the absence of Storagesystem(or MCG)
2013969 - oVirt CSI driver fails on creating PVCs on hosted engine storage domain
2013990 - Observe dashboard crashes on reload when perspective has changed (in another tab)
2013996 - Project detail page: Action "Delete Project" does nothing for the default project
2014071 - Payload imagestream new tags not properly updated during cluster upgrade
2014153 - SRIOV exclusive pooling
2014202 - [OCP-4.8.10] OVN-Kubernetes: service IP is not responding when egressIP set to the namespace
2014238 - AWS console test is failing on importing duplicate YAML definitions
2014245 - Several aria-labels, external links, and labels aren't internationalized
2014248 - Several files aren't internationalized
2014352 - Could not filter out machine by using node name on machines page
2014464 - Unexpected spacing/padding below navigation groups in developer perspective
2014471 - Helm Release notes tab is not automatically open after installing a chart for other languages
2014486 - Integration Tests: OLM single namespace operator tests failing
2014488 - Custom operator cannot change orders of condition tables
2014497 - Regex slows down different forms and creates too many recursion errors in the log
2014538 - Kuryr controller crash looping on self._get_vip_port(loadbalancer).id 'NoneType' object has no attribute 'id'
2014614 - Metrics scraping requests should be assigned to exempt priority level
2014710 - TestIngressStatus test is broken on Azure
2014954 - The prometheus-k8s-{0,1} pods are CrashLoopBackoff repeatedly
2014995 - oc adm must-gather cannot gather audit logs with 'None' audit profile
2015115 - [RFE] PCI passthrough
2015133 - [IBMCLOUD] ServiceID API key credentials seems to be insufficient for ccoctl '--resource-group-name' parameter
2015154 - Support ports defined networks and primarySubnet
2015274 - Yarn dev fails after updates to dynamic plugin JSON schema logic
2015337 - 4.9.0 GA MetalLB operator image references need to be adjusted to match production
2015386 - Possibility to add labels to the built-in OCP alerts
2015395 - Table head on Affinity Rules modal is not fully expanded
2015416 - CI implementation for Topology plugin
2015418 - Project Filesystem query returns No datapoints found
2015420 - No vm resource in project view's inventory
2015422 - No conflict checking on snapshot name
2015472 - Form and YAML view switch button should have distinguishable status
2015481 - [4.10] sriov-network-operator daemon pods are failing to start
2015493 - Cloud Controller Manager Operator does not respect 'additionalTrustBundle' setting
2015496 - Storage - PersistentVolumes : Claim colum value 'No Claim' in English
2015498 - [UI] Add capacity when not applicable (for MCG only deployment and External mode cluster) fails to pass any info. to user and tries to just load a blank screen on 'Add Capacity' button click
2015506 - Home - Search - Resources - APIRequestCount : hard to select an item from ellipsis menu
2015515 - Kubelet checks all providers even if one is configured: NoCredentialProviders: no valid providers in chain.
2015535 - Administration - ResourceQuotas - ResourceQuota details: Inside Pie chart 'x% used' is in English
2015549 - Observe - Metrics: Column heading and pagination text is in English
2015557 - Workloads - DeploymentConfigs : Error message is in English
2015568 - Compute - Nodes : CPU column's values are in English
2015635 - Storage operator fails causing installation to fail on ASH
2015660 - "Finishing boot source customization" screen should not use term "patched"
2015793 - [hypershift] The collect-profiles job's pods should run on the control-plane node
2015806 - Metrics view in Deployment reports "Forbidden" when not cluster-admin
2015819 - Conmon sandbox processes run on non-reserved CPUs with workload partitioning
2015837 - OS_CLOUD overwrites install-config's platform.openstack.cloud
2015950 - update from 4.7.22 to 4.8.11 is failing due to large amount of secrets to watch
2015952 - RH CodeReady Workspaces Operator in e2e testing will soon fail
2016004 - [RFE] RHCOS: help determining whether a user-provided image was already booted (Ignition provisioning already performed)
2016008 - [4.10] Bootimage bump tracker
2016052 - No e2e CI presubmit configured for release component azure-file-csi-driver
2016053 - No e2e CI presubmit configured for release component azure-file-csi-driver-operator
2016054 - No e2e CI presubmit configured for release component cluster-autoscaler
2016055 - No e2e CI presubmit configured for release component console
2016058 - openshift-sync does not synchronise in "ose-jenkins:v4.8"
2016064 - No e2e CI presubmit configured for release component ibm-cloud-controller-manager
2016065 - No e2e CI presubmit configured for release component ibmcloud-machine-controllers
2016175 - Pods get stuck in ContainerCreating state when attaching volumes fails on SNO clusters.
2016179 - Add Sprint 208 translations
2016228 - Collect Profiles pprof secret is hardcoded to openshift-operator-lifecycle-manager
2016235 - should update to 7.5.11 for grafana resources version label
2016296 - Openshift virtualization : Create Windows Server 2019 VM using template : Fails
2016334 - shiftstack: SRIOV nic reported as not supported
2016352 - Some pods start before CA resources are present
2016367 - Empty task box is getting created for a pipeline without finally task
2016435 - Duplicate AlertmanagerClusterFailedToSendAlerts alerts
2016438 - Feature flag gating is missing in few extensions contributed via knative plugin
2016442 - OCPonRHV: pvc should be in Bound state and without error when choosing default sc
2016446 - [OVN-Kubernetes] Egress Networkpolicy is failing Intermittently for statefulsets
2016453 - Complete i18n for GaugeChart defaults
2016479 - iface-id-ver is not getting updated for existing lsp
2016925 - Dashboards with All filter, change to a specific value and change back to All, data will disappear
2016951 - dynamic actions list is not disabling "open console" for stopped vms
2016955 - m5.large instance type for bootstrap node is hardcoded causing deployments to fail if instance type is not available
2016988 - NTO does not set io_timeout and max_retries for AWS Nitro instances
2017016 - [REF] Virtualization menu
2017036 - [sig-network-edge][Feature:Idling] Unidling should handle many TCP connections fails in periodic-ci-openshift-release-master-ci-4.9-e2e-openstack-ovn
2017050 - Dynamic Plugins: Shared modules loaded multiple times, breaking use of PatternFly
2017130 - t is not a function error navigating to details page
2017141 - Project dropdown has a dynamic inline width added which can cause min-width issue
2017244 - ovirt csi operator static files creation is in the wrong order
2017276 - [4.10] Volume mounts not created with the correct security context
2017327 - Running opm index prune fails with error removing operator package cic-operator: FOREIGN KEY constraint failed.
2017427 - NTO does not restart TuneD daemon when profile application is taking too long
2017535 - Broken Argo CD link image on GitOps Details Page
2017547 - Siteconfig application sync fails with The AgentClusterInstall is invalid: spec.provisionRequirements.controlPlaneAgents: Required value when updating images references
2017564 - On-prem prepender dispatcher script overwrites DNS search settings
2017565 - CCMO does not handle additionalTrustBundle on Azure Stack
2017566 - MetalLB: Web Console -Create Address pool form shows address pool name twice
2017606 - [e2e][automation] add test to verify send key for VNC console
2017650 - [OVN]EgressFirewall cannot be applied correctly if cluster has windows nodes
2017656 - VM IP address is "undefined" under VM details -> ssh field
2017663 - SSH password authentication is disabled when public key is not supplied
2017680 - [gcp] Couldn’t enable support for instances with GPUs on GCP
2017732 - [KMS] Prevent creation of encryption enabled storageclass without KMS connection set
2017752 - (release-4.10) obfuscate identity provider attributes in collected authentication.operator.openshift.io resource
2017756 - overlaySize setting on containerruntimeconfig is ignored due to cri-o defaults
2017761 - [e2e][automation] dummy bug for 4.9 test dependency
2017872 - Add Sprint 209 translations
2017874 - The installer is incorrectly checking the quota for X instances instead of G and VT instances
2017879 - Add Chinese translation for "alternate"
2017882 - multus: add handling of pod UIDs passed from runtime
2017909 - [ICNI 2.0] ovnkube-masters stop processing add/del events for pods
2018042 - HorizontalPodAutoscaler CPU averageValue did not show up in HPA metrics GUI
2018093 - Managed cluster should ensure control plane pods do not run in best-effort QoS
2018094 - the tooltip length is limited
2018152 - CNI pod is not restarted when It cannot start servers due to ports being used
2018208 - e2e-metal-ipi-ovn-ipv6 are failing 75% of the time
2018234 - user settings are saved in local storage instead of on cluster
2018264 - Delete Export button doesn't work in topology sidebar (general issue with unknown CSV?)
2018272 - Deployment managed by link and topology sidebar links to invalid resource page (at least for Exports)
2018275 - Topology graph doesn't show context menu for Export CSV
2018279 - Edit and Delete confirmation modals for managed resource should close when the managed resource is clicked
2018380 - Migrate docs links to access.redhat.com
2018413 - Error: context deadline exceeded, OCP 4.8.9
2018428 - PVC is deleted along with VM even with "Delete Disks" unchecked
2018445 - [e2e][automation] enhance tests for downstream
2018446 - [e2e][automation] move tests to different level
2018449 - [e2e][automation] add test about create/delete network attachment definition
2018490 - [4.10] Image provisioning fails with file name too long
2018495 - Fix typo in internationalization README
2018542 - Kernel upgrade does not reconcile DaemonSet
2018880 - Get 'No datapoints found.' when query metrics about alert rule KubeCPUQuotaOvercommit and KubeMemoryQuotaOvercommit
2018884 - QE - Adapt crw-basic feature file to OCP 4.9/4.10 changes
2018935 - go.sum not updated, that ART extracts version string from, WAS: Missing backport from 4.9 for Kube bump PR#950
2018965 - e2e-metal-ipi-upgrade is permafailing in 4.10
2018985 - The rootdisk size is 15Gi of windows VM in customize wizard
2019001 - AWS: Operator degraded (CredentialsFailing): 1 of 6 credentials requests are failing to sync.
2019096 - Update SRO leader election timeout to support SNO
2019129 - SRO in operator hub points to wrong repo for README
2019181 - Performance profile does not apply
2019198 - ptp offset metrics are not named according to the log output
2019219 - [IBMCLOUD]: cloud-provider-ibm missing IAM permissions in CCCMO CredentialRequest
2019284 - Stop action should not in the action list while VMI is not running
2019346 - zombie processes accumulation and Argument list too long
2019360 - [RFE] Virtualization Overview page
2019452 - Logger object in LSO appends to existing logger recursively
2019591 - Operator install modal body that scrolls has incorrect padding causing shadow position to be incorrect
2019634 - Pause and migration is enabled in action list for a user who has view only permission
2019636 - Actions in VM tabs should be disabled when user has view only permission
2019639 - "Take snapshot" should be disabled while VM image is still being imported
2019645 - Create button is not removed on "Virtual Machines" page for view only user
2019646 - Permission error should pop-up immediately while clicking "Create VM" button on template page for view only user
2019647 - "Remove favorite" and "Create new Template" should be disabled in template action list for view only user
2019717 - can't delete VM with un-owned PVC attached
2019722 - The shared-resource-csi-driver-node pod runs as “BestEffort” qosClass
2019739 - The shared-resource-csi-driver-node uses imagePullPolicy as "Always"
2019744 - [RFE] Suggest users to download newest RHEL 8 version
2019809 - [OVN][Upgrade] After upgrade to 4.7.34 ovnkube-master pods are in CrashLoopBackOff/ContainerCreating and other multiple issues at OVS/OVN level
2019827 - Display issue with top-level menu items running demo plugin
2019832 - 4.10 Nightlies blocked: Failed to upgrade authentication, operator was degraded
2019886 - Kuryr unable to finish ports recovery upon controller restart
2019948 - [RFE] Restructring Virtualization links
2019972 - The Nodes section doesn't display the csr of the nodes that are trying to join the cluster
2019977 - Installer doesn't validate region causing binary to hang with a 60 minute timeout
2019986 - Dynamic demo plugin fails to build
2019992 - instance:node_memory_utilisation:ratio metric is incorrect
2020001 - Update dockerfile for demo dynamic plugin to reflect dir change
2020003 - MCD does not regard "dangling" symlinks as files, attempts to write through them on next backup, resulting in "not writing through dangling symlink" error and degradation.
2020107 - cluster-version-operator: remove runlevel from CVO namespace
2020153 - Creation of Windows high performance VM fails
2020216 - installer: Azure storage container blob where is stored bootstrap.ign file shouldn't be public
2020250 - Replacing deprecated ioutil
2020257 - Dynamic plugin with multiple webpack compilation passes may fail to build
2020275 - ClusterOperators link in console returns blank page during upgrades
2020377 - permissions error while using tcpdump option with must-gather
2020489 - coredns_dns metrics don't include the custom zone metrics data due to CoreDNS prometheus plugin is not defined
2020498 - "Show PromQL" button is disabled
2020625 - [AUTH-52] User fails to login from web console with keycloak OpenID IDP after enable group membership sync feature
2020638 - [4.7] CI conformance test failures related to CustomResourcePublishOpenAPI
2020664 - DOWN subports are not cleaned up
2020904 - When trying to create a connection from the Developer view between VMs, it fails
2021016 - 'Prometheus Stats' of dashboard 'Prometheus Overview' miss data on console compared with Grafana
2021017 - 404 page not found error on knative eventing page
2021031 - QE - Fix the topology CI scripts
2021048 - [RFE] Added MAC Spoof check
2021053 - Metallb operator presented as community operator
2021067 - Extensive number of requests from storage version operator in cluster
2021081 - Missing PolicyGenTemplate for configuring Local Storage Operator LocalVolumes
2021135 - [azure-file-csi-driver] "make unit-test" returns non-zero code, but tests pass
2021141 - Cluster should allow a fast rollout of kube-apiserver is failing on single node
2021151 - Sometimes the DU node does not get the performance profile configuration applied and MachineConfigPool stays stuck in Updating
2021152 - imagePullPolicy is "Always" for ptp operator images
2021191 - Project admins should be able to list available network attachment definitions
2021205 - Invalid URL in git import form causes validation to not happen on URL change
2021322 - cluster-api-provider-azure should populate purchase plan information
2021337 - Dynamic Plugins: ResourceLink doesn't render when passed a groupVersionKind
2021364 - Installer requires invalid AWS permission s3:GetBucketReplication
2021400 - Bump documentationBaseURL to 4.10
2021405 - [e2e][automation] VM creation wizard Cloud Init editor
2021433 - "[sig-builds][Feature:Builds][pullsearch] docker build where the registry is not specified" test fail permanently on disconnected
2021466 - [e2e][automation] Windows guest tool mount
2021544 - OCP 4.6.44 - Ingress VIP assigned as secondary IP in ovs-if-br-ex and added to resolv.conf as nameserver
2021551 - Build is not recognizing the USER group from an s2i image
2021607 - Unable to run openshift-install with a vcenter hostname that begins with a numeric character
2021629 - api request counts for current hour are incorrect
2021632 - [UI] Clicking on odf-operator breadcrumb from StorageCluster details page displays empty page
2021693 - Modals assigned modal-lg class are no longer the correct width
2021724 - Observe > Dashboards: Graph lines are not visible when obscured by other lines
2021731 - CCO occasionally down, reporting networksecurity.googleapis.com API as disabled
2021936 - Kubelet version in RPMs should be using Dockerfile label instead of git tags
2022050 - [BM][IPI] Failed during bootstrap - unable to read client-key /var/lib/kubelet/pki/kubelet-client-current.pem
2022053 - dpdk application with vhost-net is not able to start
2022114 - Console logging every proxy request
2022144 - 1 of 3 ovnkube-master pods stuck in clbo after ipi bm deployment - dualstack (Intermittent)
2022251 - wait interval in case of a failed upload due to 403 is unnecessarily long
2022399 - MON_DISK_LOW troubleshooting guide link when clicked, gives 404 error.
2022447 - ServiceAccount in manifests conflicts with OLM
2022502 - Patternfly tables with a checkbox column are not displaying correctly because of conflicting css rules.
2022509 - getOverrideForManifest does not check manifest.GVK.Group
2022536 - WebScale: duplicate ecmp next hop error caused by multiple of the same gateway IPs in ovnkube cache
2022612 - no namespace field for "Kubernetes / Compute Resources / Namespace (Pods)" admin console dashboard
2022627 - Machine object not picking up external FIP added to an openstack vm
2022646 - configure-ovs.sh failure - Error: unknown connection 'WARN:'
2022707 - Observe / monitoring dashboard shows forbidden errors on Dev Sandbox
2022801 - Add Sprint 210 translations
2022811 - Fix kubelet log rotation file handle leak
2022812 - [SCALE] ovn-kube service controller executes unnecessary load balancer operations
2022824 - Large number of sessions created by vmware-vsphere-csi-driver-operator during e2e tests
2022880 - Pipeline renders with minor visual artifact with certain task dependencies
2022886 - Incorrect URL in operator description
2023042 - CRI-O filters custom runtime allowed annotation when both custom workload and custom runtime sections specified under the config
2023060 - [e2e][automation] Windows VM with CDROM migration
2023077 - [e2e][automation] Home Overview Virtualization status
2023090 - [e2e][automation] Examples of Import URL for VM templates
2023102 - [e2e][automation] Cloudinit disk of VM from custom template
2023216 - ACL for a deleted egressfirewall still present on node join switch
2023228 - Remove Tech preview badge on Trigger components 1.6 OSP on OCP 4.9
2023238 - [sig-devex][Feature:ImageEcosystem][python][Slow] hot deploy for openshift python image Django example should work with hot deploy
2023342 - SCC admission should take ephemeralContainers into account
2023356 - Devfiles can't be loaded in Safari on macOS (403 - Forbidden)
2023434 - Update Azure Machine Spec API to accept Marketplace Images
2023500 - Latency experienced while waiting for volumes to attach to node
2023522 - can't remove package from index: database is locked
2023560 - "Network Attachment Definitions" has no project field on the top in the list view
2023592 - [e2e][automation] add mac spoof check for nad
2023604 - ACL violation when deleting a provisioning-configuration resource
2023607 - console returns blank page when normal user without any projects visit Installed Operators page
2023638 - Downgrade support level for extended control plane integration to Dev Preview
2023657 - inconsistent behaviours of adding ssh key on rhel node between 4.9 and 4.10
2023675 - Changing CNV Namespace
2023779 - Fix Patch 104847 in 4.9
2023781 - initial hardware devices is not loading in wizard
2023832 - CCO updates lastTransitionTime for non-Status changes
2023839 - Bump recommended FCOS to 34.20211031.3.0
2023865 - Console css overrides prevent dynamic plug-in PatternFly tables from displaying correctly
2023950 - make test-e2e-operator on kubernetes-nmstate results in failure to pull image from "registry:5000" repository
2023985 - [4.10] OVN idle service cannot be accessed after upgrade from 4.8
2024055 - External DNS added extra prefix for the TXT record
2024108 - Occasionally node remains in SchedulingDisabled state even after update has been completed successfully
2024190 - e2e-metal UPI is permafailing with inability to find rhcos.json
2024199 - 400 Bad Request error for some queries for the non admin user
2024220 - Cluster monitoring checkbox flickers when installing Operator in all-namespace mode
2024262 - Sample catalog is not displayed when one API call to the backend fails
2024309 - cluster-etcd-operator: defrag controller needs to provide proper observability
2024316 - modal about support displays wrong annotation
2024328 - [oVirt / RHV] PV disks are lost when machine deleted while node is disconnected
2024399 - Extra space is in the translated text of "Add/Remove alternate service" on Create Route page
2024448 - When ssh_authorized_keys is empty in form view it should not appear in yaml view
2024493 - Observe > Alerting > Alerting rules page throws error trying to destructure undefined
2024515 - test-blocker: Ceph-storage-plugin tests failing
2024535 - hotplug disk missing OwnerReference
2024537 - WINDOWS_IMAGE_LINK does not refer to windows cloud image
2024547 - Detail page is breaking for namespace store , backing store and bucket class.
2024551 - KMS resources not getting created for IBM FlashSystem storage
2024586 - Special Resource Operator(SRO) - Empty image in BuildConfig when using RT kernel
2024613 - pod-identity-webhook starts without tls
2024617 - vSphere CSI tests constantly failing with Rollout of the monitoring stack failed and is degraded
2024665 - Bindable services are not shown on topology
2024731 - linuxptp container: unnecessary checking of interfaces
2024750 - i18n some remaining OLM items
2024804 - gcp-pd-csi-driver does not use trusted-ca-bundle when cluster proxy configured
2024826 - [RHOS/IPI] Masters are not joining a clusters when installing on OpenStack
2024841 - test Keycloak with latest tag
2024859 - Not able to deploy an existing image from private image registry using developer console
2024880 - Egress IP breaks when network policies are applied
2024900 - Operator upgrade kube-apiserver
2024932 - console throws "Unauthorized" error after logging out
2024933 - openshift-sync plugin does not sync existing secrets/configMaps on start up
2025093 - Installer does not honour diskformat specified in storage policy and defaults to zeroedthick
2025230 - ClusterAutoscalerUnschedulablePods should not be a warning
2025266 - CreateResource route has exact prop which need to be removed
2025301 - [e2e][automation] VM actions availability in different VM states
2025304 - overwrite storage section of the DV spec instead of the pvc section
2025431 - [RFE]Provide specific windows source link
2025458 - [IPI-AWS] cluster-baremetal-operator pod in a crashloop state after patching from 4.7.21 to 4.7.36
2025464 - [aws] openshift-install gather bootstrap collects logs for bootstrap and only one master node
2025467 - [OVN-K][ETP=local] Host to service backed by ovn pods doesn't work for ExternalTrafficPolicy=local
2025481 - Update VM Snapshots UI
2025488 - [DOCS] Update the doc for nmstate operator installation
2025592 - ODC 4.9 supports invalid devfiles only
2025765 - It should not try to load from storageProfile after unchecking "Apply optimized StorageProfile settings"
2025767 - VMs orphaned during machineset scaleup
2025770 - [e2e] non-priv seems looking for v2v-vmware configMap in ns "kubevirt-hyperconverged" while using customize wizard
2025788 - [IPI on azure]Pre-check on IPI Azure, should check VM Size’s vCPUsAvailable instead of vCPUs for the sku.
2025821 - Make "Network Attachment Definitions" available to regular user
2025823 - The console nav bar ignores plugin separator in existing sections
2025830 - CentOS capitalization is wrong
2025837 - Warn users that the RHEL URLs expire
2025884 - External CCM deploys openstack-cloud-controller-manager from quay.io/openshift/origin-
2025903 - [UI] RoleBindings tab doesn't show correct rolebindings
2026104 - [sig-imageregistry][Feature:ImageAppend] Image append should create images by appending them [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2026178 - OpenShift Alerting Rules Style-Guide Compliance
2026209 - Updating a task fails (Tekton Hub integration)
2026223 - Internal error occurred: failed calling webhook "ptpconfigvalidationwebhook.openshift.io"
2026321 - [UPI on Azure] Shall we remove allowedValue about VMSize in ARM templates
2026343 - [upgrade from 4.5 to 4.6] .status.connectionState.address of catsrc community-operators is not correct
2026352 - Kube-Scheduler revision-pruner fail during install of new cluster
2026374 - aws-pod-identity-webhook go.mod version out of sync with build environment
2026383 - Error when rendering custom Grafana dashboard through ConfigMap
2026387 - node tuning operator metrics endpoint serving old certificates after certificate rotation
2026396 - Cachito Issues: sriov-network-operator Image build failure
2026488 - openshift-controller-manager - delete event is repeating pathologically
2026489 - ThanosRuleRuleEvaluationLatencyHigh alerts when a large number of alerts are defined.
2026560 - Cluster-version operator does not remove unrecognized volume mounts
2026699 - fixed a bug with missing metadata
2026813 - add Mellanox CX-6 Lx DeviceID 101f NIC support in SR-IOV Operator
2026898 - Description/details are missing for Local Storage Operator
2027132 - Use the specific icon for Fedora and CentOS template
2027238 - "Node Exporter / USE Method / Cluster" CPU utilization graph shows incorrect legend
2027272 - KubeMemoryOvercommit alert should be human readable
2027281 - [Azure] External-DNS cannot find the private DNS zone in the resource group
2027288 - Devfile samples can't be loaded after fixing it on Safari (redirect caching issue)
2027299 - The status of checkbox component is not revealed correctly in code
2027311 - K8s watch hooks do not work when fetching core resources
2027342 - Alert ClusterVersionOperatorDown is firing on OpenShift Container Platform after ca certificate rotation
2027363 - The azure-file-csi-driver and azure-file-csi-driver-operator don't use the downstream images
2027387 - [IBMCLOUD] Terraform ibmcloud-provider buffers entirely the qcow2 image causing spikes of 5GB of RAM during installation
2027498 - [IBMCloud] SG Name character length limitation
2027501 - [4.10] Bootimage bump tracker
2027524 - Delete Application doesn't delete Channels or Brokers
2027563 - e2e/add-flow-ci.feature fix accessibility violations
2027585 - CVO crashes when changing spec.upstream to a cincinnati graph which includes invalid conditional edges
2027629 - Gather ValidatingWebhookConfiguration and MutatingWebhookConfiguration resource definitions
2027685 - openshift-cluster-csi-drivers pods crashing on PSI
2027745 - default samplesRegistry prevents the creation of imagestreams when registrySources.allowedRegistries is enforced
2027824 - ovnkube-master CrashLoopBackoff: panic: Expected slice or struct but got string
2027917 - No settings in hostfirmwaresettings and schema objects for masters
2027927 - sandbox creation fails due to obsolete option in /etc/containers/storage.conf
2027982 - nncp stuck at ConfigurationProgressing
2028019 - Max pending serving CSRs allowed in cluster machine approver is not right for UPI clusters
2028024 - After deleting a SpecialResource, the node is still tagged although the driver is removed
2028030 - Panic detected in cluster-image-registry-operator pod
2028042 - Desktop viewer for Windows VM shows "no Service for the RDP (Remote Desktop Protocol) can be found"
2028054 - Cloud controller manager operator can't get leader lease when upgrading from 4.8 up to 4.9
2028106 - [RFE] Use dynamic plugin actions for kubevirt plugin
2028141 - Console tests doesn't pass on Node.js 15 and 16
2028160 - Remove i18nKey in network-policy-peer-selectors.tsx
2028162 - Add Sprint 210 translations
2028170 - Remove leading and trailing whitespace
2028174 - Add Sprint 210 part 2 translations
2028187 - Console build doesn't pass on Node.js 16 because node-sass doesn't support it
2028217 - Cluster-version operator does not default Deployment replicas to one
2028240 - Multiple CatalogSources causing higher CPU use than necessary
2028268 - Password parameters are listed in FirmwareSchema in spite that cannot and shouldn't be set in HostFirmwareSettings
2028325 - disableDrain should be set automatically on SNO
2028484 - AWS EBS CSI driver's livenessprobe does not respect operator's loglevel
2028531 - Missing netFilter to the list of parameters when platform is OpenStack
2028610 - Installer doesn't retry on GCP rate limiting
2028685 - LSO repeatedly reports errors while diskmaker-discovery pod is starting
2028695 - destroy cluster does not prune bootstrap instance profile
2028731 - The containerruntimeconfig controller has wrong assumption regarding the number of containerruntimeconfigs
2028802 - CRI-O panic due to invalid memory address or nil pointer dereference
2028816 - VLAN IDs not released on failures
2028881 - Override not working for the PerformanceProfile template
2028885 - Console should show an error context if it logs an error object
2028949 - Masthead dropdown item hover text color is incorrect
2028963 - Whereabouts should reconcile stranded IP addresses
2029034 - enabling ExternalCloudProvider leads to inoperative cluster
2029178 - Create VM with wizard - page is not displayed
2029181 - Missing CR from PGT
2029273 - wizard is not able to use if project field is "All Projects"
2029369 - Cypress tests github rate limit errors
2029371 - patch pipeline--worker nodes unexpectedly reboot during scale out
2029394 - missing empty text for hardware devices at wizard review
2029414 - Alibaba Disk snapshots with XFS filesystem cannot be used
2029416 - Alibaba Disk CSI driver does not use credentials provided by CCO / ccoctl
2029521 - EFS CSI driver cannot delete volumes under load
2029570 - Azure Stack Hub: CSI Driver does not use user-ca-bundle
2029579 - Clicking on an Application which has a Helm Release in it causes an error
2029644 - New resource FirmwareSchema - reset_required exists for Dell machines and doesn't for HPE
2029645 - Sync upstream 1.15.0 downstream
2029671 - VM action "pause" and "clone" should be disabled while VM disk is still being importing
2029742 - [ovn] Stale lr-policy-list and snat rules left for egressip
2029750 - cvo keep restart due to it fail to get feature gate value during the initial start stage
2029785 - CVO panic when an edge is included in both edges and conditionaledges
2029843 - Downstream ztp-site-generate-rhel8 4.10 container image missing content(/home/ztp)
2030003 - HFS CRD: Attempt to set Integer parameter to not-numeric string value - no error
2030029 - [4.10][goroutine]Namespace stuck terminating: Failed to delete all resource types, 1 remaining: unexpected items still remain in namespace
2030228 - Fix StorageSpec resources field to use correct API
2030229 - Mirroring status card reflect wrong data
2030240 - Hide overview page for non-privileged user
2030305 - Export App job do not completes
2030347 - kube-state-metrics exposes metrics about resource annotations
2030364 - Shared resource CSI driver monitoring is not setup correctly
2030488 - Numerous Azure CI jobs are Failing with Partially Rendered machinesets
2030534 - Node selector/tolerations rules are evaluated too early
2030539 - Prometheus is not highly available
2030556 - Don't display Description or Message fields for alerting rules if those annotations are missing
2030568 - Operator installation fails to parse operatorframework.io/initialization-resource annotation
2030574 - console service uses older "service.alpha.openshift.io" for the service serving certificates.
2030677 - BOND CNI: There is no option to configure MTU on a Bond interface
2030692 - NPE in PipelineJobListener.upsertWorkflowJob
2030801 - CVE-2021-44716 golang: net/http: limit growth of header canonicalization cache
2030806 - CVE-2021-44717 golang: syscall: don't close fd 0 on ForkExec error
2030847 - PerformanceProfile API version should be v2
2030961 - Customizing the OAuth server URL does not apply to upgraded cluster
2031006 - Application name input field is not autofocused when user selects "Create application"
2031012 - Services of type loadbalancer do not work if the traffic reaches the node from an interface different from br-ex
2031040 - Error screen when open topology sidebar for a Serverless / knative service which couldn't be started
2031049 - [vsphere upi] pod machine-config-operator cannot be started due to panic issue
2031057 - Topology sidebar for Knative services shows a small pod ring with "0 undefined" as tooltip
2031060 - Failing CSR Unit test due to expired test certificate
2031085 - ovs-vswitchd running more threads than expected
2031141 - Some pods not able to reach k8s api svc IP 198.223.0.1
2031228 - CVE-2021-43813 grafana: directory traversal vulnerability
2031502 - [RFE] New common templates crash the ui
2031685 - Duplicated forward upstreams should be removed from the dns operator
2031699 - The displayed ipv6 address of a dns upstream should be case sensitive
2031797 - [RFE] Order and text of Boot source type input are wrong
2031826 - CI tests needed to confirm driver-toolkit image contents
2031831 - OCP Console - Global CSS overrides affecting dynamic plugins
2031839 - Starting from Go 1.17 invalid certificates will render a cluster dysfunctional
2031858 - GCP beta-level Role (was: CCO occasionally down, reporting networksecurity.googleapis.com API as disabled)
2031875 - [RFE]: Provide online documentation for the SRO CRD (via oc explain)
2031926 - [ipv6dualstack] After SVC conversion from single stack only to RequireDualStack, cannot curl NodePort from the node itself
2032006 - openshift-gitops-application-controller-0 failed to schedule with sufficient node allocatable resource
2032111 - arm64 cluster, create project and deploy the example deployment, pod is CrashLoopBackOff due to the image is built on linux+amd64
2032141 - open the alertrule link in new tab, got empty page
2032179 - [PROXY] external dns pod cannot reach to cloud API in the cluster behind a proxy
2032296 - Cannot create machine with ephemeral disk on Azure
2032407 - UI will show the default openshift template wizard for HANA template
2032415 - Templates page - remove "support level" badge and add "support level" column which should not be hard coded
2032421 - [RFE] UI integration with automatic updated images
2032516 - Not able to import git repo with .devfile.yaml
2032521 - openshift-installer intermittent failure on AWS with "Error: Provider produced inconsistent result after apply" when creating the aws_vpc_dhcp_options_association resource
2032547 - hardware devices table have filter when table is empty
2032565 - Deploying compressed files with a MachineConfig resource degrades the MachineConfigPool
2032566 - Cluster-ingress-router does not support Azure Stack
2032573 - Adopting enforces deploy_kernel/ramdisk which does not work with deploy_iso
2032589 - DeploymentConfigs ignore resolve-names annotation
2032732 - Fix styling conflicts due to recent console-wide CSS changes
2032831 - Knative Services and Revisions are not shown when Service has no ownerReference
2032851 - Networking is "not available" in Virtualization Overview
2032926 - Machine API components should use K8s 1.23 dependencies
2032994 - AddressPool IP is not allocated to service external IP wtih aggregationLength 24
2032998 - Can not achieve 250 pods/node with OVNKubernetes in a multiple worker node cluster
2033013 - Project dropdown in user preferences page is broken
2033044 - Unable to change import strategy if devfile is invalid
2033098 - Conjunction in ProgressiveListFooter.tsx is not translatable
2033111 - IBM VPC operator library bump removed global CLI args
2033138 - "No model registered for Templates" shows on customize wizard
2033215 - Flaky CI: crud/other-routes.spec.ts fails sometimes with an cypress ace/a11y AssertionError: 1 accessibility violation was detected
2033239 - [IPI on Alibabacloud] 'openshift-install' gets the wrong region (‘cn-hangzhou’) selected
2033257 - unable to use configmap for helm charts
2033271 - [IPI on Alibabacloud] destroying cluster succeeded, but the resource group deletion wasn’t triggered
2033290 - Product builds for console are failing
2033382 - MAPO is missing machine annotations
2033391 - csi-driver-shared-resource-operator sets unused CVO-manifest annotations
2033403 - Devfile catalog does not show provider information
2033404 - Cloud event schema is missing source type and resource field is using wrong value
2033407 - Secure route data is not pre-filled in edit flow form
2033422 - CNO not allowing LGW conversion from SGW in runtime
2033434 - Offer darwin/arm64 oc in clidownloads
2033489 - CCM operator failing on baremetal platform
2033518 - [aws-efs-csi-driver]Should not accept invalid FSType in sc for AWS EFS driver
2033524 - [IPI on Alibabacloud] interactive installer cannot list existing base domains
2033536 - [IPI on Alibabacloud] bootstrap complains invalid value for alibabaCloud.resourceGroupID when updating "cluster-infrastructure-02-config.yml" status, which leads to bootstrap failed and all master nodes NotReady
2033538 - Gather Cost Management Metrics Custom Resource
2033579 - SRO cannot update the special-resource-lifecycle ConfigMap if the data field is undefined
2033587 - Flaky CI test project-dashboard.scenario.ts: Resource Quotas Card was not found on project detail page
2033634 - list-style-type: disc is applied to the modal dropdowns
2033720 - Update samples in 4.10
2033728 - Bump OVS to 2.16.0-33
2033729 - remove runtime request timeout restriction for azure
2033745 - Cluster-version operator makes upstream update service / Cincinnati requests more frequently than intended
2033749 - Azure Stack Terraform fails without Local Provider
2033750 - Local volume should pull multi-arch image for kube-rbac-proxy
2033751 - Bump kubernetes to 1.23
2033752 - make verify fails due to missing yaml-patch
2033784 - set kube-apiserver degraded=true if webhook matches a virtual resource
2034004 - [e2e][automation] add tests for VM snapshot improvements
2034068 - [e2e][automation] Enhance tests for 4.10 downstream
2034087 - [OVN] EgressIP was assigned to the node which is not egress node anymore
2034097 - [OVN] After edit EgressIP object, the status is not correct
2034102 - [OVN] Recreate the deleted EgressIP object got InvalidEgressIP warning
2034129 - blank page returned when clicking 'Get started' button
2034144 - [OVN AWS] ovn-kube egress IP monitoring cannot detect the failure on ovn-k8s-mp0
2034153 - CNO does not verify MTU migration for OpenShiftSDN
2034155 - [OVN-K] [Multiple External Gateways] Per pod SNAT is disabled
2034170 - Use function.knative.dev for Knative Functions related labels
2034190 - unable to add new VirtIO disks to VMs
2034192 - Prometheus fails to insert reporting metrics when the sample limit is met
2034243 - regular user cant load template list
2034245 - installing a cluster on aws, gcp always fails with "Error: Incompatible provider version"
2034248 - GPU/Host device modal is too small
2034257 - regular user Create VM missing permissions alert
2034285 - [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources [Serial] [Suite:openshift/conformance/serial]
2034287 - do not block upgrades if we can't create storageclass in 4.10 in vsphere
2034300 - Du validator policy is NonCompliant after DU configuration completed
2034319 - Negation constraint is not validating packages
2034322 - CNO doesn't pick up settings required when ExternalControlPlane topology
2034350 - The CNO should implement the Whereabouts IP reconciliation cron job
2034362 - update description of disk interface
2034398 - The Whereabouts IPPools CRD should include the podref field
2034409 - Default CatalogSources should be pointing to 4.10 index images
2034410 - Metallb BGP, BFD: prometheus is not scraping the frr metrics
2034413 - cloud-network-config-controller fails to init with secret "cloud-credentials" not found in manual credential mode
2034460 - Summary: cloud-network-config-controller does not account for different environment
2034474 - Template's boot source is "Unknown source" before and after set enableCommonBootImageImport to true
2034477 - [OVN] Multiple EgressIP objects configured, EgressIPs weren't working properly
2034493 - Change cluster version operator log level
2034513 - [OVN] After update one EgressIP in EgressIP object, one internal IP lost from lr-policy-list
2034527 - IPI deployment fails 'timeout reached while inspecting the node' when provisioning network ipv6
2034528 - [IBM VPC] volumeBindingMode should be WaitForFirstConsumer
2034534 - Update ose-machine-api-provider-openstack images to be consistent with ART
2034537 - Update team
2034559 - KubeAPIErrorBudgetBurn firing outside recommended latency thresholds
2034563 - [Azure] create machine with wrong ephemeralStorageLocation value success
2034577 - Current OVN gateway mode should be reflected on node annotation as well
2034621 - context menu not popping up for application group
2034622 - Allow volume expansion by default in vsphere CSI storageclass 4.10
2034624 - Warn about unsupported CSI driver in vsphere operator
2034647 - missing volumes list in snapshot modal
2034648 - Rebase openshift-controller-manager to 1.23
2034650 - Rebase openshift/builder to 1.23
2034705 - vSphere: storage e2e tests logging configuration data
2034743 - EgressIP: assigning the same egress IP to a second EgressIP object after a ovnkube-master restart does not fail.
2034766 - Special Resource Operator(SRO) - no cert-manager pod created in dual stack environment
2034785 - ptpconfig with summary_interval cannot be applied
2034823 - RHEL9 should be starred in template list
2034838 - An external router can inject routes if no service is added
2034839 - Jenkins sync plugin does not synchronize ConfigMap having label role=jenkins-agent
2034879 - Lifecycle hook's name and owner shouldn't be allowed to be empty
2034881 - Cloud providers components should use K8s 1.23 dependencies
2034884 - ART cannot build the image because it tries to download controller-gen
2034889 - oc adm prune deployments does not work
2034898 - Regression in recently added Events feature
2034957 - update openshift-apiserver to kube 1.23.1
2035015 - ClusterLogForwarding CR remains stuck remediating forever
2035093 - openshift-cloud-network-config-controller never runs on Hypershift cluster
2035141 - [RFE] Show GPU/Host devices in template's details tab
2035146 - "kubevirt-plugin~PVC cannot be empty" shows on add-disk modal while adding existing PVC
2035167 - [cloud-network-config-controller] unable to deleted cloudprivateipconfig when deleting
2035199 - IPv6 support in mtu-migration-dispatcher.yaml
2035239 - e2e-metal-ipi-virtualmedia tests are permanently failing
2035250 - Peering with ebgp peer over multi-hops doesn't work
2035264 - [RFE] Provide a proper message for nonpriv user who not able to add PCI devices
2035315 - invalid test cases for AWS passthrough mode
2035318 - Upgrade management workflow needs to allow custom upgrade graph path for disconnected env
2035321 - Add Sprint 211 translations
2035326 - [ExternalCloudProvider] installation with additional network on workers fails
2035328 - Ccoctl does not ignore credentials request manifest marked for deletion
2035333 - Kuryr orphans ports on 504 errors from Neutron
2035348 - Fix two grammar issues in kubevirt-plugin.json strings
2035393 - oc set data --dry-run=server makes persistent changes to configmaps and secrets
2035409 - OLM E2E test depends on operator package that's no longer published
2035439 - SDN Automatic assignment EgressIP on GCP returned node IP adress not egressIP address
2035453 - [IPI on Alibabacloud] 2 worker machines stuck in Failed phase due to connection to 'ecs-cn-hangzhou.aliyuncs.com' timeout, although the specified region is 'us-east-1'
2035454 - [IPI on Alibabacloud] the OSS bucket created during installation for image registry is not deleted after destroying the cluster
2035467 - UI: Queried metrics can't be ordered on Observe->Metrics page
2035494 - [SDN Migration]ovnkube-node pods CrashLoopBackOff after sdn migrated to ovn for RHEL workers
2035515 - [IBMCLOUD] allowVolumeExpansion should be true in storage class
2035602 - [e2e][automation] add tests for Virtualization Overview page cards
2035703 - Roles -> RoleBindings tab doesn't show RoleBindings correctly
2035704 - RoleBindings list page filter doesn't apply
2035705 - Azure 'Destroy cluster' get stuck when the cluster resource group is already not existing.
2035757 - [IPI on Alibabacloud] one master node turned NotReady which leads to installation failed
2035772 - AccessMode and VolumeMode is not reserved for customize wizard
2035847 - Two dashes in the Cronjob / Job pod name
2035859 - the output of opm render doesn't contain olm.constraint which is defined in dependencies.yaml
2035882 - [BIOS setting values] Create events for all invalid settings in spec
2035903 - One redundant capi-operator credential requests in “oc adm extract --credentials-requests”
2035910 - [UI] Manual approval options are missing after ODF 4.10 installation starts when Manual Update approval is chosen
2035927 - Cannot enable HighNodeUtilization scheduler profile
2035933 - volume mode and access mode are empty in customize wizard review tab
2035969 - "ip a " shows "Error: Peer netns reference is invalid" after create test pods
2035986 - Some pods under kube-scheduler/kube-controller-manager are using the deprecated annotation
2036006 - [BIOS setting values] Attempt to set Integer parameter results in preparation error
2036029 - Newly added cloud-network-config operator doesn't support aws sts format credential
2036096 - [azure-file-csi-driver] there are no e2e tests for NFS backend
2036113 - cluster scaling new nodes ovs-configuration fails on all new nodes
2036567 - [csi-driver-nfs] Upstream merge: Bump k8s libraries to 1.23
2036569 - [cloud-provider-openstack] Upstream merge: Bump k8s libraries to 1.23
2036577 - OCP 4.10 nightly builds from 4.10.0-0.nightly-s390x-2021-12-18-034912 to 4.10.0-0.nightly-s390x-2022-01-11-233015 fail to upgrade from OCP 4.9.11 and 4.9.12 for network type OVNKubernetes for zVM hypervisor environments
2036622 - sdn-controller crashes when restarted while a previous egress IP assignment exists
2036717 - Valid AlertmanagerConfig custom resource with valid a mute time interval definition is rejected
2036826 - oc adm prune deployments can prune the RC/RS
2036827 - The ccoctl still accepts CredentialsRequests without ServiceAccounts on GCP platform
2036861 - kube-apiserver is degraded while enable multitenant
2036937 - Command line tools page shows wrong download ODO link
2036940 - oc registry login fails if the file is empty or stdout
2036951 - [cluster-csi-snapshot-controller-operator] proxy settings is being injected in container
2036989 - Route URL copy to clipboard button wraps to a separate line by itself
2036990 - ZTP "DU Done inform policy" never becomes compliant on multi-node clusters
2036993 - Machine API components should use Go lang version 1.17
2037036 - The tuned profile goes into degraded status and ksm.service is displayed in the log.
2037061 - aws and gcp CredentialsRequest manifests missing ServiceAccountNames list for cluster-api
2037073 - Alertmanager container fails to start because of startup probe never being successful
2037075 - Builds do not support CSI volumes
2037167 - Some log level in ibm-vpc-block-csi-controller are hard code
2037168 - IBM-specific Deployment manifest for package-server-manager should be excluded on non-IBM cluster-profiles
2037182 - PingSource badge color is not matched with knativeEventing color
2037203 - "Running VMs" card is too small in Virtualization Overview
2037209 - [IPI on Alibabacloud] worker nodes are put in the default resource group unexpectedly
2037237 - Add "This is a CD-ROM boot source" to customize wizard
2037241 - default TTL for noobaa cache buckets should be 0
2037246 - Cannot customize auto-update boot source
2037276 - [IBMCLOUD] vpc-node-label-updater may fail to label nodes appropriately
2037288 - Remove stale image reference
2037331 - Ensure the ccoctl behaviors are similar between aws and gcp on the existing resources
2037483 - Rbacs for Pods within the CBO should be more restrictive
2037484 - Bump dependencies to k8s 1.23
2037554 - Mismatched wave number error message should include the wave numbers that are in conflict
2037622 - [4.10-Alibaba CSI driver][Restore size for volumesnapshot/volumesnapshotcontent is showing as 0 in Snapshot feature for Alibaba platform]
2037635 - impossible to configure custom certs for default console route in ingress config
2037637 - configure custom certificate for default console route doesn't take effect for OCP >= 4.8
2037638 - Builds do not support CSI volumes as volume sources
2037664 - text formatting issue in Installed Operators list table
2037680 - [IPI on Alibabacloud] sometimes operator 'cloud-controller-manager' tells empty VERSION, due to conflicts on listening tcp :8080
2037689 - [IPI on Alibabacloud] sometimes operator 'cloud-controller-manager' tells empty VERSION, due to conflicts on listening tcp :8080
2037801 - Serverless installation is failing on CI jobs for e2e tests
2037813 - Metal Day 1 Networking - networkConfig Field Only Accepts String Format
2037856 - use lease for leader election
2037891 - 403 Forbidden error shows for all the graphs in each grafana dashboard after upgrade from 4.9 to 4.10
2037903 - Alibaba Cloud: delete-ram-user requires the credentials-requests
2037904 - upgrade operator deployment failed due to memory limit too low for manager container
2038021 - [4.10-Alibaba CSI driver][Default volumesnapshot class is not added/present after successful cluster installation]
2038034 - non-privileged user cannot see auto-update boot source
2038053 - Bump dependencies to k8s 1.23
2038088 - Remove ipa-downloader references
2038160 - The default project missed the annotation : openshift.io/node-selector: ""
2038166 - Starting from Go 1.17 invalid certificates will render a cluster non-functional
2038196 - must-gather is missing collecting some metal3 resources
2038240 - Error when configuring a file using permissions bigger than decimal 511 (octal 0777)
2038253 - Validator Policies are long lived
2038272 - Failures to build a PreprovisioningImage are not reported
2038384 - Azure Default Instance Types are Incorrect
2038389 - Failing test: [sig-arch] events should not repeat pathologically
2038412 - Import page calls the git file list unnecessarily twice from GitHub/GitLab/Bitbucket
2038465 - Upgrade chromedriver to 90.x to support Mac M1 chips
2038481 - kube-controller-manager-guard and openshift-kube-scheduler-guard pods being deleted and restarted on a cordoned node when drained
2038596 - Auto egressIP for OVN cluster on GCP: After egressIP object is deleted, egressIP still takes effect
2038663 - update kubevirt-plugin OWNERS
2038691 - [AUTH-8] Panic on user login when the user belongs to a group in the IdP side and the group already exists via "oc adm groups new"
2038705 - Update ptp reviewers
2038761 - Open Observe->Targets page, wait for a while, page become blank
2038768 - All the filters on the Observe->Targets page can't work
2038772 - Some monitors failed to display on Observe->Targets page
2038793 - [SDN EgressIP] After reboot egress node, the egressip was lost from egress node
2038827 - should add user containers in /etc/subuid and /etc/subgid to support run pods in user namespaces
2038832 - New templates for centos stream8 are missing registry suggestions in create vm wizard
2038840 - [SDN EgressIP]cloud-network-config-controller pod was CrashLoopBackOff after some operation
2038864 - E2E tests fail because multi-hop-net was not created
2038879 - All Builds are getting listed in DeploymentConfig under workloads on OpenShift Console
2038934 - CSI driver operators should use the trusted CA bundle when cluster proxy is configured
2038968 - Move feature gates from a carry patch to openshift/api
2039056 - Layout issue with breadcrumbs on API explorer page
2039057 - Kind column is not wide enough in API explorer page
2039064 - Bulk Import e2e test flaking at a high rate
2039065 - Diagnose and fix Bulk Import e2e test that was previously disabled
2039085 - Cloud credential operator configuration failing to apply in hypershift/ROKS clusters
2039099 - [OVN EgressIP GCP] After reboot egress node, egressip that was previously assigned got lost
2039109 - [FJ OCP4.10 Bug]: startironic.sh failed to pull the image of image-customization container when behind a proxy
2039119 - CVO hotloops on Service openshift-monitoring/cluster-monitoring-operator
2039170 - [upgrade]Error shown on registry operator "missing the cloud-provider-config configmap" after upgrade
2039227 - Improve image customization server parameter passing during installation
2039241 - Improve image customization server parameter passing during installation
2039244 - Helm Release revision history page crashes the UI
2039294 - SDN controller metrics cannot be consumed correctly by prometheus
2039311 - oc Does Not Describe Build CSI Volumes
2039315 - Helm release list page should only fetch secrets for deployed charts
2039321 - SDN controller metrics are not being consumed by prometheus
2039330 - Create NMState button doesn't work in OperatorHub web console
2039339 - cluster-ingress-operator should report Unupgradeable if user has modified the aws resources annotations
2039345 - CNO does not verify the minimum MTU value for IPv6/dual-stack clusters.
2039359 - oc adm prune deployments can't prune the RS where the associated Deployment no longer exists
2039382 - gather_metallb_logs does not have execution permission
2039406 - logout from rest session after vsphere operator sync is finished
2039408 - Add GCP region northamerica-northeast2 to allowed regions
2039414 - Cannot see the weights increased for NodeAffinity, InterPodAffinity, TaintandToleration
2039425 - No need to set KlusterletAddonConfig CR applicationManager->enabled: true in RAN ztp deployment
2039491 - oc - git:// protocol used in unit tests
2039516 - Bump OVN to ovn21.12-21.12.0-25
2039529 - Project Dashboard Resource Quotas Card empty state test flaking at a high rate
2039534 - Diagnose and fix Project Dashboard Resource Quotas Card test that was previously disabled
2039541 - Resolv-prepender script duplicating entries
2039586 - [e2e] update centos8 to centos stream8
2039618 - VM created from SAP HANA template leads to 404 page if leave one network parameter empty
2039619 - [AWS] In tree provisioner storageclass aws disk type should contain 'gp3' and csi provisioner storageclass default aws disk type should be 'gp3'
2039670 - Create PDBs for control plane components
2039678 - Page goes blank when create image pull secret
2039689 - [IPI on Alibabacloud] Pay-by-specification NAT is no longer supported
2039743 - React missing key warning when open operator hub detail page (and maybe others as well)
2039756 - React missing key warning when open KnativeServing details
2039770 - Observe dashboard doesn't react on time-range changes after browser reload when perspective is changed in another tab
2039776 - Observe dashboard shows nothing if the URL links to an non existing dashboard
2039781 - [GSS] OBC is not visible by admin of a Project on Console
2039798 - Contextual binding with Operator backed service creates visual connector instead of Service binding connector
2039868 - Insights Advisor widget is not in the disabled state when the Insights Operator is disabled
2039880 - Log level too low for control plane metrics
2039919 - Add E2E test for router compression feature
2039981 - ZTP for standard clusters installs stalld on master nodes
2040132 - Flag --port has been deprecated, This flag has no effect now and will be removed in v1.24. You can use --secure-port instead
2040136 - external-dns-operator pod keeps restarting and reports error: timed out waiting for cache to be synced
2040143 - [IPI on Alibabacloud] suggest to remove region "cn-nanjing" or provide better error message
2040150 - Update ConfigMap keys for IBM HPCS
2040160 - [IPI on Alibabacloud] installation fails when region does not support pay-by-bandwidth
2040285 - Bump build-machinery-go for console-operator to pickup change in yaml-patch repository
2040357 - bump OVN to ovn-2021-21.12.0-11.el8fdp
2040376 - "unknown instance type" error for supported m6i.xlarge instance
2040394 - Controller: enqueue the failed configmap till services update
2040467 - Cannot build ztp-site-generator container image
2040504 - Change AWS EBS GP3 IOPS in MachineSet doesn't take affect in OpenShift 4
2040521 - RouterCertsDegraded certificate could not validate route hostname v4-0-config-system-custom-router-certs.apps
2040535 - Auto-update boot source is not available in customize wizard
2040540 - ovs hardware offload: ovsargs format error when adding vf netdev name
2040603 - rhel worker scaleup playbook failed because missing some dependency of podman
2040616 - rolebindings page doesn't load for normal users
2040620 - [MAPO] Error pulling MAPO image on installation
2040653 - Topology sidebar warns that another component is updated while rendering
2040655 - User settings update fails when selecting application in topology sidebar
2040661 - Different react warnings about updating state on unmounted components when leaving topology
2040670 - Permafailing CI job: periodic-ci-openshift-release-master-nightly-4.10-e2e-gcp-libvirt-cert-rotation
2040671 - [Feature:IPv6DualStack] most tests are failing in dualstack ipi
2040694 - Three upstream HTTPClientConfig struct fields missing in the operator
2040705 - Du policy for standard cluster runs the PTP daemon on masters and workers
2040710 - cluster-baremetal-operator cannot update BMC subscription CR
2040741 - Add CI test(s) to ensure that metal3 components are deployed in vSphere, OpenStack and None platforms
2040782 - Import YAML page blocks input with more then one generateName attribute
2040783 - The Import from YAML summary page doesn't show the resource name if created via generateName attribute
2040791 - Default PGT policies must be 'inform' to integrate with the Lifecycle Operator
2040793 - Fix snapshot e2e failures
2040880 - do not block upgrades if we can't connect to vcenter
2041087 - MetalLB: MetalLB CR is not upgraded automatically from 4.9 to 4.10
2041093 - autounattend.xml missing
2041204 - link to templates in virtualization-cluster-overview inventory card is to all templates
2041319 - [IPI on Alibabacloud] installation in region "cn-shanghai" failed, due to "Resource alicloud_vswitch CreateVSwitch Failed...InvalidCidrBlock.Overlapped"
2041326 - Should bump cluster-kube-descheduler-operator to kubernetes version V1.23
2041329 - aws and gcp CredentialsRequest manifests missing ServiceAccountNames list for cloud-network-config-controller
2041361 - [IPI on Alibabacloud] Disable session persistence and remove bandwidth peak of listener
2041441 - Provision volume with size 3000Gi even if sizeRange: '[10-2000]GiB' in storageclass on IBM cloud
2041466 - Kubedescheduler version is missing from the operator logs
2041475 - React components should have a (mostly) unique name in react dev tools to simplify code analyses
2041483 - MetallB: quay.io/openshift/origin-kube-rbac-proxy:4.10 deploy Metallb CR is missing (controller and speaker pods)
2041492 - Spacing between resources in inventory card is too small
2041509 - GCP Cloud provider components should use K8s 1.23 dependencies
2041510 - cluster-baremetal-operator doesn't run baremetal-operator's subscription webhook
2041541 - audit: ManagedFields are dropped using API not annotation
2041546 - ovnkube: set election timer at RAFT cluster creation time
2041554 - use lease for leader election
2041581 - KubeDescheduler operator log shows "Use of insecure cipher detected"
2041583 - etcd and api server cpu mask interferes with a guaranteed workload
2041598 - Including CA bundle in Azure Stack cloud config causes MCO failure
2041605 - Dynamic Plugins: discrepancy in proxy alias documentation/implementation
2041620 - bundle CSV alm-examples does not parse
2041641 - Fix inotify leak and kubelet retaining memory
2041671 - Delete templates leads to 404 page
2041694 - [IPI on Alibabacloud] installation fails when region does not support the cloud_essd disk category
2041734 - ovs hwol: VFs are unbind when switchdev mode is enabled
2041750 - [IPI on Alibabacloud] trying "create install-config" with region "cn-wulanchabu (China (Ulanqab))" (or "ap-southeast-6 (Philippines (Manila))", "cn-guangzhou (China (Guangzhou))") failed due to invalid endpoint
2041763 - The Observe > Alerting pages no longer have their default sort order applied
2041830 - CI: ovn-kubernetes-master-e2e-aws-ovn-windows is broken
2041854 - Communities / Local prefs are applied to all the services regardless of the pool, and only one community is applied
2041882 - cloud-network-config operator can't work normal on GCP workload identity cluster
2041888 - Intermittent incorrect build to run correlation, leading to run status updates applied to wrong build, builds stuck in non-terminal phases
2041926 - [IPI on Alibabacloud] Installer ignores public zone when it does not exist
2041971 - [vsphere] Reconciliation of mutating webhooks didn't happen
2041989 - CredentialsRequest manifests being installed for ibm-cloud-managed profile
2041999 - [PROXY] external dns pod cannot recognize custom proxy CA
2042001 - unexpectedly found multiple load balancers
2042029 - kubedescheduler fails to install completely
2042036 - [IBMCLOUD] "openshift-install explain installconfig.platform.ibmcloud" contains not yet supported custom vpc parameters
2042049 - Seeing warning related to unrecognized feature gate in kubescheduler & KCM logs
2042059 - update discovery burst to reflect lots of CRDs on openshift clusters
2042069 - Revert toolbox to rhcos-toolbox
2042169 - Can not delete egressnetworkpolicy in Foreground propagation
2042181 - MetalLB: User should not be allowed add same bgp advertisement twice in BGP address pool
2042265 - [IBM]"--scale-down-utilization-threshold" doesn't work on IBMCloud
2042274 - Storage API should be used when creating a PVC
2042315 - Baremetal IPI deployment with IPv6 control plane and disabled provisioning network fails as the nodes do not pass introspection
2042366 - Lifecycle hooks should be independently managed
2042370 - [IPI on Alibabacloud] installer panics when the zone does not have an enhanced NAT gateway
2042382 - [e2e][automation] CI takes more then 2 hours to run
2042395 - Add prerequisites for active health checks test
2042438 - Missing rpms in openstack-installer image
2042466 - Selection does not happen when switching from Topology Graph to List View
2042493 - No way to verify if IPs with leading zeros are still valid in the apiserver
2042567 - insufficient info on CodeReady Containers configuration
2042600 - Alone, the io.kubernetes.cri-o.Devices option poses a security risk
2042619 - Overview page of the console is broken for hypershift clusters
2042655 - [IPI on Alibabacloud] cluster becomes unusable if there is only one kube-apiserver pod running
2042711 - [IBMCloud] Machine Deletion Hook cannot work on IBMCloud
2042715 - [AliCloud] Machine Deletion Hook cannot work on AliCloud
2042770 - [IPI on Alibabacloud] with vpcID & vswitchIDs specified, the installer would still try creating NAT gateway unexpectedly
2042829 - Topology performance: HPA was fetched for each Deployment (Pod Ring)
2042851 - Create template from SAP HANA template flow - VM is created instead of a new template
2042906 - Edit machineset with same machine deletion hook name succeed
2042960 - azure-file CI fails with "gid(0) in storageClass and pod fsgroup(1000) are not equal"
2043003 - [IPI on Alibabacloud] 'destroy cluster' of a failed installation (bug2041694) stuck after 'stage=Nat gateways'
2043042 - [Serial] [sig-auth][Feature:OAuthServer] [RequestHeaders] [IdP] test RequestHeaders IdP [Suite:openshift/conformance/serial]
2043043 - Cluster Autoscaler should use K8s 1.23 dependencies
2043064 - Topology performance: Unnecessary rerenderings in topology nodes (unchanged mobx props)
2043078 - Favorite system projects not visible in the project selector after toggling "Show default projects".
2043117 - Recommended operators links are erroneously treated as external
2043130 - Update CSI sidecars to the latest release for 4.10
2043234 - Missing validation when creating several BGPPeers with the same peerAddress
2043240 - Sync openshift/descheduler with sigs.k8s.io/descheduler
2043254 - crio does not bind the security profiles directory
2043296 - Ignition fails when reusing existing statically-keyed LUKS volume
2043297 - [4.10] Bootimage bump tracker
2043316 - RHCOS VM fails to boot on Nutanix AOS
2043446 - Rebase aws-efs-utils to the latest upstream version.
2043556 - Add proper ci-operator configuration to ironic and ironic-agent images
2043577 - DPU network operator
2043651 - Fix bug with exp. backoff working correcly when setting nextCheck in vsphere operator
2043675 - Too many machines deleted by cluster autoscaler when scaling down
2043683 - Revert bug 2039344 Ignoring IPv6 addresses against etcd cert validation
2043709 - Logging flags no longer being bound to command line
2043721 - Installer bootstrap hosts using outdated kubelet containing bugs
2043731 - [IBMCloud] terraform outputs missing for ibmcloud bootstrap and worker ips for must-gather
2043759 - Bump cluster-ingress-operator to k8s.io/api 1.23
2043780 - Bump router to k8s.io/api 1.23
2043787 - Bump cluster-dns-operator to k8s.io/api 1.23
2043801 - Bump CoreDNS to k8s.io/api 1.23
2043802 - EgressIP stopped working after single egressIP for a netnamespace is switched to the other node of HA pair after the first egress node is shutdown
2043961 - [OVN-K] If pod creation fails, retry doesn't work as expected.
2044201 - Templates golden image parameters names should be supported
2044244 - Builds are failing after upgrading the cluster with builder image [jboss-webserver-5/jws56-openjdk8-openshift-rhel8]
2044248 - [IBMCloud][vpc.block.csi.ibm.io]Cluster common user use the storageclass without parameter “csi.storage.k8s.io/fstype” create pvc,pod successfully but write data to the pod's volume failed of "Permission denied"
2044303 - [ovn][cloud-network-config-controller] cloudprivateipconfigs ips were left after deleting egressip objects
2044347 - Bump to kubernetes 1.23.3
2044481 - collect sharedresource cluster scoped instances with must-gather
2044496 - Unable to create hardware events subscription - failed to add finalizers
2044628 - CVE-2022-21673 grafana: Forward OAuth Identity Token can allow users to access some data sources
2044680 - Additional libovsdb performance and resource consumption fixes
2044704 - Observe > Alerting pages should not show runbook links in 4.10
2044717 - [e2e] improve tests for upstream test environment
2044724 - Remove namespace column on VM list page when a project is selected
2044745 - Upgrading cluster from 4.9 to 4.10 on Azure (ARO) causes the cloud-network-config-controller pod to CrashLoopBackOff
2044808 - machine-config-daemon-pull.service: use cp instead of cat when extracting MCD in OKD
2045024 - CustomNoUpgrade alerts should be ignored
2045112 - vsphere-problem-detector has missing rbac rules for leases
2045199 - SnapShot with Disk Hot-plug hangs
2045561 - Cluster Autoscaler should use the same default Group value as Cluster API
2045591 - Reconciliation of aws pod identity mutating webhook did not happen
2045849 - Add Sprint 212 translations
2045866 - MCO Operator pod spam "Error creating event" warning messages in 4.10
2045878 - Sync upstream 1.16.0 downstream; includes hybrid helm plugin
2045916 - [IBMCloud] Default machine profile in installer is unreliable
2045927 - [FJ OCP4.10 Bug]: Podman failed to pull the IPA image due to the loss of proxy environment
2046025 - [IPI on Alibabacloud] pre-configured alicloud DNS private zone is deleted after destroying cluster, please clarify
2046137 - oc output for unknown commands is not human readable
2046296 - When creating multiple consecutive egressIPs on GCP not all of them get assigned to the instance
2046297 - Bump DB reconnect timeout
2046517 - In Notification drawer, the "Recommendations" header shows when there isn't any recommendations
2046597 - Observe > Targets page may show the wrong service monitor is multiple monitors have the same namespace & label selectors
2046626 - Allow setting custom metrics for Ansible-based Operators
2046683 - [AliCloud]"--scale-down-utilization-threshold" doesn't work on AliCloud
2047025 - Installation fails because of Alibaba CSI driver operator is degraded
2047190 - Bump Alibaba CSI driver for 4.10
2047238 - When using communities and localpreferences together, only localpreference gets applied
2047255 - alibaba: resourceGroupID not found
2047258 - [aws-usgov] fatal error occurred if AMI is not provided for AWS GovCloud regions
2047317 - Update HELM OWNERS files under Dev Console
2047455 - [IBM Cloud] Update custom image os type
2047496 - Add image digest feature
2047779 - do not degrade cluster if storagepolicy creation fails
2047927 - 'oc get project' caused 'Observed a panic: cannot deep copy core.NamespacePhase' when AllRequestBodies is used
2047929 - use lease for leader election
2047975 - [sig-network][Feature:Router] The HAProxy router should override the route host for overridden domains with a custom value [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2048046 - New route annotation to show another URL or hide topology URL decorator doesn't work for Knative Services
2048048 - Application tab in User Preferences dropdown menus are too wide.
2048050 - Topology list view items are not highlighted on keyboard navigation
2048117 - [IBM]Shouldn't change status.storage.bucket and status.storage.resourceKeyCRN when update sepc.stroage,ibmcos with invalid value
2048413 - Bond CNI: Failed to attach Bond NAD to pod
2048443 - Image registry operator panics when finalizes config deletion
2048478 - [alicloud] CCM deploys alibaba-cloud-controller-manager from quay.io/openshift/origin-*
2048484 - SNO: cluster-policy-controller failed to start due to missing serving-cert/tls.crt
2048598 - Web terminal view is broken
2048836 - ovs-configure mis-detecting the ipv6 status on IPv4 only cluster causing Deployment failure
2048891 - Topology page is crashed
2049003 - 4.10: [IBMCloud] ibm-vpc-block-csi-node does not specify an update strategy, only resource requests, or priority class
2049043 - Cannot create VM from template
2049156 - 'oc get project' caused 'Observed a panic: cannot deep copy core.NamespacePhase' when AllRequestBodies is used
2049886 - Placeholder bug for OCP 4.10.0 metadata release
2049890 - Warning annotation for pods with cpu requests or limits on single-node OpenShift cluster without workload partitioning
2050189 - [aws-efs-csi-driver] Merge upstream changes since v1.3.2
2050190 - [aws-ebs-csi-driver] Merge upstream changes since v1.2.0
2050227 - Installation on PSI fails with: 'openstack platform does not have the required standard-attr-tag network extension'
2050247 - Failing test in periodics: [sig-network] Services should respect internalTrafficPolicy=Local Pod and Node, to Pod (hostNetwork: true) [Feature:ServiceInternalTrafficPolicy] [Skipped:Network/OVNKubernetes] [Suite:openshift/conformance/parallel] [Suite:k8s]
2050250 - Install fails to bootstrap, complaining about DefragControllerDegraded and sad members
2050310 - ContainerCreateError when trying to launch large (>500) numbers of pods across nodes
2050370 - alert data for burn budget needs to be updated to prevent regression
2050393 - ZTP missing support for local image registry and custom machine config
2050557 - Can not push images to image-registry when enabling KMS encryption in AlibabaCloud
2050737 - Remove metrics and events for master port offsets
2050801 - Vsphere upi tries to access vsphere during manifests generation phase
2050883 - Logger object in LSO does not log source location accurately
2051692 - co/image-registry is degrade because ImagePrunerDegraded: Job has reached the specified backoff limit
2052062 - Whereabouts should implement client-go 1.22+
2052125 - [4.10] Crio appears to be coredumping in some scenarios
2052210 - [aws-c2s] kube-apiserver crashloops due to missing cloud config
2052339 - Failing webhooks will block an upgrade to 4.10 mid-way through the upgrade.
2052458 - [IBM Cloud] ibm-vpc-block-csi-controller does not specify an update strategy, priority class, or only resource requests
2052598 - kube-scheduler should use configmap lease
2052599 - kube-controller-manger should use configmap lease
2052600 - Failed to scaleup RHEL machine against OVN cluster due to jq tool is required by configure-ovs.sh
2052609 - [vSphere CSI driver Operator] RWX volumes counts metrics vsphere_rwx_volumes_total not valid
2052611 - MetalLB: BGPPeer object does not have ability to set ebgpMultiHop
2052612 - MetalLB: Webhook Validation: Two BGPPeers instances can have different router ID set.
2052644 - Infinite OAuth redirect loop post-upgrade to 4.10.0-rc.1
2052666 - [4.10.z] change gitmodules to rhcos-4.10 branch
2052756 - [4.10] PVs are not being cleaned up after PVC deletion
2053175 - oc adm catalog mirror throws 'missing signature key' error when using file://local/index
2053218 - ImagePull fails with error "unable to pull manifest from example.com/busy.box:v5 invalid reference format"
2053252 - Sidepanel for Connectors/workloads in topology shows invalid tabs
2053268 - inability to detect static lifecycle failure
2053314 - requestheader IDP test doesn't wait for cleanup, causing high failure rates
2053323 - OpenShift-Ansible BYOH Unit Tests are Broken
2053339 - Remove dev preview badge from IBM FlashSystem deployment windows
2053751 - ztp-site-generate container is missing convenience entrypoint
2053945 - [4.10] Failed to apply sriov policy on intel nics
2054109 - Missing "app" label
2054154 - RoleBinding in project without subject is causing "Project access" page to fail
2054244 - Latest pipeline run should be listed on the top of the pipeline run list
2054288 - console-master-e2e-gcp-console is broken
2054562 - DPU network operator 4.10 branch need to sync with master
2054897 - Unable to deploy hw-event-proxy operator
2055193 - e2e-metal-ipi-serial-ovn-ipv6 is failing frequently
2055358 - Summary Interval Hardcoded in PTP Operator if Set in the Global Body Instead of Command Line
2055371 - Remove Check which enforces summary_interval must match logSyncInterval
2055689 - [ibm]Operator storage PROGRESSING and DEGRADED is true during fresh install for ocp4.11
2055894 - CCO mint mode will not work for Azure after sunsetting of Active Directory Graph API
2056441 - AWS EFS CSI driver should use the trusted CA bundle when cluster proxy is configured
2056479 - ovirt-csi-driver-node pods are crashing intermittently
2056572 - reconcilePrecaching error: cannot list resource "clusterserviceversions" in API group "operators.coreos.com" at the cluster scope"
2056629 - [4.10] EFS CSI driver can't unmount volumes with "wait: no child processes"
2056878 - (dummy bug) ovn-kubernetes ExternalTrafficPolicy still SNATs
2056928 - Ingresscontroller LB scope change behaviour differs for different values of aws-load-balancer-internal annotation
2056948 - post 1.23 rebase: regression in service-load balancer reliability
2057438 - Service Level Agreement (SLA) always show 'Unknown'
2057721 - Fix Proxy support in RHACM 2.4.2
2057724 - Image creation fails when NMstateConfig CR is empty
2058641 - [4.10] Pod density test causing problems when using kube-burner
2059761 - 4.9.23-s390x-machine-os-content manifest invalid when mirroring content for disconnected install
2060610 - Broken access to public images: Unable to connect to the server: no basic auth credentials
2060956 - service domain can't be resolved when networkpolicy is used in OCP 4.10-rc
- References:
https://access.redhat.com/security/cve/CVE-2014-3577 https://access.redhat.com/security/cve/CVE-2016-10228 https://access.redhat.com/security/cve/CVE-2017-14502 https://access.redhat.com/security/cve/CVE-2018-20843 https://access.redhat.com/security/cve/CVE-2018-1000858 https://access.redhat.com/security/cve/CVE-2019-8625 https://access.redhat.com/security/cve/CVE-2019-8710 https://access.redhat.com/security/cve/CVE-2019-8720 https://access.redhat.com/security/cve/CVE-2019-8743 https://access.redhat.com/security/cve/CVE-2019-8764 https://access.redhat.com/security/cve/CVE-2019-8766 https://access.redhat.com/security/cve/CVE-2019-8769 https://access.redhat.com/security/cve/CVE-2019-8771 https://access.redhat.com/security/cve/CVE-2019-8782 https://access.redhat.com/security/cve/CVE-2019-8783 https://access.redhat.com/security/cve/CVE-2019-8808 https://access.redhat.com/security/cve/CVE-2019-8811 https://access.redhat.com/security/cve/CVE-2019-8812 https://access.redhat.com/security/cve/CVE-2019-8813 https://access.redhat.com/security/cve/CVE-2019-8814 https://access.redhat.com/security/cve/CVE-2019-8815 https://access.redhat.com/security/cve/CVE-2019-8816 https://access.redhat.com/security/cve/CVE-2019-8819 https://access.redhat.com/security/cve/CVE-2019-8820 https://access.redhat.com/security/cve/CVE-2019-8823 https://access.redhat.com/security/cve/CVE-2019-8835 https://access.redhat.com/security/cve/CVE-2019-8844 https://access.redhat.com/security/cve/CVE-2019-8846 https://access.redhat.com/security/cve/CVE-2019-9169 https://access.redhat.com/security/cve/CVE-2019-13050 https://access.redhat.com/security/cve/CVE-2019-13627 https://access.redhat.com/security/cve/CVE-2019-14889 https://access.redhat.com/security/cve/CVE-2019-15903 https://access.redhat.com/security/cve/CVE-2019-19906 https://access.redhat.com/security/cve/CVE-2019-20454 https://access.redhat.com/security/cve/CVE-2019-20807 https://access.redhat.com/security/cve/CVE-2019-25013 
https://access.redhat.com/security/cve/CVE-2020-1730 https://access.redhat.com/security/cve/CVE-2020-3862 https://access.redhat.com/security/cve/CVE-2020-3864 https://access.redhat.com/security/cve/CVE-2020-3865 https://access.redhat.com/security/cve/CVE-2020-3867 https://access.redhat.com/security/cve/CVE-2020-3868 https://access.redhat.com/security/cve/CVE-2020-3885 https://access.redhat.com/security/cve/CVE-2020-3894 https://access.redhat.com/security/cve/CVE-2020-3895 https://access.redhat.com/security/cve/CVE-2020-3897 https://access.redhat.com/security/cve/CVE-2020-3899 https://access.redhat.com/security/cve/CVE-2020-3900 https://access.redhat.com/security/cve/CVE-2020-3901 https://access.redhat.com/security/cve/CVE-2020-3902 https://access.redhat.com/security/cve/CVE-2020-8927 https://access.redhat.com/security/cve/CVE-2020-9802 https://access.redhat.com/security/cve/CVE-2020-9803 https://access.redhat.com/security/cve/CVE-2020-9805 https://access.redhat.com/security/cve/CVE-2020-9806 https://access.redhat.com/security/cve/CVE-2020-9807 https://access.redhat.com/security/cve/CVE-2020-9843 https://access.redhat.com/security/cve/CVE-2020-9850 https://access.redhat.com/security/cve/CVE-2020-9862 https://access.redhat.com/security/cve/CVE-2020-9893 https://access.redhat.com/security/cve/CVE-2020-9894 https://access.redhat.com/security/cve/CVE-2020-9895 https://access.redhat.com/security/cve/CVE-2020-9915 https://access.redhat.com/security/cve/CVE-2020-9925 https://access.redhat.com/security/cve/CVE-2020-9952 https://access.redhat.com/security/cve/CVE-2020-10018 https://access.redhat.com/security/cve/CVE-2020-11793 https://access.redhat.com/security/cve/CVE-2020-13434 https://access.redhat.com/security/cve/CVE-2020-14391 https://access.redhat.com/security/cve/CVE-2020-15358 https://access.redhat.com/security/cve/CVE-2020-15503 https://access.redhat.com/security/cve/CVE-2020-25660 https://access.redhat.com/security/cve/CVE-2020-25677 
https://access.redhat.com/security/cve/CVE-2020-27618 https://access.redhat.com/security/cve/CVE-2020-27781 https://access.redhat.com/security/cve/CVE-2020-29361 https://access.redhat.com/security/cve/CVE-2020-29362 https://access.redhat.com/security/cve/CVE-2020-29363 https://access.redhat.com/security/cve/CVE-2021-3121 https://access.redhat.com/security/cve/CVE-2021-3326 https://access.redhat.com/security/cve/CVE-2021-3449 https://access.redhat.com/security/cve/CVE-2021-3450 https://access.redhat.com/security/cve/CVE-2021-3516 https://access.redhat.com/security/cve/CVE-2021-3517 https://access.redhat.com/security/cve/CVE-2021-3518 https://access.redhat.com/security/cve/CVE-2021-3520 https://access.redhat.com/security/cve/CVE-2021-3521 https://access.redhat.com/security/cve/CVE-2021-3537 https://access.redhat.com/security/cve/CVE-2021-3541 https://access.redhat.com/security/cve/CVE-2021-3733 https://access.redhat.com/security/cve/CVE-2021-3749 https://access.redhat.com/security/cve/CVE-2021-20305 https://access.redhat.com/security/cve/CVE-2021-21684 https://access.redhat.com/security/cve/CVE-2021-22946 https://access.redhat.com/security/cve/CVE-2021-22947 https://access.redhat.com/security/cve/CVE-2021-25215 https://access.redhat.com/security/cve/CVE-2021-27218 https://access.redhat.com/security/cve/CVE-2021-30666 https://access.redhat.com/security/cve/CVE-2021-30761 https://access.redhat.com/security/cve/CVE-2021-30762 https://access.redhat.com/security/cve/CVE-2021-33928 https://access.redhat.com/security/cve/CVE-2021-33929 https://access.redhat.com/security/cve/CVE-2021-33930 https://access.redhat.com/security/cve/CVE-2021-33938 https://access.redhat.com/security/cve/CVE-2021-36222 https://access.redhat.com/security/cve/CVE-2021-37750 https://access.redhat.com/security/cve/CVE-2021-39226 https://access.redhat.com/security/cve/CVE-2021-41190 https://access.redhat.com/security/cve/CVE-2021-43813 https://access.redhat.com/security/cve/CVE-2021-44716 
https://access.redhat.com/security/cve/CVE-2021-44717 https://access.redhat.com/security/cve/CVE-2022-0532 https://access.redhat.com/security/cve/CVE-2022-21673 https://access.redhat.com/security/cve/CVE-2022-24407 https://access.redhat.com/security/updates/classification/#moderate
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2022 Red Hat, Inc.
- Description:
iQIVAwUBYipqONzjgjWX9erEAQjQcBAAgWTjA6Q2NgqfVf63ZpJF1jPurZLPqxDL 0in/5+/wqWaiQ6yk7wM3YBZgviyKnAMCVdrLsaR7R77BvfJcTE3W/fzogxpp6Rne eGT1PTgQRecrSIn+WG4gGSteavTULWOIoPvUiNpiy3Y7fFgjFdah+Nyx3Xd+xehM CEswylOd6Hr03KZ1tS3XL3kGL2botha48Yls7FzDFbNcy6TBAuycmQZifKu8mHaF aDAupVJinDnnVgACeS6CnZTAD+Vrx5W7NIisteXv4x5Hy+jBIUHr8Yge3oxYoFnC Y/XmuOw2KilLZuqFe+KHig45qT+FmNU8E1egcGpNWvmS8hGZfiG1jEQAqDPbZHxp sQAQZLQyz3TvXa29vp4QcsUuMxndIOi+QaK75JmqE06MqMIlFDYpr6eQOIgIZvFO RDZU/qvBjh56ypInoqInBf8KOQMy6eO+r6nFbMGcAfucXmz0EVcSP1oFHAoA1nWN rs1Qz/SO4CvdPERxcr1MLuBLggZ6iqGmHKk5IN0SwcndBHaVJ3j/LBv9m7wBYVry bSvojBDYx5ricbTwB5sGzu7oH5yVl813FA9cjkFpEhBiMtTfI+DKC8ssoRYNHd5Z 7gLW6KWPUIDuCIiiioPZAJMyvJ0IMrNDoQ0lhqPeV7PFdlRhT95M/DagUZOpPVuT b5PUYUBIZLc= =GUDA -----END PGP SIGNATURE----- -- RHSA-announce mailing list RHSA-announce@redhat.com https://listman.redhat.com/mailman/listinfo/rhsa-announce . Description:
Red Hat OpenShift Serverless 1.17.0 release of the OpenShift Serverless Operator.
Security Fix(es):
- golang: crypto/tls: certificate of wrong type is causing TLS client to panic (CVE-2021-34558)
- golang: net: lookup functions may return invalid host names (CVE-2021-33195)
- golang: net/http/httputil: ReverseProxy forwards connection headers if first one is empty (CVE-2021-33197)
- golang: math/big.Rat: may cause a panic or an unrecoverable fatal error if passed inputs with very large exponents (CVE-2021-33198)
- golang: encoding/xml: infinite loop when using xml.NewTokenDecoder with a custom TokenReader (CVE-2021-27918)
- golang: net/http: panic in ReadRequest and ReadResponse when reading a very large header (CVE-2021-31525)
- golang: archive/zip: malformed archive may cause panic or memory exhaustion (CVE-2021-33196)
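Of the issues above, the large-header panic (CVE-2021-31525) is triggered when net/http reads an oversized request or response header. Servers built on http.Server already bound header reads via the MaxHeaderBytes field, so keeping that limit at or below its 1 MiB default reduces exposure until patched golang builds are deployed. A minimal defensive sketch (not the upstream fix; the 64 KiB cap and the address are illustrative values):

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Bound how many bytes net/http will read while parsing request
	// headers. This is a mitigation sketch, not the upstream patch:
	// it limits the input that reaches the header parser.
	srv := &http.Server{
		Addr: ":8080", // illustrative listen address
		Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			fmt.Fprintln(w, "ok")
		}),
		MaxHeaderBytes: 1 << 16, // 64 KiB cap on request headers (assumed value)
		ReadTimeout:    5 * time.Second,
	}
	fmt.Println(srv.MaxHeaderBytes) // prints 65536
	// srv.ListenAndServe() would start serving with these limits.
}
```

Raising MaxHeaderBytes well above the default widens the attack surface, so large custom values should be avoided on unpatched builds.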
It was found that CVE-2021-27918, CVE-2021-31525, and CVE-2021-33196 had been incorrectly listed as fixed in the RHSA for Serverless client kn 1.16.0. This has been fixed (CVE-2021-3703). Bugs fixed (https://bugzilla.redhat.com/):
1983596 - CVE-2021-34558 golang: crypto/tls: certificate of wrong type is causing TLS client to panic 1983651 - Release of OpenShift Serverless Serving 1.17.0 1983654 - Release of OpenShift Serverless Eventing 1.17.0 1989564 - CVE-2021-33195 golang: net: lookup functions may return invalid host names 1989570 - CVE-2021-33197 golang: net/http/httputil: ReverseProxy forwards connection headers if first one is empty 1989575 - CVE-2021-33198 golang: math/big.Rat: may cause a panic or an unrecoverable fatal error if passed inputs with very large exponents 1992955 - CVE-2021-3703 serverless: incomplete fix for CVE-2021-27918 / CVE-2021-31525 / CVE-2021-33196
- Description:
Service Telemetry Framework (STF) provides automated collection of measurements and data from remote clients, such as Red Hat OpenStack Platform or third-party nodes. STF then transmits the information to a centralized, receiving Red Hat OpenShift Container Platform (OCP) deployment for storage, retrieval, and monitoring. Dockerfiles and scripts should be amended either to refer to this new image specifically, or to the latest image generally. Bugs fixed (https://bugzilla.redhat.com/):
2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read
Show details on source website{ "@context": { "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#", "affected_products": { "@id": "https://www.variotdbs.pl/ref/affected_products" }, "configurations": { "@id": "https://www.variotdbs.pl/ref/configurations" }, "credits": { "@id": "https://www.variotdbs.pl/ref/credits" }, "cvss": { "@id": "https://www.variotdbs.pl/ref/cvss/" }, "description": { "@id": "https://www.variotdbs.pl/ref/description/" }, "exploit_availability": { "@id": "https://www.variotdbs.pl/ref/exploit_availability/" }, "external_ids": { "@id": "https://www.variotdbs.pl/ref/external_ids/" }, "iot": { "@id": "https://www.variotdbs.pl/ref/iot/" }, "iot_taxonomy": { "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/" }, "patch": { "@id": "https://www.variotdbs.pl/ref/patch/" }, "problemtype_data": { "@id": "https://www.variotdbs.pl/ref/problemtype_data/" }, "references": { "@id": "https://www.variotdbs.pl/ref/references/" }, "sources": { "@id": "https://www.variotdbs.pl/ref/sources/" }, "sources_release_date": { "@id": "https://www.variotdbs.pl/ref/sources_release_date/" }, "sources_update_date": { "@id": "https://www.variotdbs.pl/ref/sources_update_date/" }, "threat_type": { "@id": "https://www.variotdbs.pl/ref/threat_type/" }, "title": { "@id": "https://www.variotdbs.pl/ref/title/" }, "type": { "@id": "https://www.variotdbs.pl/ref/type/" } }, "@id": "https://www.variotdbs.pl/vuln/VAR-202101-0119", "affected_products": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/affected_products#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "model": "glibc", "scope": "lte", "trust": 1.0, "vendor": "gnu", "version": "2.32" }, { "model": "ontap select deploy administration utility", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "500f", "scope": "eq", "trust": 1.0, "vendor": 
"netapp", "version": null }, { "model": "fabric operating system", "scope": "eq", "trust": 1.0, "vendor": "broadcom", "version": null }, { "model": "service processor", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "linux", "scope": "eq", "trust": 1.0, "vendor": "debian", "version": "10.0" }, { "model": "fedora", "scope": "eq", "trust": 1.0, "vendor": "fedoraproject", "version": "32" }, { "model": "a250", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "fedora", "scope": "eq", "trust": 1.0, "vendor": "fedoraproject", "version": "33" }, { "model": "fas/aff baseboard management controller 500f", "scope": null, "trust": 0.8, "vendor": "netapp", "version": null }, { "model": "c library", "scope": null, "trust": 0.8, "vendor": "gnu", "version": null }, { "model": "fedora", "scope": null, "trust": 0.8, "vendor": "fedora", "version": null }, { "model": "service processor", "scope": null, "trust": 0.8, "vendor": "netapp", "version": null }, { "model": "fabric operating system", "scope": null, "trust": 0.8, "vendor": "broadcom", "version": null }, { "model": "ontap select deploy administration utility", "scope": null, "trust": 0.8, "vendor": "netapp", "version": null }, { "model": "fas/aff baseboard management controller a250", "scope": null, "trust": 0.8, "vendor": "netapp", "version": null } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2019-016179" }, { "db": "NVD", "id": "CVE-2019-25013" } ] }, "configurations": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/configurations#", "children": { "@container": "@list" }, "cpe_match": { "@container": "@list" }, "data": { "@container": "@list" }, "nodes": { "@container": "@list" } }, "data": [ { "CVE_data_version": "4.0", "nodes": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:gnu:glibc:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "2.32", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": 
"cpe:2.3:o:fedoraproject:fedora:32:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:33:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:netapp:ontap_select_deploy_administration_utility:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:service_processor:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:broadcom:fabric_operating_system:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:a250_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:a250:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:500f_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:500f:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:debian:debian_linux:10.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" } ] } ], "sources": [ { "db": "NVD", "id": "CVE-2019-25013" } ] }, "credits": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/credits#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Red Hat", "sources": [ { "db": "PACKETSTORM", "id": "162634" }, { "db": "PACKETSTORM", "id": "163267" }, { "db": "PACKETSTORM", "id": "163188" }, { "db": "PACKETSTORM", "id": "163496" }, { "db": "PACKETSTORM", "id": "161254" }, { "db": 
"PACKETSTORM", "id": "166279" }, { "db": "PACKETSTORM", "id": "164192" }, { "db": "PACKETSTORM", "id": "168011" }, { "db": "CNNVD", "id": "CNNVD-202101-048" } ], "trust": 1.4 }, "cve": "CVE-2019-25013", "cvss": { "@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2" }, "cvssV3": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, "severity": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": "https://www.variotdbs.pl/ref/cvss/severity" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "cvssV2": [ { "acInsufInfo": false, "accessComplexity": "MEDIUM", "accessVector": "NETWORK", "authentication": "NONE", "author": "NVD", "availabilityImpact": "COMPLETE", "baseScore": 7.1, "confidentialityImpact": "NONE", "exploitabilityScore": 8.6, "impactScore": 6.9, "integrityImpact": "NONE", "obtainAllPrivilege": false, "obtainOtherPrivilege": false, "obtainUserPrivilege": false, "severity": "HIGH", "trust": 1.0, "userInteractionRequired": false, "vectorString": "AV:N/AC:M/Au:N/C:N/I:N/A:C", "version": "2.0" }, { "acInsufInfo": null, "accessComplexity": "Medium", "accessVector": "Network", "authentication": "None", "author": "NVD", "availabilityImpact": "Complete", "baseScore": 7.1, "confidentialityImpact": "None", "exploitabilityScore": null, "id": "CVE-2019-25013", "impactScore": null, "integrityImpact": "None", "obtainAllPrivilege": null, "obtainOtherPrivilege": null, "obtainUserPrivilege": null, "severity": "High", "trust": 0.9, "userInteractionRequired": null, "vectorString": "AV:N/AC:M/Au:N/C:N/I:N/A:C", "version": "2.0" } ], "cvssV3": [ { "attackComplexity": "HIGH", "attackVector": "NETWORK", 
"author": "NVD", "availabilityImpact": "HIGH", "baseScore": 5.9, "baseSeverity": "MEDIUM", "confidentialityImpact": "NONE", "exploitabilityScore": 2.2, "impactScore": 3.6, "integrityImpact": "NONE", "privilegesRequired": "NONE", "scope": "UNCHANGED", "trust": 1.0, "userInteraction": "NONE", "vectorString": "CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:N/I:N/A:H", "version": "3.1" }, { "attackComplexity": "High", "attackVector": "Network", "author": "NVD", "availabilityImpact": "High", "baseScore": 5.9, "baseSeverity": "Medium", "confidentialityImpact": "None", "exploitabilityScore": null, "id": "CVE-2019-25013", "impactScore": null, "integrityImpact": "None", "privilegesRequired": "None", "scope": "Unchanged", "trust": 0.8, "userInteraction": "None", "vectorString": "CVSS:3.0/AV:N/AC:H/PR:N/UI:N/S:U/C:N/I:N/A:H", "version": "3.0" } ], "severity": [ { "author": "NVD", "id": "CVE-2019-25013", "trust": 1.8, "value": "MEDIUM" }, { "author": "CNNVD", "id": "CNNVD-202101-048", "trust": 0.6, "value": "MEDIUM" }, { "author": "VULMON", "id": "CVE-2019-25013", "trust": 0.1, "value": "HIGH" } ] } ], "sources": [ { "db": "VULMON", "id": "CVE-2019-25013" }, { "db": "JVNDB", "id": "JVNDB-2019-016179" }, { "db": "CNNVD", "id": "CNNVD-202101-048" }, { "db": "NVD", "id": "CVE-2019-25013" } ] }, "description": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "The iconv feature in the GNU C Library (aka glibc or libc6) through 2.32, when processing invalid multi-byte input sequences in the EUC-KR encoding, may have a buffer over-read. 8) - aarch64, ppc64le, s390x, x86_64\n\n3. \n\nAdditional Changes:\n\nFor detailed information on changes in this release, see the Red Hat\nEnterprise Linux 8.4 Release Notes linked from the References section. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1428290 - CVE-2016-10228 glibc: iconv program can hang when invoked with the -c option\n1684057 - CVE-2019-9169 glibc: regular-expression match via proceed_next_node in posix/regexec.c leads to heap-based buffer over-read\n1704868 - CVE-2016-10228 glibc: iconv: Fix converter hangs and front end option parsing for //TRANSLIT and //IGNORE [rhel-8]\n1855790 - glibc: Update Intel CET support from upstream\n1856398 - glibc: Build with -moutline-atomics on aarch64\n1868106 - glibc: Transaction ID collisions cause slow DNS lookups in getaddrinfo\n1871385 - glibc: Improve auditing implementation (including DT_AUDIT, and DT_DEPAUDIT)\n1871387 - glibc: Improve IBM POWER9 architecture performance\n1871394 - glibc: Fix AVX2 off-by-one error in strncmp (swbz#25933)\n1871395 - glibc: Improve IBM Z (s390x) Performance\n1871396 - glibc: Improve use of static TLS surplus for optimizations. Bugs fixed (https://bugzilla.redhat.com/):\n\n1897635 - CVE-2020-28362 golang: math/big: panic during recursive division of very large numbers\n1918750 - CVE-2021-3114 golang: crypto/elliptic: incorrect operations on the P-224 curve\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nTRACING-1725 - Elasticsearch operator reports x509 errors communicating with ElasticSearch in OpenShift Service Mesh project\n\n6. Description:\n\nRed Hat Advanced Cluster Management for Kubernetes 2.2.4 images\n\nRed Hat Advanced Cluster Management for Kubernetes provides the\ncapabilities to address common challenges that administrators and site\nreliability\nengineers face as they work across a range of public and private cloud\nenvironments. \nClusters and applications are all visible and managed from a single\nconsole\u2014with security policy built in. 
See\nthe following Release Notes documentation, which will be updated shortly\nfor\nthis release, for additional details about this release:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_mana\ngement_for_kubernetes/2.2/html/release_notes/\n\nSecurity fixes:\n\n* redisgraph-tls: redis: integer overflow when configurable limit for\nmaximum supported bulk input size is too big on 32-bit platforms\n(CVE-2021-21309)\n\n* console-header-container: nodejs-netmask: improper input validation of\noctal input data (CVE-2021-28092)\n\n* console-container: nodejs-is-svg: ReDoS via malicious string\n(CVE-2021-28918)\n\nBug fixes: \n\n* RHACM 2.2.4 images (BZ# 1957254)\n\n* Enabling observability for OpenShift Container Storage with RHACM 2.2 on\nOCP 4.7 (BZ#1950832)\n\n* ACM Operator should support using the default route TLS (BZ# 1955270)\n\n* The scrolling bar for search filter does not work properly (BZ# 1956852)\n\n* Limits on Length of MultiClusterObservability Resource Name (BZ# 1959426)\n\n* The proxy setup in install-config.yaml is not worked when IPI installing\nwith RHACM (BZ# 1960181)\n\n* Unable to make SSH connection to a Bitbucket server (BZ# 1966513)\n\n* Observability Thanos store shard crashing - cannot unmarshall DNS message\n(BZ# 1967890)\n\n3. Solution:\n\nBefore applying this update, make sure all previously released errata\nrelevant to your system have been applied. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1932634 - CVE-2021-21309 redis: integer overflow when configurable limit for maximum supported bulk input size is too big on 32-bit platforms\n1939103 - CVE-2021-28092 nodejs-is-svg: ReDoS via malicious string\n1944827 - CVE-2021-28918 nodejs-netmask: improper input validation of octal input data\n1950832 - Enabling observability for OpenShift Container Storage with RHACM 2.2 on OCP 4.7\n1952150 - [DDF] It would be great to see all the options available for the bucket configuration and which attributes are mandatory\n1954506 - [DDF] Table does not contain data about 20 clusters. Now it\u0027s difficult to estimate CPU usage with larger clusters\n1954535 - Reinstall Submariner - No endpoints found on one cluster\n1955270 - ACM Operator should support using the default route TLS\n1956852 - The scrolling bar for search filter does not work properly\n1957254 - RHACM 2.2.4 images\n1959426 - Limits on Length of MultiClusterObservability Resource Name\n1960181 - The proxy setup in install-config.yaml is not worked when IPI installing with RHACM. \n1963128 - [DDF] Please rename this to \"Amazon Elastic Kubernetes Service\"\n1966513 - Unable to make SSH connection to a Bitbucket server\n1967357 - [DDF] When I clicked on this yaml, I get a HTTP 404 error. \n1967890 - Observability Thanos store shard crashing - cannot unmarshal DNS message\n\n5. Relevant releases/architectures:\n\nRed Hat Enterprise Linux Client (v. 7) - x86_64\nRed Hat Enterprise Linux Client Optional (v. 7) - x86_64\nRed Hat Enterprise Linux ComputeNode Optional (v. 7) - x86_64\nRed Hat Enterprise Linux Server (v. 7) - ppc64, ppc64le, s390x, x86_64\nRed Hat Enterprise Linux Server Optional (v. 7) - ppc64, ppc64le, s390x, x86_64\nRed Hat Enterprise Linux Workstation (v. 7) - x86_64\nRed Hat Enterprise Linux Workstation Optional (v. 7) - x86_64\n\n3. 
Description:\n\nThe glibc packages provide the standard C libraries (libc), POSIX thread\nlibraries (libpthread), standard math libraries (libm), and the name\nservice cache daemon (nscd) used by multiple programs on the system. \nWithout these libraries, the Linux system cannot function correctly. \n\nBug Fix(es):\n\n* glibc: 64bit_strstr_via_64bit_strstr_sse2_unaligned detection fails with\nlarge device and inode numbers (BZ#1883162)\n\n* glibc: Performance regression in ebizzy benchmark (BZ#1889977)\n\n4. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\nFor the update to take effect, all services linked to the glibc library\nmust be restarted, or the system rebooted. Package List:\n\nRed Hat Enterprise Linux Client (v. 7):\n\nSource:\nglibc-2.17-322.el7_9.src.rpm\n\nx86_64:\nglibc-2.17-322.el7_9.i686.rpm\nglibc-2.17-322.el7_9.x86_64.rpm\nglibc-common-2.17-322.el7_9.x86_64.rpm\nglibc-debuginfo-2.17-322.el7_9.i686.rpm\nglibc-debuginfo-2.17-322.el7_9.x86_64.rpm\nglibc-debuginfo-common-2.17-322.el7_9.i686.rpm\nglibc-debuginfo-common-2.17-322.el7_9.x86_64.rpm\nglibc-devel-2.17-322.el7_9.i686.rpm\nglibc-devel-2.17-322.el7_9.x86_64.rpm\nglibc-headers-2.17-322.el7_9.x86_64.rpm\nglibc-utils-2.17-322.el7_9.x86_64.rpm\nnscd-2.17-322.el7_9.x86_64.rpm\n\nRed Hat Enterprise Linux Client Optional (v. 
7):\n\nSource:\nglibc-2.17-322.el7_9.src.rpm\n\nx86_64:\nglibc-2.17-322.el7_9.i686.rpm\nglibc-2.17-322.el7_9.x86_64.rpm\nglibc-common-2.17-322.el7_9.x86_64.rpm\nglibc-debuginfo-2.17-322.el7_9.i686.rpm\nglibc-debuginfo-2.17-322.el7_9.x86_64.rpm\nglibc-debuginfo-common-2.17-322.el7_9.i686.rpm\nglibc-debuginfo-common-2.17-322.el7_9.x86_64.rpm\nglibc-devel-2.17-322.el7_9.i686.rpm\nglibc-devel-2.17-322.el7_9.x86_64.rpm\nglibc-headers-2.17-322.el7_9.x86_64.rpm\nglibc-utils-2.17-322.el7_9.x86_64.rpm\nnscd-2.17-322.el7_9.x86_64.rpm\n\nRed Hat Enterprise Linux ComputeNode Optional (v. 7):\n\nx86_64:\nglibc-debuginfo-2.17-322.el7_9.i686.rpm\nglibc-debuginfo-2.17-322.el7_9.x86_64.rpm\nglibc-debuginfo-common-2.17-322.el7_9.i686.rpm\nglibc-debuginfo-common-2.17-322.el7_9.x86_64.rpm\nglibc-static-2.17-322.el7_9.i686.rpm\nglibc-static-2.17-322.el7_9.x86_64.rpm\n\nRed Hat Enterprise Linux Server (v. 7):\n\nSource:\nglibc-2.17-322.el7_9.src.rpm\n\nppc64:\nglibc-2.17-322.el7_9.ppc.rpm\nglibc-2.17-322.el7_9.ppc64.rpm\nglibc-common-2.17-322.el7_9.ppc64.rpm\nglibc-debuginfo-2.17-322.el7_9.ppc.rpm\nglibc-debuginfo-2.17-322.el7_9.ppc64.rpm\nglibc-debuginfo-common-2.17-322.el7_9.ppc.rpm\nglibc-debuginfo-common-2.17-322.el7_9.ppc64.rpm\nglibc-devel-2.17-322.el7_9.ppc.rpm\nglibc-devel-2.17-322.el7_9.ppc64.rpm\nglibc-headers-2.17-322.el7_9.ppc64.rpm\nglibc-utils-2.17-322.el7_9.ppc64.rpm\nnscd-2.17-322.el7_9.ppc64.rpm\n\nppc64le:\nglibc-2.17-322.el7_9.ppc64le.rpm\nglibc-common-2.17-322.el7_9.ppc64le.rpm\nglibc-debuginfo-2.17-322.el7_9.ppc64le.rpm\nglibc-debuginfo-common-2.17-322.el7_9.ppc64le.rpm\nglibc-devel-2.17-322.el7_9.ppc64le.rpm\nglibc-headers-2.17-322.el7_9.ppc64le.rpm\nglibc-utils-2.17-322.el7_9.ppc64le.rpm\nnscd-2.17-322.el7_9.ppc64le.rpm\n\ns390x:\nglibc-2.17-322.el7_9.s390.rpm\nglibc-2.17-322.el7_9.s390x.rpm\nglibc-common-2.17-322.el7_9.s390x.rpm\nglibc-debuginfo-2.17-322.el7_9.s390.rpm\nglibc-debuginfo-2.17-322.el7_9.s390x.rpm\nglibc-debuginfo-common-2.17-322.el7_9.s390.rpm\nglibc
-debuginfo-common-2.17-322.el7_9.s390x.rpm\nglibc-devel-2.17-322.el7_9.s390.rpm\nglibc-devel-2.17-322.el7_9.s390x.rpm\nglibc-headers-2.17-322.el7_9.s390x.rpm\nglibc-utils-2.17-322.el7_9.s390x.rpm\nnscd-2.17-322.el7_9.s390x.rpm\n\nx86_64:\nglibc-2.17-322.el7_9.i686.rpm\nglibc-2.17-322.el7_9.x86_64.rpm\nglibc-common-2.17-322.el7_9.x86_64.rpm\nglibc-debuginfo-2.17-322.el7_9.i686.rpm\nglibc-debuginfo-2.17-322.el7_9.x86_64.rpm\nglibc-debuginfo-common-2.17-322.el7_9.i686.rpm\nglibc-debuginfo-common-2.17-322.el7_9.x86_64.rpm\nglibc-devel-2.17-322.el7_9.i686.rpm\nglibc-devel-2.17-322.el7_9.x86_64.rpm\nglibc-headers-2.17-322.el7_9.x86_64.rpm\nglibc-utils-2.17-322.el7_9.x86_64.rpm\nnscd-2.17-322.el7_9.x86_64.rpm\n\nRed Hat Enterprise Linux Server Optional (v. 7):\n\nppc64:\nglibc-debuginfo-2.17-322.el7_9.ppc.rpm\nglibc-debuginfo-2.17-322.el7_9.ppc64.rpm\nglibc-debuginfo-common-2.17-322.el7_9.ppc.rpm\nglibc-debuginfo-common-2.17-322.el7_9.ppc64.rpm\nglibc-static-2.17-322.el7_9.ppc.rpm\nglibc-static-2.17-322.el7_9.ppc64.rpm\n\nppc64le:\nglibc-debuginfo-2.17-322.el7_9.ppc64le.rpm\nglibc-debuginfo-common-2.17-322.el7_9.ppc64le.rpm\nglibc-static-2.17-322.el7_9.ppc64le.rpm\n\ns390x:\nglibc-debuginfo-2.17-322.el7_9.s390.rpm\nglibc-debuginfo-2.17-322.el7_9.s390x.rpm\nglibc-debuginfo-common-2.17-322.el7_9.s390.rpm\nglibc-debuginfo-common-2.17-322.el7_9.s390x.rpm\nglibc-static-2.17-322.el7_9.s390.rpm\nglibc-static-2.17-322.el7_9.s390x.rpm\n\nx86_64:\nglibc-debuginfo-2.17-322.el7_9.i686.rpm\nglibc-debuginfo-2.17-322.el7_9.x86_64.rpm\nglibc-debuginfo-common-2.17-322.el7_9.i686.rpm\nglibc-debuginfo-common-2.17-322.el7_9.x86_64.rpm\nglibc-static-2.17-322.el7_9.i686.rpm\nglibc-static-2.17-322.el7_9.x86_64.rpm\n\nRed Hat Enterprise Linux Workstation (v. 
7):\n\nSource:\nglibc-2.17-322.el7_9.src.rpm\n\nx86_64:\nglibc-2.17-322.el7_9.i686.rpm\nglibc-2.17-322.el7_9.x86_64.rpm\nglibc-common-2.17-322.el7_9.x86_64.rpm\nglibc-debuginfo-2.17-322.el7_9.i686.rpm\nglibc-debuginfo-2.17-322.el7_9.x86_64.rpm\nglibc-debuginfo-common-2.17-322.el7_9.i686.rpm\nglibc-debuginfo-common-2.17-322.el7_9.x86_64.rpm\nglibc-devel-2.17-322.el7_9.i686.rpm\nglibc-devel-2.17-322.el7_9.x86_64.rpm\nglibc-headers-2.17-322.el7_9.x86_64.rpm\nglibc-utils-2.17-322.el7_9.x86_64.rpm\nnscd-2.17-322.el7_9.x86_64.rpm\n\nRed Hat Enterprise Linux Workstation Optional (v. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n=====================================================================\n Red Hat Security Advisory\n\nSynopsis: Moderate: OpenShift Container Platform 4.10.3 security update\nAdvisory ID: RHSA-2022:0056-01\nProduct: Red Hat OpenShift Enterprise\nAdvisory URL: https://access.redhat.com/errata/RHSA-2022:0056\nIssue date: 2022-03-10\nCVE Names: CVE-2014-3577 CVE-2016-10228 CVE-2017-14502 \n CVE-2018-20843 CVE-2018-1000858 CVE-2019-8625 \n CVE-2019-8710 CVE-2019-8720 CVE-2019-8743 \n CVE-2019-8764 CVE-2019-8766 CVE-2019-8769 \n CVE-2019-8771 CVE-2019-8782 CVE-2019-8783 \n CVE-2019-8808 CVE-2019-8811 CVE-2019-8812 \n CVE-2019-8813 CVE-2019-8814 CVE-2019-8815 \n CVE-2019-8816 CVE-2019-8819 CVE-2019-8820 \n CVE-2019-8823 CVE-2019-8835 CVE-2019-8844 \n CVE-2019-8846 CVE-2019-9169 CVE-2019-13050 \n CVE-2019-13627 CVE-2019-14889 CVE-2019-15903 \n CVE-2019-19906 CVE-2019-20454 CVE-2019-20807 \n CVE-2019-25013 CVE-2020-1730 CVE-2020-3862 \n CVE-2020-3864 CVE-2020-3865 CVE-2020-3867 \n CVE-2020-3868 CVE-2020-3885 CVE-2020-3894 \n CVE-2020-3895 CVE-2020-3897 CVE-2020-3899 \n CVE-2020-3900 CVE-2020-3901 CVE-2020-3902 \n CVE-2020-8927 CVE-2020-9802 CVE-2020-9803 \n CVE-2020-9805 CVE-2020-9806 CVE-2020-9807 \n CVE-2020-9843 CVE-2020-9850 
CVE-2020-9862 \n CVE-2020-9893 CVE-2020-9894 CVE-2020-9895 \n CVE-2020-9915 CVE-2020-9925 CVE-2020-9952 \n CVE-2020-10018 CVE-2020-11793 CVE-2020-13434 \n CVE-2020-14391 CVE-2020-15358 CVE-2020-15503 \n CVE-2020-25660 CVE-2020-25677 CVE-2020-27618 \n CVE-2020-27781 CVE-2020-29361 CVE-2020-29362 \n CVE-2020-29363 CVE-2021-3121 CVE-2021-3326 \n CVE-2021-3449 CVE-2021-3450 CVE-2021-3516 \n CVE-2021-3517 CVE-2021-3518 CVE-2021-3520 \n CVE-2021-3521 CVE-2021-3537 CVE-2021-3541 \n CVE-2021-3733 CVE-2021-3749 CVE-2021-20305 \n CVE-2021-21684 CVE-2021-22946 CVE-2021-22947 \n CVE-2021-25215 CVE-2021-27218 CVE-2021-30666 \n CVE-2021-30761 CVE-2021-30762 CVE-2021-33928 \n CVE-2021-33929 CVE-2021-33930 CVE-2021-33938 \n CVE-2021-36222 CVE-2021-37750 CVE-2021-39226 \n CVE-2021-41190 CVE-2021-43813 CVE-2021-44716 \n CVE-2021-44717 CVE-2022-0532 CVE-2022-21673 \n CVE-2022-24407 \n=====================================================================\n\n1. Summary:\n\nRed Hat OpenShift Container Platform release 4.10.3 is now available with\nupdates to packages and images that fix several bugs and add enhancements. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Description:\n\nRed Hat OpenShift Container Platform is Red Hat\u0027s cloud computing\nKubernetes application platform solution designed for on-premise or private\ncloud deployments. \n\nThis advisory contains the container images for Red Hat OpenShift Container\nPlatform 4.10.3. See the following advisory for the RPM packages for this\nrelease:\n\nhttps://access.redhat.com/errata/RHSA-2022:0055\n\nSpace precludes documenting all of the container images in this advisory. 
\nSee the following Release Notes documentation, which will be updated\nshortly for this release, for details about these changes:\n\nhttps://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html\n\nSecurity Fix(es):\n\n* gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index\nvalidation (CVE-2021-3121)\n* grafana: Snapshot authentication bypass (CVE-2021-39226)\n* golang: net/http: limit growth of header canonicalization cache\n(CVE-2021-44716)\n* nodejs-axios: Regular expression denial of service in trim function\n(CVE-2021-3749)\n* golang: syscall: don\u0027t close fd 0 on ForkExec error (CVE-2021-44717)\n* grafana: Forward OAuth Identity Token can allow users to access some data\nsources (CVE-2022-21673)\n* grafana: directory traversal vulnerability (CVE-2021-43813)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\nYou may download the oc tool and use it to inspect release image metadata\nas follows:\n\n(For x86_64 architecture)\n\n$ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.10.3-x86_64\n\nThe image digest is\nsha256:7ffe4cd612be27e355a640e5eec5cd8f923c1400d969fd590f806cffdaabcc56\n\n(For s390x architecture)\n\n $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.10.3-s390x\n\nThe image digest is\nsha256:4cf21a9399da1ce8427246f251ae5dedacfc8c746d2345f9cfe039ed9eda3e69\n\n(For ppc64le architecture)\n\n $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.10.3-ppc64le\n\nThe image digest is\nsha256:4ee571da1edf59dfee4473aa4604aba63c224bf8e6bcf57d048305babbbde93c\n\nAll OpenShift Container Platform 4.10 users are advised to upgrade to these\nupdated packages and images when they are available in the appropriate\nrelease channel. To check for available updates, use the OpenShift Console\nor the CLI oc command. 
Instructions for upgrading a cluster are available\nat\nhttps://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html\n\n3. Solution:\n\nFor OpenShift Container Platform 4.10 see the following documentation,\nwhich will be updated shortly for this release, for moderate instructions\non how to upgrade your cluster and fully apply this asynchronous errata\nupdate:\n\nhttps://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html\n\nDetails on how to access this content are available at\nhttps://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n1808240 - Always return metrics value for pods under the user\u0027s namespace\n1815189 - feature flagged UI does not always become available after operator installation\n1825034 - e2e: Mock CSI tests fail on IBM ROKS clusters\n1826225 - edge terminated h2 (gRPC) connections need a haproxy template change to work correctly\n1860774 - csr for vSphere egress nodes were not approved automatically during cert renewal\n1878106 - token inactivity timeout is not shortened after oauthclient/oauth config values are lowered\n1878925 - \u0027oc adm upgrade --to ...\u0027 rejects versions which occur only in history, while the cluster-version operator supports history fallback\n1880738 - origin e2e test deletes original worker\n1882983 - oVirt csi driver should refuse to provision RWX and ROX PV\n1886450 - Keepalived router id check not documented for RHV/VMware IPI\n1889488 - The metrics endpoint for the Scheduler is not protected by RBAC\n1894431 - Router pods fail to boot if the SSL certificate applied is missing an empty line at the bottom\n1896474 - Path based routing is broken for some combinations\n1897431 - CIDR support for additional network attachment with the bridge CNI plug-in\n1903408 - NodePort externalTrafficPolicy does not work for ovn-kubernetes\n1907433 - Excessive logging in image 
operator\n1909906 - The router fails with PANIC error when stats port already in use\n1911173 - [MSTR-998] Many charts\u0027 legend names show {{}} instead of words\n1914053 - pods assigned with Multus whereabouts IP get stuck in ContainerCreating state after node rebooting. \n1916169 - a reboot while MCO is applying changes leaves the node in undesirable state and MCP looks fine (UPDATED=true)\n1917893 - [ovirt] install fails: due to terraform error \"Cannot attach Virtual Disk: Disk is locked\" on vm resource\n1921627 - GCP UPI installation failed due to exceeding gcp limitation of instance group name\n1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation\n1926522 - oc adm catalog does not clean temporary files\n1927478 - Default CatalogSources deployed by marketplace do not have toleration for tainted nodes. \n1928141 - kube-storage-version-migrator constantly reporting type \"Upgradeable\" status Unknown\n1928285 - [LSO][OCS][arbiter] OCP Console shows no results while in fact underlying setup of LSO localvolumeset and it\u0027s storageclass is not yet finished, confusing users\n1931594 - [sig-cli] oc --request-timeout works as expected fails frequently on s390x\n1933847 - Prometheus goes unavailable (both instances down) during 4.8 upgrade\n1937085 - RHV UPI inventory playbook missing guarantee_memory\n1937196 - [aws ebs csi driver] events for block volume expansion may cause confusion\n1938236 - vsphere-problem-detector does not support overriding log levels via storage CR\n1939401 - missed labels for CMO/openshift-state-metric/telemeter-client/thanos-querier pods\n1939435 - Setting an IPv6 address in noProxy field causes error in openshift installer\n1939552 - [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]\n1942913 - ThanosSidecarUnhealthy isn\u0027t 
resilient to WAL replays. \n1943363 - [ovn] CNO should gracefully terminate ovn-northd\n1945274 - ostree-finalize-staged.service failed while upgrading a rhcos node to 4.6.17\n1948080 - authentication should not set Available=False APIServices_Error with 503s\n1949262 - Prometheus Statefulsets should have 2 replicas and hard affinity set\n1949672 - [GCP] Update 4.8 UPI template to match ignition version: 3.2.0\n1950827 - [LSO] localvolumediscoveryresult name is not friendly to customer\n1952576 - csv_succeeded metric not present in olm-operator for all successful CSVs\n1953264 - \"remote error: tls: bad certificate\" logs in prometheus-operator container\n1955300 - Machine config operator reports unavailable for 23m during upgrade\n1955489 - Alertmanager Statefulsets should have 2 replicas and hard affinity set\n1955490 - Thanos ruler Statefulsets should have 2 replicas and hard affinity set\n1955544 - [IPI][OSP] densed master-only installation with 0 workers fails due to missing worker security group on masters\n1956496 - Needs SR-IOV Docs Upstream\n1956739 - Permission for authorized_keys for core user changes from core user to root when changed the pull secret\n1956776 - [vSphere] Installer should do pre-check to ensure user-provided network name is valid\n1956964 - upload a boot-source to OpenShift virtualization using the console\n1957547 - [RFE]VM name is not auto filled in dev console\n1958349 - ovn-controller doesn\u0027t release the memory after cluster-density run\n1959352 - [scale] failed to get pod annotation: timed out waiting for annotations\n1960378 - icsp allows mirroring of registry root - install-config imageContentSources does not\n1960674 - Broken test: [sig-imageregistry][Serial][Suite:openshift/registry/serial] Image signature workflow can push a signed image to openshift registry and verify it [Suite:openshift/conformance/serial]\n1961317 - storage ClusterOperator does not declare ClusterRoleBindings in relatedObjects\n1961391 - String 
updates\n1961509 - DHCP daemon pod should have CPU and memory requests set but not limits\n1962066 - Edit machine/machineset specs not working\n1962206 - openshift-multus/dhcp-daemon set should meet platform requirements for update strategy that have maxUnavailable update of 10 or 33 percent\n1963053 - `oc whoami --show-console` should show the web console URL, not the server api URL\n1964112 - route SimpleAllocationPlugin: host name validation errors: spec.host: Invalid value: ... must be no more than 63 characters\n1964327 - Support containers with name:tag@digest\n1964789 - Send keys and disconnect does not work for VNC console\n1965368 - ClusterQuotaAdmission received non-meta object - message constantly reported in OpenShift Container Platform 4.7\n1966445 - Unmasking a service doesn\u0027t work if it masked using MCO\n1966477 - Use GA version in KAS/OAS/OauthAS to avoid: \"audit.k8s.io/v1beta1\" is deprecated and will be removed in a future release, use \"audit.k8s.io/v1\" instead\n1966521 - kube-proxy\u0027s userspace implementation consumes excessive CPU\n1968364 - [Azure] when using ssh type ed25519 bootstrap fails to come up\n1970021 - nmstate does not persist its configuration due to overlay systemd-connections-merged mount\n1970218 - MCO writes incorrect file contents if compression field is specified\n1970331 - [sig-auth][Feature:SCC][Early] should not have pod creation failures during install [Suite:openshift/conformance/parallel]\n1970805 - Cannot create build when docker image url contains dir structure\n1972033 - [azure] PV region node affinity is failure-domain.beta.kubernetes.io instead of topology.kubernetes.io\n1972827 - image registry does not remain available during upgrade\n1972962 - Should set the minimum value for the `--max-icsp-size` flag of `oc adm catalog mirror`\n1973447 - ovn-dbchecker peak memory spikes to ~500MiB during cluster-density run\n1975826 - ovn-kubernetes host directed traffic cannot be offloaded as CT zone 64000 is not 
established\n1976301 - [ci] e2e-azure-upi is permafailing\n1976399 - During the upgrade from OpenShift 4.5 to OpenShift 4.6 the election timers for the OVN north and south databases did not change. \n1976674 - CCO didn\u0027t set Upgradeable to False when cco mode is configured to Manual on azure platform\n1976894 - Unidling a StatefulSet does not work as expected\n1977319 - [Hive] Remove stale cruft installed by CVO in earlier releases\n1977414 - Build Config timed out waiting for condition 400: Bad Request\n1977929 - [RFE] Display Network Attachment Definitions from openshift-multus namespace during OCS deployment via UI using Multus\n1978528 - systemd-coredump started and failed intermittently for unknown reasons\n1978581 - machine-config-operator: remove runlevel from mco namespace\n1979562 - Cluster operators: don\u0027t show messages when neither progressing, degraded or unavailable\n1979962 - AWS SDN Network Stress tests have not passed in 4.9 release-openshift-origin-installer-e2e-aws-sdn-network-stress-4.9\n1979966 - OCP builds always fail when run on RHEL7 nodes\n1981396 - Deleting pool inside pool page the pool stays in Ready phase in the heading\n1981549 - Machine-config daemon does not recover from broken Proxy configuration\n1981867 - [sig-cli] oc explain should contain proper fields description for special types [Suite:openshift/conformance/parallel]\n1981941 - Terraform upgrade required in openshift-installer to resolve multiple issues\n1982063 - \u0027Control Plane\u0027 is not translated in Simplified Chinese language in Home-\u003eOverview page\n1982498 - Default registry credential path should be adjusted to use containers/auth.json for oc commands\n1982662 - Workloads - DaemonSets - Add storage: i18n misses\n1982726 - kube-apiserver audit logs show a lot of 404 errors for DELETE \"*/secrets/encryption-config\" on single node clusters\n1983758 - upgrades are failing on disruptive tests\n1983964 - Need Device plugin configuration for the NIC 
\"needVhostNet\" \u0026 \"isRdma\"\n1984592 - global pull secret not working in OCP4.7.4+ for additional private registries\n1985073 - new-in-4.8 ExtremelyHighIndividualControlPlaneCPU fires on some GCP update jobs\n1985486 - Cluster Proxy not used during installation on OSP with Kuryr\n1985724 - VM Details Page missing translations\n1985838 - [OVN] CNO exportNetworkFlows does not clear collectors when deleted\n1985933 - Downstream image registry recommendation\n1985965 - oVirt CSI driver does not report volume stats\n1986216 - [scale] SNO: Slow Pod recovery due to \"timed out waiting for OVS port binding\"\n1986237 - \"MachineNotYetDeleted\" in Pending state , alert not fired\n1986239 - crictl create fails with \"PID namespace requested, but sandbox infra container invalid\"\n1986302 - console continues to fetch prometheus alert and silences for normal user\n1986314 - Current MTV installation for KubeVirt import flow creates unusable Forklift UI\n1986338 - error creating list of resources in Import YAML\n1986502 - yaml multi file dnd duplicates previous dragged files\n1986819 - fix string typos for hot-plug disks\n1987044 - [OCPV48] Shutoff VM is being shown as \"Starting\" in WebUI when using spec.runStrategy Manual/RerunOnFailure\n1987136 - Declare operatorframework.io/arch.* labels for all operators\n1987257 - Go-http-client user-agent being used for oc adm mirror requests\n1987263 - fsSpaceFillingUpWarningThreshold not aligned to Kubernetes Garbage Collection Threshold\n1987445 - MetalLB integration: All gateway routers in the cluster answer ARP requests for LoadBalancer services IP\n1988406 - SSH key dropped when selecting \"Customize virtual machine\" in UI\n1988440 - Network operator changes ovnkube-config too early causing ovnkube-master pods to crashloop during cluster upgrade\n1988483 - Azure drop ICMP need to frag FRAG when using OVN: openshift-apiserver becomes False after env runs some time due to communication between one master to pods on another 
master fails with \"Unable to connect to the server\"\n1988879 - Virtual media based deployment fails on Dell servers due to pending Lifecycle Controller jobs\n1989438 - expected replicas is wrong\n1989502 - Developer Catalog is disappearing after short time\n1989843 - \u0027More\u0027 and \u0027Show Less\u0027 functions are not translated on several page\n1990014 - oc debug \u003cpod-name\u003e does not work for Windows pods\n1990190 - e2e testing failed with basic manifest: reason/ExternalProvisioning waiting for a volume to be created\n1990193 - \u0027more\u0027 and \u0027Show Less\u0027 is not being translated on Home -\u003e Search page\n1990255 - Partial or all of the Nodes/StorageClasses don\u0027t appear back on UI after text is removed from search bar\n1990489 - etcdHighNumberOfFailedGRPCRequests fires only on metal env in CI\n1990506 - Missing udev rules in initramfs for /dev/disk/by-id/scsi-* symlinks\n1990556 - get-resources.sh doesn\u0027t honor the no_proxy settings even with no_proxy var\n1990625 - Ironic agent registers with SLAAC address with privacy-stable\n1990635 - CVO does not recognize the channel change if desired version and channel changed at the same time\n1991067 - github.com can not be resolved inside pods where cluster is running on openstack. 
\n1991573 - Enable typescript strictNullCheck on network-policies files\n1991641 - Baremetal Cluster Operator still Available After Delete Provisioning\n1991770 - The logLevel and operatorLogLevel values do not work with Cloud Credential Operator\n1991819 - Misspelled word \"ocurred\" in oc inspect cmd\n1991942 - Alignment and spacing fixes\n1992414 - Two rootdisks show on storage step if \u0027This is a CD-ROM boot source\u0027 is checked\n1992453 - The configMap failed to save on VM environment tab\n1992466 - The button \u0027Save\u0027 and \u0027Reload\u0027 are not translated on vm environment tab\n1992475 - The button \u0027Open console in New Window\u0027 and \u0027Disconnect\u0027 are not translated on vm console tab\n1992509 - Could not customize boot source due to source PVC not found\n1992541 - all the alert rules\u0027 annotations \"summary\" and \"description\" should comply with the OpenShift alerting guidelines\n1992580 - storageProfile should stay with the same value by check/uncheck the apply button\n1992592 - list-type missing in oauth.config.openshift.io for identityProviders breaking Server Side Apply\n1992777 - [IBMCLOUD] Default \"ibm_iam_authorization_policy\" is not working as expected in all scenarios\n1993364 - cluster destruction fails to remove router in BYON with Kuryr as primary network (even after BZ 1940159 got fixed)\n1993376 - periodic-ci-openshift-release-master-ci-4.6-upgrade-from-stable-4.5-e2e-azure-upgrade is permfailing\n1994094 - Some hardcodes are detected at the code level in OpenShift console components\n1994142 - Missing required cloud config fields for IBM Cloud\n1994733 - MetalLB: IP address is not assigned to service if there is duplicate IP address in two address pools\n1995021 - resolv.conf and corefile sync slows down/stops after keepalived container restart\n1995335 - [SCALE] ovnkube CNI: remove ovs flows check\n1995493 - Add Secret to workload button and Actions button are not aligned on secret details 
page\n1995531 - Create RDO-based Ironic image to be promoted to OKD\n1995545 - Project drop-down amalgamates inside main screen while creating storage system for odf-operator\n1995887 - [OVN]After reboot egress node, lr-policy-list was not correct, some duplicate records or missed internal IPs\n1995924 - CMO should report `Upgradeable: false` when HA workload is incorrectly spread\n1996023 - kubernetes.io/hostname values are larger than filter when create localvolumeset from webconsole\n1996108 - Allow backwards compatibility of shared gateway mode to inject host-based routes into OVN\n1996624 - 100% of the cco-metrics/cco-metrics targets in openshift-cloud-credential-operator namespace are down\n1996630 - Fail to delete the first Authorized SSH Key input box on Advanced page\n1996647 - Provide more useful degraded message in auth operator on DNS errors\n1996736 - Large number of 501 lr-policies in INCI2 env\n1996886 - timedout waiting for flows during pod creation and ovn-controller pegged on worker nodes\n1996916 - Special Resource Operator(SRO) - Fail to deploy simple-kmod on GCP\n1996928 - Enable default operator indexes on ARM\n1997028 - prometheus-operator update removes env var support for thanos-sidecar\n1997059 - Failed to create cluster in AWS us-east-1 region due to a local zone is used\n1997226 - Ingresscontroller reconcilations failing but not shown in operator logs or status of ingresscontroller. 
\n1997245 - \"Subscription already exists in openshift-storage namespace\" error message is seen while installing odf-operator via UI\n1997269 - Have to refresh console to install kube-descheduler\n1997478 - Storage operator is not available after reboot cluster instances\n1997509 - flake: [sig-cli] oc builds new-build [Skipped:Disconnected] [Suite:openshift/conformance/parallel]\n1997967 - storageClass is not reserved from default wizard to customize wizard\n1998035 - openstack IPI CI: custom var-lib-etcd.mount (ramdisk) unit is racing due to incomplete After/Before order\n1998038 - [e2e][automation] add tests for UI for VM disk hot-plug\n1998087 - Fix CephHealthCheck wrapping contents and add data-tests for HealthItem and SecondaryStatus\n1998174 - Create storageclass gp3-csi after install ocp cluster on aws\n1998183 - \"r: Bad Gateway\" info is improper\n1998235 - Firefox warning: Cookie \u201ccsrf-token\u201d will be soon rejected\n1998377 - Filesystem table head is not full displayed in disk tab\n1998378 - Virtual Machine is \u0027Not available\u0027 in Home -\u003e Overview -\u003e Cluster inventory\n1998519 - Add fstype when create localvolumeset instance on web console\n1998951 - Keepalived conf ingress peer on in Dual stack cluster contains both IPv6 and IPv4 addresses\n1999076 - [UI] Page Not Found error when clicking on Storage link provided in Overview page\n1999079 - creating pods before sriovnetworknodepolicy sync up succeed will cause node unschedulable\n1999091 - Console update toast notification can appear multiple times\n1999133 - removing and recreating static pod manifest leaves pod in error state\n1999246 - .indexignore is not ingore when oc command load dc configuration\n1999250 - ArgoCD in GitOps operator can\u0027t manage namespaces\n1999255 - ovnkube-node always crashes out the first time it starts\n1999261 - ovnkube-node log spam (and security token leak?)\n1999309 - While installing odf-operator via UI, web console update pop-up navigates 
to OperatorHub -\u003e Operator Installation page\n1999314 - console-operator is slow to mark Degraded as False once console starts working\n1999425 - kube-apiserver with \"[SHOULD NOT HAPPEN] failed to update managedFields\" err=\"failed to convert new object (machine.openshift.io/v1beta1, Kind=MachineHealthCheck)\n1999556 - \"master\" pool should be updated before the CVO reports available at the new version occurred\n1999578 - AWS EFS CSI tests are constantly failing\n1999603 - Memory Manager allows Guaranteed QoS Pod with hugepages requested is exactly equal to the left over Hugepages\n1999619 - cloudinit is malformatted if a user sets a password during VM creation flow\n1999621 - Empty ssh_authorized_keys entry is added to VM\u0027s cloudinit if created from a customize flow\n1999649 - MetalLB: Only one type of IP address can be assigned to service on dual stack cluster from a address pool that have both IPv4 and IPv6 addresses defined\n1999668 - openshift-install destroy cluster panic\u0027s when given invalid credentials to cloud provider (Azure Stack Hub)\n1999734 - IBM Cloud CIS Instance CRN missing in infrastructure manifest/resource\n1999771 - revert \"force cert rotation every couple days for development\" in 4.10\n1999784 - CVE-2021-3749 nodejs-axios: Regular expression denial of service in trim function\n1999796 - Openshift Console `Helm` tab is not showing helm releases in a namespace when there is high number of deployments in the same namespace. 
\n1999836 - Admin web-console inconsistent status summary of sparse ClusterOperator conditions\n1999903 - Click \"This is a CD-ROM boot source\" ticking \"Use template size PVC\" on pvc upload form\n1999983 - No way to clear upload error from template boot source\n2000081 - [IPI baremetal] The metal3 pod failed to restart when switching from Disabled to Managed provisioning without specifying provisioningInterface parameter\n2000096 - Git URL is not re-validated on edit build-config form reload\n2000216 - Successfully imported ImageStreams are not resolved in DeploymentConfig\n2000236 - Confusing usage message from dynkeepalived CLI\n2000268 - Mark cluster unupgradable if vcenter, esxi versions or HW versions are unsupported\n2000430 - bump cluster-api-provider-ovirt version in installer\n2000450 - 4.10: Enable static PV multi-az test\n2000490 - All critical alerts shipped by CMO should have links to a runbook\n2000521 - Kube-apiserver CO degraded due to failed conditional check (ConfigObservationDegraded)\n2000573 - Incorrect StorageCluster CR created and ODF cluster getting installed with 2 Zone OCP cluster\n2000628 - ibm-flashsystem-storage-storagesystem got created without any warning even when the attempt was cancelled\n2000651 - ImageStreamTag alias results in wrong tag and invalid link in Web Console\n2000754 - IPerf2 tests should be lower\n2000846 - Structure logs in the entire codebase of Local Storage Operator\n2000872 - [tracker] container is not able to list on some directories within the nfs after upgrade to 4.7.24\n2000877 - OCP ignores STOPSIGNAL in Dockerfile and sends SIGTERM\n2000938 - CVO does not respect changes to a Deployment strategy\n2000963 - \u0027Inline-volume (default fs)] volumes should store data\u0027 tests are failing on OKD with updated selinux-policy\n2001008 - [MachineSets] CloneMode defaults to linkedClone, but I don\u0027t have snapshot and should be fullClone\n2001240 - Remove response headers for downloads of binaries from 
OpenShift WebConsole\n2001295 - Remove openshift:kubevirt-machine-controllers decleration from machine-api\n2001317 - OCP Platform Quota Check - Inaccurate MissingQuota error\n2001337 - Details Card in ODF Dashboard mentions OCS\n2001339 - fix text content hotplug\n2001413 - [e2e][automation] add/delete nic and disk to template\n2001441 - Test: oc adm must-gather runs successfully for audit logs - fail due to startup log\n2001442 - Empty termination.log file for the kube-apiserver has too permissive mode\n2001479 - IBM Cloud DNS unable to create/update records\n2001566 - Enable alerts for prometheus operator in UWM\n2001575 - Clicking on the perspective switcher shows a white page with loader\n2001577 - Quick search placeholder is not displayed properly when the search string is removed\n2001578 - [e2e][automation] add tests for vm dashboard tab\n2001605 - PVs remain in Released state for a long time after the claim is deleted\n2001617 - BucketClass Creation is restricted on 1st page but enabled using side navigation options\n2001620 - Cluster becomes degraded if it can\u0027t talk to Manila\n2001760 - While creating \u0027Backing Store\u0027, \u0027Bucket Class\u0027, \u0027Namespace Store\u0027 user is navigated to \u0027Installed Operators\u0027 page after clicking on ODF\n2001761 - Unable to apply cluster operator storage for SNO on GCP platform. 
\n2001765 - Some error message in the log of diskmaker-manager caused confusion\n2001784 - show loading page before final results instead of showing a transient message No log files exist\n2001804 - Reload feature on Environment section in Build Config form does not work properly\n2001810 - cluster admin unable to view BuildConfigs in all namespaces\n2001817 - Failed to load RoleBindings list that will lead to \u2018Role name\u2019 is not able to be selected on Create RoleBinding page as well\n2001823 - OCM controller must update operator status\n2001825 - [SNO]ingress/authentication clusteroperator degraded when enable ccm from start\n2001835 - Could not select image tag version when create app from dev console\n2001855 - Add capacity is disabled for ocs-storagecluster\n2001856 - Repeating event: MissingVersion no image found for operand pod\n2001959 - Side nav list borders don\u0027t extend to edges of container\n2002007 - Layout issue on \"Something went wrong\" page\n2002010 - ovn-kube may never attempt to retry a pod creation\n2002012 - Cannot change volume mode when cloning a VM from a template\n2002027 - Two instances of Dotnet helm chart show as one in topology\n2002075 - opm render does not automatically pulling in the image(s) used in the deployments\n2002121 - [OVN] upgrades failed for IPI OSP16 OVN IPSec cluster\n2002125 - Network policy details page heading should be updated to Network Policy details\n2002133 - [e2e][automation] add support/virtualization and improve deleteResource\n2002134 - [e2e][automation] add test to verify vm details tab\n2002215 - Multipath day1 not working on s390x\n2002238 - Image stream tag is not persisted when switching from yaml to form editor\n2002262 - [vSphere] Incorrect user agent in vCenter sessions list\n2002266 - SinkBinding create form doesn\u0027t allow to use subject name, instead of label selector\n2002276 - OLM fails to upgrade operators immediately\n2002300 - Altering the Schedule Profile configurations 
doesn\u0027t affect the placement of the pods\n2002354 - Missing DU configuration \"Done\" status reporting during ZTP flow\n2002362 - Dynamic Plugin - ConsoleRemotePlugin for webpack doesn\u0027t use commonjs\n2002368 - samples should not go degraded when image allowedRegistries blocks imagestream creation\n2002372 - Pod creation failed due to mismatched pod IP address in CNI and OVN\n2002397 - Resources search is inconsistent\n2002434 - CRI-O leaks some children PIDs\n2002443 - Getting undefined error on create local volume set page\n2002461 - DNS operator performs spurious updates in response to API\u0027s defaulting of service\u0027s internalTrafficPolicy\n2002504 - When the openshift-cluster-storage-operator is degraded because of \"VSphereProblemDetectorController_SyncError\", the insights operator is not sending the logs from all pods. \n2002559 - User preference for topology list view does not follow when a new namespace is created\n2002567 - Upstream SR-IOV worker doc has broken links\n2002588 - Change text to be sentence case to align with PF\n2002657 - ovn-kube egress IP monitoring is using a random port over the node network\n2002713 - CNO: OVN logs should have millisecond resolution\n2002748 - [ICNI2] \u0027ErrorAddingLogicalPort\u0027 failed to handle external GW check: timeout waiting for namespace event\n2002759 - Custom profile should not allow not including at least one required HTTP2 ciphersuite\n2002763 - Two storage systems getting created with external mode RHCS\n2002808 - KCM does not use web identity credentials\n2002834 - Cluster-version operator does not remove unrecognized volume mounts\n2002896 - Incorrect result return when user filter data by name on search page\n2002950 - Why spec.containers.command is not created with \"oc create deploymentconfig \u003cdc-name\u003e --image=\u003cimage\u003e -- \u003ccommand\u003e\"\n2003096 - [e2e][automation] check bootsource URL is displaying on review step\n2003113 - OpenShift Baremetal IPI 
installer uses first three defined nodes under hosts in install-config for master nodes instead of filtering the hosts with the master role\n2003120 - CI: Uncaught error with ResizeObserver on operand details page\n2003145 - Duplicate operand tab titles causes \"two children with the same key\" warning\n2003164 - OLM, fatal error: concurrent map writes\n2003178 - [FLAKE][knative] The UI doesn\u0027t show updated traffic distribution after accepting the form\n2003193 - Kubelet/crio leaks netns and veth ports in the host\n2003195 - OVN CNI should ensure host veths are removed\n2003204 - Jenkins all new container images (openshift4/ose-jenkins) not supporting \u0027-e JENKINS_PASSWORD=password\u0027 ENV which was working for old container images\n2003206 - Namespace stuck terminating: Failed to delete all resource types, 1 remaining: unexpected items still remain in namespace\n2003239 - \"[sig-builds][Feature:Builds][Slow] can use private repositories as build input\" tests fail outside of CI\n2003244 - Revert libovsdb client code\n2003251 - Patternfly components with list element has list item bullet when they should not. 
\n2003252 - \"[sig-builds][Feature:Builds][Slow] starting a build using CLI start-build test context override environment BUILD_LOGLEVEL in buildconfig\" tests do not work as expected outside of CI\n2003269 - Rejected pods should be filtered from admission regression\n2003357 - QE- Removing the epic tags for gherkin tags related to 4.9 Release\n2003426 - [e2e][automation] add test for vm details bootorder\n2003496 - [e2e][automation] add test for vm resources requirment settings\n2003641 - All metal ipi jobs are failing in 4.10\n2003651 - ODF4.9+LSO4.8 installation via UI, StorageCluster move to error state\n2003655 - [IPI ON-PREM] Keepalived chk_default_ingress track script failed even though default router pod runs on node\n2003683 - Samples operator is panicking in CI\n2003711 - [UI] Empty file ceph-external-cluster-details-exporter.py downloaded from external cluster \"Connection Details\" page\n2003715 - Error on creating local volume set after selection of the volume mode\n2003743 - Remove workaround keeping /boot RW for kdump support\n2003775 - etcd pod on CrashLoopBackOff after master replacement procedure\n2003788 - CSR reconciler report error constantly when BYOH CSR approved by other Approver\n2003792 - Monitoring metrics query graph flyover panel is useless\n2003808 - Add Sprint 207 translations\n2003845 - Project admin cannot access image vulnerabilities view\n2003859 - sdn emits events with garbage messages\n2003896 - (release-4.10) ApiRequestCounts conditional gatherer\n2004009 - 4.10: Fix multi-az zone scheduling e2e for 5 control plane replicas\n2004051 - CMO can report as being Degraded while node-exporter is deployed on all nodes\n2004059 - [e2e][automation] fix current tests for downstream\n2004060 - Trying to use basic spring boot sample causes crash on Firefox\n2004101 - [UI] When creating storageSystem deployment type dropdown under advanced setting doesn\u0027t close after selection\n2004127 - [flake] openshift-controller-manager event 
reason/SuccessfulDelete occurs too frequently\n2004203 - build config\u0027s created prior to 4.8 with image change triggers can result in trigger storm in OCM/openshift-apiserver\n2004313 - [RHOCP 4.9.0-rc.0] Failing to deploy Azure cluster from the macOS installer - ignition_bootstrap.ign: no such file or directory\n2004449 - Boot option recovery menu prevents image boot\n2004451 - The backup filename displayed in the RecentBackup message is incorrect\n2004459 - QE - Modified the AddFlow gherkin scripts and automation scripts\n2004508 - TuneD issues with the recent ConfigParser changes. \n2004510 - openshift-gitops operator hooks gets unauthorized (401) errors during jobs executions\n2004542 - [osp][octavia lb] cannot create LoadBalancer type svcs\n2004578 - Monitoring and node labels missing for an external storage platform\n2004585 - prometheus-k8s-0 cpu usage keeps increasing for the first 3 days\n2004596 - [4.10] Bootimage bump tracker\n2004597 - Duplicate ramdisk log containers running\n2004600 - Duplicate ramdisk log containers running\n2004609 - output of \"crictl inspectp\" is not complete\n2004625 - BMC credentials could be logged if they change\n2004632 - When LE takes a large amount of time, multiple whereabouts are seen\n2004721 - ptp/worker custom threshold doesn\u0027t change ptp events threshold\n2004736 - [knative] Create button on new Broker form is inactive despite form being filled\n2004796 - [e2e][automation] add test for vm scheduling policy\n2004814 - (release-4.10) OCM controller - change type of the etc-pki-entitlement secret to opaque\n2004870 - [External Mode] Insufficient spacing along y-axis in RGW Latency Performance Card\n2004901 - [e2e][automation] improve kubevirt devconsole tests\n2004962 - Console frontend job consuming too much CPU in CI\n2005014 - state of ODF StorageSystem is misreported during installation or uninstallation\n2005052 - Adding a MachineSet selector matchLabel causes orphaned Machines\n2005179 - pods status 
filter is not taking effect\n2005182 - sync list of deprecated apis about to be removed\n2005282 - Storage cluster name is given as title in StorageSystem details page\n2005355 - setuptools 58 makes Kuryr CI fail\n2005407 - ClusterNotUpgradeable Alert should be set to Severity Info\n2005415 - PTP operator with sidecar api configured throws bind: address already in use\n2005507 - SNO spoke cluster failing to reach coreos.live.rootfs_url is missing url in console\n2005554 - The switch status of the button \"Show default project\" is not revealed correctly in code\n2005581 - 4.8.12 to 4.9 upgrade hung due to cluster-version-operator pod CrashLoopBackOff: error creating clients: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable\n2005761 - QE - Implementing crw-basic feature file\n2005783 - Fix accessibility issues in the \"Internal\" and \"Internal - Attached Mode\" Installation Flow\n2005811 - vSphere Problem Detector operator - ServerFaultCode: InvalidProperty\n2005854 - SSH NodePort service is created for each VM\n2005901 - KS, KCM and KA going Degraded during master nodes upgrade\n2005902 - Current UI flow for MCG only deployment is confusing and doesn\u0027t reciprocate any message to the end-user\n2005926 - PTP operator NodeOutOfPTPSync rule is using max offset from the master instead of openshift_ptp_clock_state metrics\n2005971 - Change telemeter to report the Application Services product usage metrics\n2005997 - SELinux domain container_logreader_t does not have a policy to follow sym links for log files\n2006025 - Description to use an existing StorageClass while creating StorageSystem needs to be re-phrased\n2006060 - ocs-storagecluster-storagesystem details are missing on UI for MCG Only and MCG only in LSO mode deployment types\n2006101 - Power off fails for drivers that don\u0027t support Soft power off\n2006243 - Metal IPI upgrade jobs are running out of disk space\n2006291 - 
bootstrapProvisioningIP set incorrectly when provisioningNetworkCIDR doesn\u0027t use the 0th address\n2006308 - Backing Store YAML tab on click displays a blank screen on UI\n2006325 - Multicast is broken across nodes\n2006329 - Console only allows Web Terminal Operator to be installed in OpenShift Operators\n2006364 - IBM Cloud: Set resourceGroupId for resourceGroups, not simply resource\n2006561 - [sig-instrumentation] Prometheus when installed on the cluster shouldn\u0027t have failing rules evaluation [Skipped:Disconnected] [Suite:openshift/conformance/parallel]\n2006690 - OS boot failure \"x64 Exception Type 06 - Invalid Opcode Exception\"\n2006714 - add retry for etcd errors in kube-apiserver\n2006767 - KubePodCrashLooping may not fire\n2006803 - Set CoreDNS cache entries for forwarded zones\n2006861 - Add Sprint 207 part 2 translations\n2006945 - race condition can cause crashlooping bootstrap kube-apiserver in cluster-bootstrap\n2006947 - e2e-aws-proxy for 4.10 is permafailing with samples operator errors\n2006975 - clusteroperator/etcd status condition should not change reasons frequently due to EtcdEndpointsDegraded\n2007085 - Intermittent failure mounting /run/media/iso when booting live ISO from USB stick\n2007136 - Creation of BackingStore, BucketClass, NamespaceStore fails\n2007271 - CI Integration for Knative test cases\n2007289 - kubevirt tests are failing in CI\n2007322 - Devfile/Dockerfile import does not work for unsupported git host\n2007328 - Updated patternfly to v4.125.3 and pf.quickstarts to v1.2.3. 
\n2007379 - Events are not generated for master offset for ordinary clock\n2007443 - [ICNI 2.0] Loadbalancer pods do not establish BFD sessions with all workers that host pods for the routed namespace\n2007455 - cluster-etcd-operator: render command should fail if machineCidr contains reserved address\n2007495 - Large label value for the metric kubelet_started_pods_errors_total with label message when there is a error\n2007522 - No new local-storage-operator-metadata-container is build for 4.10\n2007551 - No new ose-aws-efs-csi-driver-operator-bundle-container is build for 4.10\n2007580 - Azure cilium installs are failing e2e tests\n2007581 - Too many haproxy processes in default-router pod causing high load average after upgrade from v4.8.3 to v4.8.10\n2007677 - Regression: core container io performance metrics are missing for pod, qos, and system slices on nodes\n2007692 - 4.9 \"old-rhcos\" jobs are permafailing with storage test failures\n2007710 - ci/prow/e2e-agnostic-cmd job is failing on prow\n2007757 - must-gather extracts imagestreams in the \"openshift\" namespace, but not Templates\n2007802 - AWS machine actuator get stuck if machine is completely missing\n2008096 - TestAWSFinalizerDeleteS3Bucket sometimes fails to teardown operator\n2008119 - The serviceAccountIssuer field on Authentication CR is reseted to \u201c\u201d when installation process\n2008151 - Topology breaks on clicking in empty state\n2008185 - Console operator go.mod should use go 1.16.version\n2008201 - openstack-az job is failing on haproxy idle test\n2008207 - vsphere CSI driver doesn\u0027t set resource limits\n2008223 - gather_audit_logs: fix oc command line to get the current audit profile\n2008235 - The Save button in the Edit DC form remains disabled\n2008256 - Update Internationalization README with scope info\n2008321 - Add correct documentation link for MON_DISK_LOW\n2008462 - Disable PodSecurity feature gate for 4.10\n2008490 - Backing store details page does not contain all 
the kebab actions. \n2008521 - gcp-hostname service should correct invalid search entries in resolv.conf\n2008532 - CreateContainerConfigError:: failed to prepare subPath for volumeMount\n2008539 - Registry doesn\u0027t fall back to secondary ImageContentSourcePolicy Mirror\n2008540 - HighlyAvailableWorkloadIncorrectlySpread always fires on upgrade on cluster with two workers\n2008599 - Azure Stack UPI does not have Internal Load Balancer\n2008612 - Plugin asset proxy does not pass through browser cache headers\n2008712 - VPA webhook timeout prevents all pods from starting\n2008733 - kube-scheduler: exposed /debug/pprof port\n2008911 - Prometheus repeatedly scaling prometheus-operator replica set\n2008926 - [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources [Serial] [Suite:openshift/conformance/serial]\n2008987 - OpenShift SDN Hosted Egress IP\u0027s are not being scheduled to nodes after upgrade to 4.8.12\n2009055 - Instances of OCS to be replaced with ODF on UI\n2009078 - NetworkPodsCrashLooping alerts in upgrade CI jobs\n2009083 - opm blocks pruning of existing bundles during add\n2009111 - [IPI-on-GCP] \u0027Install a cluster with nested virtualization enabled\u0027 failed due to unable to launch compute instances\n2009131 - [e2e][automation] add more test about vmi\n2009148 - [e2e][automation] test vm nic presets and options\n2009233 - ACM policy object generated by PolicyGen conflicting with OLM Operator\n2009253 - [BM] [IPI] [DualStack] apiVIP and ingressVIP should be of the same primary IP family\n2009298 - Service created for VM SSH access is not owned by the VM and thus is not deleted if the VM is deleted\n2009384 - UI changes to support BindableKinds CRD changes\n2009404 - ovnkube-node pod enters CrashLoopBackOff after OVN_IMAGE is swapped\n2009424 - Deployment upgrade is failing availability check\n2009454 - Change web terminal subscription permissions from get to list\n2009465 - container-selinux 
should come from rhel8-appstream\n2009514 - Bump OVS to 2.16-15\n2009555 - Supermicro X11 system not booting from vMedia with AI\n2009623 - Console: Observe \u003e Metrics page: Table pagination menu shows bullet points\n2009664 - Git Import: Edit of knative service doesn\u0027t work as expected for git import flow\n2009699 - Failure to validate flavor RAM\n2009754 - Footer is not sticky anymore in import forms\n2009785 - CRI-O\u0027s version file should be pinned by MCO\n2009791 - Installer: ibmcloud ignores install-config values\n2009823 - [sig-arch] events should not repeat pathologically - reason/VSphereOlderVersionDetected Marking cluster un-upgradeable because one or more VMs are on hardware version vmx-13\n2009840 - cannot build extensions on aarch64 because of unavailability of rhel-8-advanced-virt repo\n2009859 - Large number of sessions created by vmware-vsphere-csi-driver-operator during e2e tests\n2009873 - Stale Logical Router Policies and Annotations for a given node\n2009879 - There should be test-suite coverage to ensure admin-acks work as expected\n2009888 - SRO package name collision between official and community version\n2010073 - uninstalling and then reinstalling sriov-network-operator is not working\n2010174 - 2 PVs get created unexpectedly with different paths that actually refer to the same device on the node. 
\n2010181 - Environment variables not getting reset on reload on deployment edit form\n2010310 - [sig-instrumentation][Late] OpenShift alerting rules should have description and summary annotations [Skipped:Disconnected] [Suite:openshift/conformance/parallel]\n2010341 - OpenShift Alerting Rules Style-Guide Compliance\n2010342 - Local console builds can have out of memory errors\n2010345 - OpenShift Alerting Rules Style-Guide Compliance\n2010348 - Reverts PIE build mode for K8S components\n2010352 - OpenShift Alerting Rules Style-Guide Compliance\n2010354 - OpenShift Alerting Rules Style-Guide Compliance\n2010359 - OpenShift Alerting Rules Style-Guide Compliance\n2010368 - OpenShift Alerting Rules Style-Guide Compliance\n2010376 - OpenShift Alerting Rules Style-Guide Compliance\n2010662 - Cluster is unhealthy after image-registry-operator tests\n2010663 - OpenShift Alerting Rules Style-Guide Compliance (ovn-kubernetes subcomponent)\n2010665 - Bootkube tries to use oc after cluster bootstrap is done and there is no API\n2010698 - [BM] [IPI] [Dual Stack] Installer must ensure ipv6 short forms too if clusterprovisioning IP is specified as ipv6 address\n2010719 - etcdHighNumberOfFailedGRPCRequests runbook is missing\n2010864 - Failure building EFS operator\n2010910 - ptp worker events unable to identify interface for multiple interfaces\n2010911 - RenderOperatingSystem() returns wrong OS version on OCP 4.7.24\n2010921 - Azure Stack Hub does not handle additionalTrustBundle\n2010931 - SRO CSV uses non default category \"Drivers and plugins\"\n2010946 - concurrent CRD from ovirt-csi-driver-operator gets reconciled by CVO after deployment, changing CR as well. 
\n2011038 - optional operator conditions are confusing\n2011063 - CVE-2021-39226 grafana: Snapshot authentication bypass\n2011171 - diskmaker-manager constantly redeployed by LSO when creating LV\u0027s\n2011293 - Build pod are not pulling images if we are not explicitly giving the registry name with the image\n2011368 - Tooltip in pipeline visualization shows misleading data\n2011386 - [sig-arch] Check if alerts are firing during or after upgrade success --- alert KubePodNotReady fired for 60 seconds with labels\n2011411 - Managed Service\u0027s Cluster overview page contains link to missing Storage dashboards\n2011443 - Cypress tests assuming Admin Perspective could fail on shared/reference cluster\n2011513 - Kubelet rejects pods that use resources that should be freed by completed pods\n2011668 - Machine stuck in deleting phase in VMware \"reconciler failed to Delete machine\"\n2011693 - (release-4.10) \"insightsclient_request_recvreport_total\" metric is always incremented\n2011698 - After upgrading cluster to 4.8 the kube-state-metrics service doesn\u0027t export namespace labels anymore\n2011733 - Repository README points to broken documentarion link\n2011753 - Ironic resumes clean before raid configuration job is actually completed\n2011809 - The nodes page in the openshift console doesn\u0027t work. 
You just get a blank page
2011822 - Obfuscation doesn't work at clusters with OVN
2011882 - SRO helm charts not synced with templates
2011893 - Validation: BMC driver ipmi is not supported for secure UEFI boot
2011896 - [4.10] ClusterVersion Upgradeable=False MultipleReasons should include all messages
2011903 - vsphere-problem-detector: session leak
2011927 - OLM should allow users to specify a proxy for GRPC connections
2011956 - [tracker] Kubelet rejects pods that use resources that should be freed by completed pods
2011960 - [tracker] Storage operator is not available after reboot cluster instances
2011971 - ICNI2 pods are stuck in ContainerCreating state
2011972 - Ingress operator not creating wildcard route for hypershift clusters
2011977 - SRO bundle references non-existent image
2012069 - Refactoring Status controller
2012177 - [OCP 4.9 + OCS 4.8.3] Overview tab is missing under Storage after successful deployment on UI
2012228 - ibmcloud: credentialsrequests invalid for machine-api-operator: resource-group
2012233 - [IBMCLOUD] IPI: "Exceeded limit of remote rules per security group (the limit is 5 remote rules per security group)"
2012235 - [IBMCLOUD] IPI: IBM cloud provider requires ResourceGroupName in cloudproviderconfig
2012317 - Dynamic Plugins: ListPageCreateDropdown items cut off
2012407 - [e2e][automation] improve vm tab console tests
2012426 - ThanosSidecarBucketOperationsFailed/ThanosSidecarUnhealthy alerts don't have namespace label
2012562 - migration condition is not detected in list view
2012770 - when using expression metric openshift_apps_deploymentconfigs_last_failed_rollout_time namespace label is re-written
2012780 - The port 50936 used by haproxy is occupied by kube-apiserver
2012838 - Setting the default maximum container root partition size for Overlay with CRI-O stop working
2012902 - Neutron Ports assigned to Completed Pods are not reused Edit
2012915 - kube_persistentvolumeclaim_labels and kube_persistentvolume_labels are missing in OCP 4.8 monitoring stack
2012971 - Disable operands deletes
2013034 - Cannot install to openshift-nmstate namespace
2013127 - OperatorHub links could not be opened in a new tabs (sharing and open a deep link works fine)
2013199 - post reboot of node SRIOV policy taking huge time
2013203 - UI breaks when trying to create block pool before storage cluster/system creation
2013222 - Full breakage for nightly payload promotion
2013273 - Nil pointer exception when phc2sys options are missing
2013321 - TuneD: high CPU utilization of the TuneD daemon.
2013416 - Multiple assets emit different content to the same filename
2013431 - Application selector dropdown has incorrect font-size and positioning
2013528 - mapi_current_pending_csr is always set to 1 on OpenShift Container Platform 4.8
2013545 - Service binding created outside topology is not visible
2013599 - Scorecard support storage is not included in ocp4.9
2013632 - Correction/Changes in Quick Start Guides for ODF 4.9 (Install ODF guide)
2013646 - fsync controller will show false positive if gaps in metrics are observed.
2013710 - ZTP Operator subscriptions for 4.9 release branch should point to 4.9 by default
2013751 - Service details page is showing wrong in-cluster hostname
2013787 - There are two tittle 'Network Attachment Definition Details' on NAD details page
2013871 - Resource table headings are not aligned with their column data
2013895 - Cannot enable accelerated network via MachineSets on Azure
2013920 - "--collector.filesystem.ignored-mount-points is DEPRECATED and will be removed in 2.0.0, use --collector.filesystem.mount-points-exclude"
2013930 - Create Buttons enabled for Bucket Class, Backingstore and Namespace Store in the absence of Storagesystem(or MCG)
2013969 - oVIrt CSI driver fails on creating PVCs on hosted engine storage domain
2013990 - Observe dashboard crashs on reload when perspective has changed (in another tab)
2013996 - Project detail page: Action "Delete Project" does nothing for the default project
2014071 - Payload imagestream new tags not properly updated during cluster upgrade
2014153 - SRIOV exclusive pooling
2014202 - [OCP-4.8.10] OVN-Kubernetes: service IP is not responding when egressIP set to the namespace
2014238 - AWS console test is failing on importing duplicate YAML definitions
2014245 - Several aria-labels, external links, and labels aren't internationalized
2014248 - Several files aren't internationalized
2014352 - Could not filter out machine by using node name on machines page
2014464 - Unexpected spacing/padding below navigation groups in developer perspective
2014471 - Helm Release notes tab is not automatically open after installing a chart for other languages
2014486 - Integration Tests: OLM single namespace operator tests failing
2014488 - Custom operator cannot change orders of condition tables
2014497 - Regex slows down different forms and creates too much recursion errors in the log
2014538 - Kuryr controller crash looping on self._get_vip_port(loadbalancer).id 'NoneType' object has no attribute 'id'
2014614 - Metrics scraping requests should be assigned to exempt priority level
2014710 - TestIngressStatus test is broken on Azure
2014954 - The prometheus-k8s-{0,1} pods are CrashLoopBackoff repeatedly
2014995 - oc adm must-gather cannot gather audit logs with 'None' audit profile
2015115 - [RFE] PCI passthrough
2015133 - [IBMCLOUD] ServiceID API key credentials seems to be insufficient for ccoctl '--resource-group-name' parameter
2015154 - Support ports defined networks and primarySubnet
2015274 - Yarn dev fails after updates to dynamic plugin JSON schema logic
2015337 - 4.9.0 GA MetalLB operator image references need to be adjusted to match production
2015386 - Possibility to add labels to the built-in OCP alerts
2015395 - Table head on Affinity Rules modal is not fully expanded
2015416 - CI implementation for Topology plugin
2015418 - Project Filesystem query returns No datapoints found
2015420 - No vm resource in project view's inventory
2015422 - No conflict checking on snapshot name
2015472 - Form and YAML view switch button should have distinguishable status
2015481 - [4.10] sriov-network-operator daemon pods are failing to start
2015493 - Cloud Controller Manager Operator does not respect 'additionalTrustBundle' setting
2015496 - Storage - PersistentVolumes : Claim colum value 'No Claim' in English
2015498 - [UI] Add capacity when not applicable (for MCG only deployment and External mode cluster) fails to pass any info. to user and tries to just load a blank screen on 'Add Capacity' button click
2015506 - Home - Search - Resources - APIRequestCount : hard to select an item from ellipsis menu
2015515 - Kubelet checks all providers even if one is configured: NoCredentialProviders: no valid providers in chain.
\n2016179 - Add Sprint 208 translations\n2016228 - Collect Profiles pprof secret is hardcoded to openshift-operator-lifecycle-manager\n2016235 - should update to 7.5.11 for grafana resources version label\n2016296 - Openshift virtualization : Create Windows Server 2019 VM using template : Fails\n2016334 - shiftstack: SRIOV nic reported as not supported\n2016352 - Some pods start before CA resources are present\n2016367 - Empty task box is getting created for a pipeline without finally task\n2016435 - Duplicate AlertmanagerClusterFailedToSendAlerts alerts\n2016438 - Feature flag gating is missing in few extensions contributed via knative plugin\n2016442 - OCPonRHV: pvc should be in Bound state and without error when choosing default sc\n2016446 - [OVN-Kubernetes] Egress Networkpolicy is failing Intermittently for statefulsets\n2016453 - Complete i18n for GaugeChart defaults\n2016479 - iface-id-ver is not getting updated for existing lsp\n2016925 - Dashboards with All filter, change to a specific value and change back to All, data will disappear\n2016951 - dynamic actions list is not disabling \"open console\" for stopped vms\n2016955 - m5.large instance type for bootstrap node is hardcoded causing deployments to fail if instance type is not available\n2016988 - NTO does not set io_timeout and max_retries for AWS Nitro instances\n2017016 - [REF] Virtualization menu\n2017036 - [sig-network-edge][Feature:Idling] Unidling should handle many TCP connections fails in periodic-ci-openshift-release-master-ci-4.9-e2e-openstack-ovn\n2017050 - Dynamic Plugins: Shared modules loaded multiple times, breaking use of PatternFly\n2017130 - t is not a function error navigating to details page\n2017141 - Project dropdown has a dynamic inline width added which can cause min-width issue\n2017244 - ovirt csi operator static files creation is in the wrong order\n2017276 - [4.10] Volume mounts not created with the correct security context\n2017327 - When run opm index prune failed with 
error removing operator package cic-operator FOREIGN KEY constraint failed. \n2017427 - NTO does not restart TuneD daemon when profile application is taking too long\n2017535 - Broken Argo CD link image on GitOps Details Page\n2017547 - Siteconfig application sync fails with The AgentClusterInstall is invalid: spec.provisionRequirements.controlPlaneAgents: Required value when updating images references\n2017564 - On-prem prepender dispatcher script overwrites DNS search settings\n2017565 - CCMO does not handle additionalTrustBundle on Azure Stack\n2017566 - MetalLB: Web Console -Create Address pool form shows address pool name twice\n2017606 - [e2e][automation] add test to verify send key for VNC console\n2017650 - [OVN]EgressFirewall cannot be applied correctly if cluster has windows nodes\n2017656 - VM IP address is \"undefined\" under VM details -\u003e ssh field\n2017663 - SSH password authentication is disabled when public key is not supplied\n2017680 - [gcp] Couldn\u2019t enable support for instances with GPUs on GCP\n2017732 - [KMS] Prevent creation of encryption enabled storageclass without KMS connection set\n2017752 - (release-4.10) obfuscate identity provider attributes in collected authentication.operator.openshift.io resource\n2017756 - overlaySize setting on containerruntimeconfig is ignored due to cri-o defaults\n2017761 - [e2e][automation] dummy bug for 4.9 test dependency\n2017872 - Add Sprint 209 translations\n2017874 - The installer is incorrectly checking the quota for X instances instead of G and VT instances\n2017879 - Add Chinese translation for \"alternate\"\n2017882 - multus: add handling of pod UIDs passed from runtime\n2017909 - [ICNI 2.0] ovnkube-masters stop processing add/del events for pods\n2018042 - HorizontalPodAutoscaler CPU averageValue did not show up in HPA metrics GUI\n2018093 - Managed cluster should ensure control plane pods do not run in best-effort QoS\n2018094 - the tooltip length is limited\n2018152 - CNI pod is not 
restarted when It cannot start servers due to ports being used\n2018208 - e2e-metal-ipi-ovn-ipv6 are failing 75% of the time\n2018234 - user settings are saved in local storage instead of on cluster\n2018264 - Delete Export button doesn\u0027t work in topology sidebar (general issue with unknown CSV?)\n2018272 - Deployment managed by link and topology sidebar links to invalid resource page (at least for Exports)\n2018275 - Topology graph doesn\u0027t show context menu for Export CSV\n2018279 - Edit and Delete confirmation modals for managed resource should close when the managed resource is clicked\n2018380 - Migrate docs links to access.redhat.com\n2018413 - Error: context deadline exceeded, OCP 4.8.9\n2018428 - PVC is deleted along with VM even with \"Delete Disks\" unchecked\n2018445 - [e2e][automation] enhance tests for downstream\n2018446 - [e2e][automation] move tests to different level\n2018449 - [e2e][automation] add test about create/delete network attachment definition\n2018490 - [4.10] Image provisioning fails with file name too long\n2018495 - Fix typo in internationalization README\n2018542 - Kernel upgrade does not reconcile DaemonSet\n2018880 - Get \u0027No datapoints found.\u0027 when query metrics about alert rule KubeCPUQuotaOvercommit and KubeMemoryQuotaOvercommit\n2018884 - QE - Adapt crw-basic feature file to OCP 4.9/4.10 changes\n2018935 - go.sum not updated, that ART extracts version string from, WAS: Missing backport from 4.9 for Kube bump PR#950\n2018965 - e2e-metal-ipi-upgrade is permafailing in 4.10\n2018985 - The rootdisk size is 15Gi of windows VM in customize wizard\n2019001 - AWS: Operator degraded (CredentialsFailing): 1 of 6 credentials requests are failing to sync. 
\n2019096 - Update SRO leader election timeout to support SNO\n2019129 - SRO in operator hub points to wrong repo for README\n2019181 - Performance profile does not apply\n2019198 - ptp offset metrics are not named according to the log output\n2019219 - [IBMCLOUD]: cloud-provider-ibm missing IAM permissions in CCCMO CredentialRequest\n2019284 - Stop action should not in the action list while VMI is not running\n2019346 - zombie processes accumulation and Argument list too long\n2019360 - [RFE] Virtualization Overview page\n2019452 - Logger object in LSO appends to existing logger recursively\n2019591 - Operator install modal body that scrolls has incorrect padding causing shadow position to be incorrect\n2019634 - Pause and migration is enabled in action list for a user who has view only permission\n2019636 - Actions in VM tabs should be disabled when user has view only permission\n2019639 - \"Take snapshot\" should be disabled while VM image is still been importing\n2019645 - Create button is not removed on \"Virtual Machines\" page for view only user\n2019646 - Permission error should pop-up immediately while clicking \"Create VM\" button on template page for view only user\n2019647 - \"Remove favorite\" and \"Create new Template\" should be disabled in template action list for view only user\n2019717 - cant delete VM with un-owned pvc attached\n2019722 - The shared-resource-csi-driver-node pod runs as \u201cBestEffort\u201d qosClass\n2019739 - The shared-resource-csi-driver-node uses imagePullPolicy as \"Always\"\n2019744 - [RFE] Suggest users to download newest RHEL 8 version\n2019809 - [OVN][Upgrade] After upgrade to 4.7.34 ovnkube-master pods are in CrashLoopBackOff/ContainerCreating and other multiple issues at OVS/OVN level\n2019827 - Display issue with top-level menu items running demo plugin\n2019832 - 4.10 Nightlies blocked: Failed to upgrade authentication, operator was degraded\n2019886 - Kuryr unable to finish ports recovery upon controller 
restart\n2019948 - [RFE] Restructring Virtualization links\n2019972 - The Nodes section doesn\u0027t display the csr of the nodes that are trying to join the cluster\n2019977 - Installer doesn\u0027t validate region causing binary to hang with a 60 minute timeout\n2019986 - Dynamic demo plugin fails to build\n2019992 - instance:node_memory_utilisation:ratio metric is incorrect\n2020001 - Update dockerfile for demo dynamic plugin to reflect dir change\n2020003 - MCD does not regard \"dangling\" symlinks as a files, attempts to write through them on next backup, resulting in \"not writing through dangling symlink\" error and degradation. \n2020107 - cluster-version-operator: remove runlevel from CVO namespace\n2020153 - Creation of Windows high performance VM fails\n2020216 - installer: Azure storage container blob where is stored bootstrap.ign file shouldn\u0027t be public\n2020250 - Replacing deprecated ioutil\n2020257 - Dynamic plugin with multiple webpack compilation passes may fail to build\n2020275 - ClusterOperators link in console returns blank page during upgrades\n2020377 - permissions error while using tcpdump option with must-gather\n2020489 - coredns_dns metrics don\u0027t include the custom zone metrics data due to CoreDNS prometheus plugin is not defined\n2020498 - \"Show PromQL\" button is disabled\n2020625 - [AUTH-52] User fails to login from web console with keycloak OpenID IDP after enable group membership sync feature\n2020638 - [4.7] CI conformance test failures related to CustomResourcePublishOpenAPI\n2020664 - DOWN subports are not cleaned up\n2020904 - When trying to create a connection from the Developer view between VMs, it fails\n2021016 - \u0027Prometheus Stats\u0027 of dashboard \u0027Prometheus Overview\u0027 miss data on console compared with Grafana\n2021017 - 404 page not found error on knative eventing page\n2021031 - QE - Fix the topology CI scripts\n2021048 - [RFE] Added MAC Spoof check\n2021053 - Metallb operator presented as 
community operator\n2021067 - Extensive number of requests from storage version operator in cluster\n2021081 - Missing PolicyGenTemplate for configuring Local Storage Operator LocalVolumes\n2021135 - [azure-file-csi-driver] \"make unit-test\" returns non-zero code, but tests pass\n2021141 - Cluster should allow a fast rollout of kube-apiserver is failing on single node\n2021151 - Sometimes the DU node does not get the performance profile configuration applied and MachineConfigPool stays stuck in Updating\n2021152 - imagePullPolicy is \"Always\" for ptp operator images\n2021191 - Project admins should be able to list available network attachment defintions\n2021205 - Invalid URL in git import form causes validation to not happen on URL change\n2021322 - cluster-api-provider-azure should populate purchase plan information\n2021337 - Dynamic Plugins: ResourceLink doesn\u0027t render when passed a groupVersionKind\n2021364 - Installer requires invalid AWS permission s3:GetBucketReplication\n2021400 - Bump documentationBaseURL to 4.10\n2021405 - [e2e][automation] VM creation wizard Cloud Init editor\n2021433 - \"[sig-builds][Feature:Builds][pullsearch] docker build where the registry is not specified\" test fail permanently on disconnected\n2021466 - [e2e][automation] Windows guest tool mount\n2021544 - OCP 4.6.44 - Ingress VIP assigned as secondary IP in ovs-if-br-ex and added to resolv.conf as nameserver\n2021551 - Build is not recognizing the USER group from an s2i image\n2021607 - Unable to run openshift-install with a vcenter hostname that begins with a numeric character\n2021629 - api request counts for current hour are incorrect\n2021632 - [UI] Clicking on odf-operator breadcrumb from StorageCluster details page displays empty page\n2021693 - Modals assigned modal-lg class are no longer the correct width\n2021724 - Observe \u003e Dashboards: Graph lines are not visible when obscured by other lines\n2021731 - CCO occasionally down, reporting 
networksecurity.googleapis.com API as disabled\n2021936 - Kubelet version in RPMs should be using Dockerfile label instead of git tags\n2022050 - [BM][IPI] Failed during bootstrap - unable to read client-key /var/lib/kubelet/pki/kubelet-client-current.pem\n2022053 - dpdk application with vhost-net is not able to start\n2022114 - Console logging every proxy request\n2022144 - 1 of 3 ovnkube-master pods stuck in clbo after ipi bm deployment - dualstack (Intermittent)\n2022251 - wait interval in case of a failed upload due to 403 is unnecessarily long\n2022399 - MON_DISK_LOW troubleshooting guide link when clicked, gives 404 error . \n2022447 - ServiceAccount in manifests conflicts with OLM\n2022502 - Patternfly tables with a checkbox column are not displaying correctly because of conflicting css rules. \n2022509 - getOverrideForManifest does not check manifest.GVK.Group\n2022536 - WebScale: duplicate ecmp next hop error caused by multiple of the same gateway IPs in ovnkube cache\n2022612 - no namespace field for \"Kubernetes / Compute Resources / Namespace (Pods)\" admin console dashboard\n2022627 - Machine object not picking up external FIP added to an openstack vm\n2022646 - configure-ovs.sh failure - Error: unknown connection \u0027WARN:\u0027\n2022707 - Observe / monitoring dashboard shows forbidden errors on Dev Sandbox\n2022801 - Add Sprint 210 translations\n2022811 - Fix kubelet log rotation file handle leak\n2022812 - [SCALE] ovn-kube service controller executes unnecessary load balancer operations\n2022824 - Large number of sessions created by vmware-vsphere-csi-driver-operator during e2e tests\n2022880 - Pipeline renders with minor visual artifact with certain task dependencies\n2022886 - Incorrect URL in operator description\n2023042 - CRI-O filters custom runtime allowed annotation when both custom workload and custom runtime sections specified under the config\n2023060 - [e2e][automation] Windows VM with CDROM migration\n2023077 - [e2e][automation] Home 
Overview Virtualization status\n2023090 - [e2e][automation] Examples of Import URL for VM templates\n2023102 - [e2e][automation] Cloudinit disk of VM from custom template\n2023216 - ACL for a deleted egressfirewall still present on node join switch\n2023228 - Remove Tech preview badge on Trigger components 1.6 OSP on OCP 4.9\n2023238 - [sig-devex][Feature:ImageEcosystem][python][Slow] hot deploy for openshift python image Django example should work with hot deploy\n2023342 - SCC admission should take ephemeralContainers into account\n2023356 - Devfiles can\u0027t be loaded in Safari on macOS (403 - Forbidden)\n2023434 - Update Azure Machine Spec API to accept Marketplace Images\n2023500 - Latency experienced while waiting for volumes to attach to node\n2023522 - can\u0027t remove package from index: database is locked\n2023560 - \"Network Attachment Definitions\" has no project field on the top in the list view\n2023592 - [e2e][automation] add mac spoof check for nad\n2023604 - ACL violation when deleting a provisioning-configuration resource\n2023607 - console returns blank page when normal user without any projects visit Installed Operators page\n2023638 - Downgrade support level for extended control plane integration to Dev Preview\n2023657 - inconsistent behaviours of adding ssh key on rhel node between 4.9 and 4.10\n2023675 - Changing CNV Namespace\n2023779 - Fix Patch 104847 in 4.9\n2023781 - initial hardware devices is not loading in wizard\n2023832 - CCO updates lastTransitionTime for non-Status changes\n2023839 - Bump recommended FCOS to 34.20211031.3.0\n2023865 - Console css overrides prevent dynamic plug-in PatternFly tables from displaying correctly\n2023950 - make test-e2e-operator on kubernetes-nmstate results in failure to pull image from \"registry:5000\" repository\n2023985 - [4.10] OVN idle service cannot be accessed after upgrade from 4.8\n2024055 - External DNS added extra prefix for the TXT record\n2024108 - Occasionally node remains in 
SchedulingDisabled state even after update has been completed sucessfully\n2024190 - e2e-metal UPI is permafailing with inability to find rhcos.json\n2024199 - 400 Bad Request error for some queries for the non admin user\n2024220 - Cluster monitoring checkbox flickers when installing Operator in all-namespace mode\n2024262 - Sample catalog is not displayed when one API call to the backend fails\n2024309 - cluster-etcd-operator: defrag controller needs to provide proper observability\n2024316 - modal about support displays wrong annotation\n2024328 - [oVirt / RHV] PV disks are lost when machine deleted while node is disconnected\n2024399 - Extra space is in the translated text of \"Add/Remove alternate service\" on Create Route page\n2024448 - When ssh_authorized_keys is empty in form view it should not appear in yaml view\n2024493 - Observe \u003e Alerting \u003e Alerting rules page throws error trying to destructure undefined\n2024515 - test-blocker: Ceph-storage-plugin tests failing\n2024535 - hotplug disk missing OwnerReference\n2024537 - WINDOWS_IMAGE_LINK does not refer to windows cloud image\n2024547 - Detail page is breaking for namespace store , backing store and bucket class. 
2024551 - KMS resources not getting created for IBM FlashSystem storage
2024586 - Special Resource Operator(SRO) - Empty image in BuildConfig when using RT kernel
2024613 - pod-identity-webhook starts without tls
2024617 - vSphere CSI tests constantly failing with Rollout of the monitoring stack failed and is degraded
2024665 - Bindable services are not shown on topology
2024731 - linuxptp container: unnecessary checking of interfaces
2024750 - i18n some remaining OLM items
2024804 - gcp-pd-csi-driver does not use trusted-ca-bundle when cluster proxy configured
2024826 - [RHOS/IPI] Masters are not joining a clusters when installing on OpenStack
2024841 - test Keycloak with latest tag
2024859 - Not able to deploy an existing image from private image registry using developer console
2024880 - Egress IP breaks when network policies are applied
2024900 - Operator upgrade kube-apiserver
2024932 - console throws "Unauthorized" error after logging out
2024933 - openshift-sync plugin does not sync existing secrets/configMaps on start up
2025093 - Installer does not honour diskformat specified in storage policy and defaults to zeroedthick
2025230 - ClusterAutoscalerUnschedulablePods should not be a warning
2025266 - CreateResource route has exact prop which need to be removed
2025301 - [e2e][automation] VM actions availability in different VM states
2025304 - overwrite storage section of the DV spec instead of the pvc section
2025431 - [RFE]Provide specific windows source link
2025458 - [IPI-AWS] cluster-baremetal-operator pod in a crashloop state after patching from 4.7.21 to 4.7.36
2025464 - [aws] openshift-install gather bootstrap collects logs for bootstrap and only one master node
2025467 - [OVN-K][ETP=local] Host to service backed by ovn pods doesn't work for ExternalTrafficPolicy=local
2025481 - Update VM Snapshots UI
2025488 - [DOCS] Update the doc for nmstate operator installation
2025592 - ODC 4.9 supports invalid devfiles only
2025765 - It should not try to load from storageProfile after unchecking "Apply optimized StorageProfile settings"
2025767 - VMs orphaned during machineset scaleup
2025770 - [e2e] non-priv seems looking for v2v-vmware configMap in ns "kubevirt-hyperconverged" while using customize wizard
2025788 - [IPI on azure]Pre-check on IPI Azure, should check VM Size's vCPUsAvailable instead of vCPUs for the sku.
2025821 - Make "Network Attachment Definitions" available to regular user
2025823 - The console nav bar ignores plugin separator in existing sections
2025830 - CentOS capitalizaion is wrong
2025837 - Warn users that the RHEL URL expire
2025884 - External CCM deploys openstack-cloud-controller-manager from quay.io/openshift/origin-*
2025903 - [UI] RoleBindings tab doesn't show correct rolebindings
2026104 - [sig-imageregistry][Feature:ImageAppend] Image append should create images by appending them [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2026178 - OpenShift Alerting Rules Style-Guide Compliance
2026209 - Updation of task is getting failed (tekton hub integration)
2026223 - Internal error occurred: failed calling webhook "ptpconfigvalidationwebhook.openshift.io"
2026321 - [UPI on Azure] Shall we remove allowedValue about VMSize in ARM templates
2026343 - [upgrade from 4.5 to 4.6] .status.connectionState.address of catsrc community-operators is not correct
2026352 - Kube-Scheduler revision-pruner fail during install of new cluster
2026374 - aws-pod-identity-webhook go.mod version out of sync with build environment
2026383 - Error when rendering custom Grafana dashboard through ConfigMap
2026387 - node tuning operator metrics endpoint serving old certificates after certificate rotation
2026396 - Cachito Issues: sriov-network-operator Image build failure
2026488 - openshift-controller-manager - delete event is repeating pathologically
2026489 - ThanosRuleRuleEvaluationLatencyHigh alerts when a big quantity of alerts defined.
2026560 - Cluster-version operator does not remove unrecognized volume mounts
2026699 - fixed a bug with missing metadata
2026813 - add Mellanox CX-6 Lx DeviceID 101f NIC support in SR-IOV Operator
2026898 - Description/details are missing for Local Storage Operator
2027132 - Use the specific icon for Fedora and CentOS template
2027238 - "Node Exporter / USE Method / Cluster" CPU utilization graph shows incorrect legend
2027272 - KubeMemoryOvercommit alert should be human readable
2027281 - [Azure] External-DNS cannot find the private DNS zone in the resource group
2027288 - Devfile samples can't be loaded after fixing it on Safari (redirect caching issue)
2027299 - The status of checkbox component is not revealed correctly in code
2027311 - K8s watch hooks do not work when fetching core resources
2027342 - Alert ClusterVersionOperatorDown is firing on OpenShift Container Platform after ca certificate rotation
2027363 - The azure-file-csi-driver and azure-file-csi-driver-operator don't use the downstream images
2027387 - [IBMCLOUD] Terraform ibmcloud-provider buffers entirely the qcow2 image causing spikes of 5GB of RAM during installation
2027498 - [IBMCloud] SG Name character length limitation
2027501 - [4.10] Bootimage bump tracker
2027524 - Delete Application doesn't delete Channels or Brokers
2027563 - e2e/add-flow-ci.feature fix accessibility violations
2027585 - CVO crashes when changing spec.upstream to a cincinnati graph which includes invalid conditional edges
2027629 - Gather ValidatingWebhookConfiguration and MutatingWebhookConfiguration resource definitions
2027685 - openshift-cluster-csi-drivers pods crashing on PSI
2027745 - default samplesRegistry prevents the creation of imagestreams when registrySources.allowedRegistries is enforced
2027824 - ovnkube-master CrashLoopBackoff: panic: Expected slice or struct but got string
2027917 - No settings in hostfirmwaresettings and schema objects for masters
2027927 - sandbox creation fails due to obsolete option in /etc/containers/storage.conf
2027982 - nncp stucked at ConfigurationProgressing
2028019 - Max pending serving CSRs allowed in cluster machine approver is not right for UPI clusters
2028024 - After deleting a SpecialResource, the node is still tagged although the driver is removed
2028030 - Panic detected in cluster-image-registry-operator pod
2028042 - Desktop viewer for Windows VM shows "no Service for the RDP (Remote Desktop Protocol) can be found"
2028054 - Cloud controller manager operator can't get leader lease when upgrading from 4.8 up to 4.9
2028106 - [RFE] Use dynamic plugin actions for kubevirt plugin
2028141 - Console tests doesn't pass on Node.js 15 and 16
2028160 - Remove i18nKey in network-policy-peer-selectors.tsx
2028162 - Add Sprint 210 translations
2028170 - Remove leading and trailing whitespace
2028174 - Add Sprint 210 part 2 translations
2028187 - Console build doesn't pass on Node.js 16 because node-sass doesn't support it
2028217 - Cluster-version operator does not default Deployment replicas to one
2028240 - Multiple CatalogSources causing higher CPU use than necessary
2028268 - Password parameters are listed in FirmwareSchema in spite that cannot and shouldn't be set in HostFirmwareSettings
2028325 - disableDrain should be set automatically on SNO
2028484 - AWS EBS CSI driver's livenessprobe does not respect operator's loglevel
2028531 - Missing netFilter to the list of parameters when platform is OpenStack
2028610 - Installer doesn't retry on GCP rate limiting
2028685 - LSO repeatedly reports errors while diskmaker-discovery pod is starting
2028695 - destroy cluster does not prune bootstrap instance profile
2028731 - The containerruntimeconfig controller has wrong assumption regarding the number of containerruntimeconfigs
2028802 - CRI-O panic due to invalid memory address or nil pointer dereference
2028816 - VLAN IDs not released on failures
2028881 - Override not working for the PerformanceProfile template
2028885 - Console should show an error context if it logs an error object
2028949 - Masthead dropdown item hover text color is incorrect
2028963 - Whereabouts should reconcile stranded IP addresses
2029034 - enabling ExternalCloudProvider leads to inoperative cluster
2029178 - Create VM with wizard - page is not displayed
2029181 - Missing CR from PGT
2029273 - wizard is not able to use if project field is "All Projects"
2029369 - Cypress tests github rate limit errors
2029371 - patch pipeline--worker nodes unexpectedly reboot during scale out
2029394 - missing empty text for hardware devices at wizard review
2029414 - Alibaba Disk snapshots with XFS filesystem cannot be used
2029416 - Alibaba Disk CSI driver does not use credentials provided by CCO / ccoctl
2029521 - EFS CSI driver cannot delete volumes under load
2029570 - Azure Stack Hub: CSI Driver does not use user-ca-bundle
2029579 - Clicking on an Application which has a Helm Release in it causes an error
2029644 - New resource FirmwareSchema - reset_required exists for Dell machines and doesn't for HPE
2029645 - Sync upstream 1.15.0 downstream
2029671 - VM action "pause" and "clone" should be disabled while VM disk is still being importing
2029742 - [ovn] Stale lr-policy-list and snat rules left for egressip
2029750 - cvo keep restart due to it fail to get feature gate value during the initial start stage
2029785 - CVO panic when an edge is included in both edges and conditionaledges
2029843 - Downstream ztp-site-generate-rhel8 4.10 container image missing content(/home/ztp)
2030003 - HFS CRD: Attempt to set Integer parameter to not-numeric string value - no error
2030029 - [4.10][goroutine]Namespace stuck terminating: Failed to delete all resource types, 1 remaining: unexpected items still remain in namespace
2030228 - Fix StorageSpec resources field to use correct API
2030229 - Mirroring status card reflect wrong data
2030240 - Hide overview page for non-privileged user
2030305 - Export App job do not completes
2030347 - kube-state-metrics exposes metrics about resource annotations
2030364 - Shared resource CSI driver monitoring is not setup correctly
2030488 - Numerous Azure CI jobs are Failing with Partially Rendered machinesets
2030534 - Node selector/tolerations rules are evaluated too early
2030539 - Prometheus is not highly available
2030556 - Don't display Description or Message fields for alerting rules if those annotations are missing
2030568 - Operator installation fails to parse operatorframework.io/initialization-resource annotation
2030574 - console service uses older "service.alpha.openshift.io" for the service serving certificates.
2030677 - BOND CNI: There is no option to configure MTU on a Bond interface
2030692 - NPE in PipelineJobListener.upsertWorkflowJob
2030801 - CVE-2021-44716 golang: net/http: limit growth of header canonicalization cache
2030806 - CVE-2021-44717 golang: syscall: don't close fd 0 on ForkExec error
2030847 - PerformanceProfile API version should be v2
2030961 - Customizing the OAuth server URL does not apply to upgraded cluster
2031006 - Application name input field is not autofocused when user selects "Create application"
2031012 - Services of type loadbalancer do not work if the traffic reaches the node from an interface different from br-ex
2031040 - Error screen when open topology sidebar for a Serverless / knative service which couldn't be started
2031049 - [vsphere upi] pod machine-config-operator cannot be started due to panic issue
2031057 - Topology sidebar for Knative services shows a small pod ring with "0 undefined" as tooltip
2031060 - Failing CSR Unit test due to expired test certificate
2031085 - ovs-vswitchd running more threads than expected
2031141 - Some pods not able to reach k8s api svc IP 198.223.0.1
2031228 - CVE-2021-43813 grafana: directory traversal vulnerability
2031502 - [RFE] New common templates crash the ui
2031685 - Duplicated forward upstreams should be removed from the dns operator
2031699 - The displayed ipv6 address of a dns upstream should be case sensitive
2031797 - [RFE] Order and text of Boot source type input are wrong
2031826 - CI tests needed to confirm driver-toolkit image contents
2031831 - OCP Console - Global CSS overrides affecting dynamic plugins
2031839 - Starting from Go 1.17 invalid certificates will render a cluster dysfunctional
2031858 - GCP beta-level Role (was: CCO occasionally down, reporting networksecurity.googleapis.com API as disabled)
2031875 - [RFE]: Provide online documentation for the SRO CRD (via oc explain)
2031926 - [ipv6dualstack] After SVC conversion from single stack only to RequireDualStack, cannot curl NodePort from the node itself
2032006 - openshift-gitops-application-controller-0 failed to schedule with sufficient node allocatable resource
2032111 - arm64 cluster, create project and deploy the example deployment, pod is CrashLoopBackOff due to the image is built on linux+amd64
2032141 - open the alertrule link in new tab, got empty page
2032179 - [PROXY] external dns pod cannot reach to cloud API in the cluster behind a proxy
2032296 - Cannot create machine with ephemeral disk on Azure
2032407 - UI will show the default openshift template wizard for HANA template
2032415 - Templates page - remove "support level" badge and add "support level" column which should not be hard coded
2032421 - [RFE] UI integration with automatic updated images
2032516 - Not able to import git repo with .devfile.yaml
2032521 - openshift-installer intermittent failure on AWS with "Error: Provider produced inconsistent result after apply" when creating the aws_vpc_dhcp_options_association resource
2032547 - hardware devices table have filter when table is empty
2032565 - Deploying compressed
files with a MachineConfig resource degrades the MachineConfigPool\n2032566 - Cluster-ingress-router does not support Azure Stack\n2032573 - Adopting enforces deploy_kernel/ramdisk which does not work with deploy_iso\n2032589 - DeploymentConfigs ignore resolve-names annotation\n2032732 - Fix styling conflicts due to recent console-wide CSS changes\n2032831 - Knative Services and Revisions are not shown when Service has no ownerReference\n2032851 - Networking is \"not available\" in Virtualization Overview\n2032926 - Machine API components should use K8s 1.23 dependencies\n2032994 - AddressPool IP is not allocated to service external IP wtih aggregationLength 24\n2032998 - Can not achieve 250 pods/node with OVNKubernetes in a multiple worker node cluster\n2033013 - Project dropdown in user preferences page is broken\n2033044 - Unable to change import strategy if devfile is invalid\n2033098 - Conjunction in ProgressiveListFooter.tsx is not translatable\n2033111 - IBM VPC operator library bump removed global CLI args\n2033138 - \"No model registered for Templates\" shows on customize wizard\n2033215 - Flaky CI: crud/other-routes.spec.ts fails sometimes with an cypress ace/a11y AssertionError: 1 accessibility violation was detected\n2033239 - [IPI on Alibabacloud] \u0027openshift-install\u0027 gets the wrong region (\u2018cn-hangzhou\u2019) selected\n2033257 - unable to use configmap for helm charts\n2033271 - [IPI on Alibabacloud] destroying cluster succeeded, but the resource group deletion wasn\u2019t triggered\n2033290 - Product builds for console are failing\n2033382 - MAPO is missing machine annotations\n2033391 - csi-driver-shared-resource-operator sets unused CVO-manifest annotations\n2033403 - Devfile catalog does not show provider information\n2033404 - Cloud event schema is missing source type and resource field is using wrong value\n2033407 - Secure route data is not pre-filled in edit flow form\n2033422 - CNO not allowing LGW conversion from SGW in 
runtime\n2033434 - Offer darwin/arm64 oc in clidownloads\n2033489 - CCM operator failing on baremetal platform\n2033518 - [aws-efs-csi-driver]Should not accept invalid FSType in sc for AWS EFS driver\n2033524 - [IPI on Alibabacloud] interactive installer cannot list existing base domains\n2033536 - [IPI on Alibabacloud] bootstrap complains invalid value for alibabaCloud.resourceGroupID when updating \"cluster-infrastructure-02-config.yml\" status, which leads to bootstrap failed and all master nodes NotReady\n2033538 - Gather Cost Management Metrics Custom Resource\n2033579 - SRO cannot update the special-resource-lifecycle ConfigMap if the data field is undefined\n2033587 - Flaky CI test project-dashboard.scenario.ts: Resource Quotas Card was not found on project detail page\n2033634 - list-style-type: disc is applied to the modal dropdowns\n2033720 - Update samples in 4.10\n2033728 - Bump OVS to 2.16.0-33\n2033729 - remove runtime request timeout restriction for azure\n2033745 - Cluster-version operator makes upstream update service / Cincinnati requests more frequently than intended\n2033749 - Azure Stack Terraform fails without Local Provider\n2033750 - Local volume should pull multi-arch image for kube-rbac-proxy\n2033751 - Bump kubernetes to 1.23\n2033752 - make verify fails due to missing yaml-patch\n2033784 - set kube-apiserver degraded=true if webhook matches a virtual resource\n2034004 - [e2e][automation] add tests for VM snapshot improvements\n2034068 - [e2e][automation] Enhance tests for 4.10 downstream\n2034087 - [OVN] EgressIP was assigned to the node which is not egress node anymore\n2034097 - [OVN] After edit EgressIP object, the status is not correct\n2034102 - [OVN] Recreate the deleted EgressIP object got InvalidEgressIP warning\n2034129 - blank page returned when clicking \u0027Get started\u0027 button\n2034144 - [OVN AWS] ovn-kube egress IP monitoring cannot detect the failure on ovn-k8s-mp0\n2034153 - CNO does not verify MTU migration for 
OpenShiftSDN\n2034155 - [OVN-K] [Multiple External Gateways] Per pod SNAT is disabled\n2034170 - Use function.knative.dev for Knative Functions related labels\n2034190 - unable to add new VirtIO disks to VMs\n2034192 - Prometheus fails to insert reporting metrics when the sample limit is met\n2034243 - regular user cant load template list\n2034245 - installing a cluster on aws, gcp always fails with \"Error: Incompatible provider version\"\n2034248 - GPU/Host device modal is too small\n2034257 - regular user `Create VM` missing permissions alert\n2034285 - [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources [Serial] [Suite:openshift/conformance/serial]\n2034287 - do not block upgrades if we can\u0027t create storageclass in 4.10 in vsphere\n2034300 - Du validator policy is NonCompliant after DU configuration completed\n2034319 - Negation constraint is not validating packages\n2034322 - CNO doesn\u0027t pick up settings required when ExternalControlPlane topology\n2034350 - The CNO should implement the Whereabouts IP reconciliation cron job\n2034362 - update description of disk interface\n2034398 - The Whereabouts IPPools CRD should include the podref field\n2034409 - Default CatalogSources should be pointing to 4.10 index images\n2034410 - Metallb BGP, BFD: prometheus is not scraping the frr metrics\n2034413 - cloud-network-config-controller fails to init with secret \"cloud-credentials\" not found in manual credential mode\n2034460 - Summary: cloud-network-config-controller does not account for different environment\n2034474 - Template\u0027s boot source is \"Unknown source\" before and after set enableCommonBootImageImport to true\n2034477 - [OVN] Multiple EgressIP objects configured, EgressIPs weren\u0027t working properly\n2034493 - Change cluster version operator log level\n2034513 - [OVN] After update one EgressIP in EgressIP object, one internal IP lost from lr-policy-list\n2034527 - IPI deployment 
fails \u0027timeout reached while inspecting the node\u0027 when provisioning network ipv6\n2034528 - [IBM VPC] volumeBindingMode should be WaitForFirstConsumer\n2034534 - Update ose-machine-api-provider-openstack images to be consistent with ART\n2034537 - Update team\n2034559 - KubeAPIErrorBudgetBurn firing outside recommended latency thresholds\n2034563 - [Azure] create machine with wrong ephemeralStorageLocation value success\n2034577 - Current OVN gateway mode should be reflected on node annotation as well\n2034621 - context menu not popping up for application group\n2034622 - Allow volume expansion by default in vsphere CSI storageclass 4.10\n2034624 - Warn about unsupported CSI driver in vsphere operator\n2034647 - missing volumes list in snapshot modal\n2034648 - Rebase openshift-controller-manager to 1.23\n2034650 - Rebase openshift/builder to 1.23\n2034705 - vSphere: storage e2e tests logging configuration data\n2034743 - EgressIP: assigning the same egress IP to a second EgressIP object after a ovnkube-master restart does not fail. 
\n2034766 - Special Resource Operator(SRO) - no cert-manager pod created in dual stack environment\n2034785 - ptpconfig with summary_interval cannot be applied\n2034823 - RHEL9 should be starred in template list\n2034838 - An external router can inject routes if no service is added\n2034839 - Jenkins sync plugin does not synchronize ConfigMap having label role=jenkins-agent\n2034879 - Lifecycle hook\u0027s name and owner shouldn\u0027t be allowed to be empty\n2034881 - Cloud providers components should use K8s 1.23 dependencies\n2034884 - ART cannot build the image because it tries to download controller-gen\n2034889 - `oc adm prune deployments` does not work\n2034898 - Regression in recently added Events feature\n2034957 - update openshift-apiserver to kube 1.23.1\n2035015 - ClusterLogForwarding CR remains stuck remediating forever\n2035093 - openshift-cloud-network-config-controller never runs on Hypershift cluster\n2035141 - [RFE] Show GPU/Host devices in template\u0027s details tab\n2035146 - \"kubevirt-plugin~PVC cannot be empty\" shows on add-disk modal while adding existing PVC\n2035167 - [cloud-network-config-controller] unable to deleted cloudprivateipconfig when deleting\n2035199 - IPv6 support in mtu-migration-dispatcher.yaml\n2035239 - e2e-metal-ipi-virtualmedia tests are permanently failing\n2035250 - Peering with ebgp peer over multi-hops doesn\u0027t work\n2035264 - [RFE] Provide a proper message for nonpriv user who not able to add PCI devices\n2035315 - invalid test cases for AWS passthrough mode\n2035318 - Upgrade management workflow needs to allow custom upgrade graph path for disconnected env\n2035321 - Add Sprint 211 translations\n2035326 - [ExternalCloudProvider] installation with additional network on workers fails\n2035328 - Ccoctl does not ignore credentials request manifest marked for deletion\n2035333 - Kuryr orphans ports on 504 errors from Neutron\n2035348 - Fix two grammar issues in kubevirt-plugin.json strings\n2035393 - oc set data 
--dry-run=server makes persistent changes to configmaps and secrets\n2035409 - OLM E2E test depends on operator package that\u0027s no longer published\n2035439 - SDN Automatic assignment EgressIP on GCP returned node IP adress not egressIP address\n2035453 - [IPI on Alibabacloud] 2 worker machines stuck in Failed phase due to connection to \u0027ecs-cn-hangzhou.aliyuncs.com\u0027 timeout, although the specified region is \u0027us-east-1\u0027\n2035454 - [IPI on Alibabacloud] the OSS bucket created during installation for image registry is not deleted after destroying the cluster\n2035467 - UI: Queried metrics can\u0027t be ordered on Oberve-\u003eMetrics page\n2035494 - [SDN Migration]ovnkube-node pods CrashLoopBackOff after sdn migrated to ovn for RHEL workers\n2035515 - [IBMCLOUD] allowVolumeExpansion should be true in storage class\n2035602 - [e2e][automation] add tests for Virtualization Overview page cards\n2035703 - Roles -\u003e RoleBindings tab doesn\u0027t show RoleBindings correctly\n2035704 - RoleBindings list page filter doesn\u0027t apply\n2035705 - Azure \u0027Destroy cluster\u0027 get stuck when the cluster resource group is already not existing. 
\n2035757 - [IPI on Alibabacloud] one master node turned NotReady which leads to installation failed\n2035772 - AccessMode and VolumeMode is not reserved for customize wizard\n2035847 - Two dashes in the Cronjob / Job pod name\n2035859 - the output of opm render doesn\u0027t contain olm.constraint which is defined in dependencies.yaml\n2035882 - [BIOS setting values] Create events for all invalid settings in spec\n2035903 - One redundant capi-operator credential requests in \u201coc adm extract --credentials-requests\u201d\n2035910 - [UI] Manual approval options are missing after ODF 4.10 installation starts when Manual Update approval is chosen\n2035927 - Cannot enable HighNodeUtilization scheduler profile\n2035933 - volume mode and access mode are empty in customize wizard review tab\n2035969 - \"ip a \" shows \"Error: Peer netns reference is invalid\" after create test pods\n2035986 - Some pods under kube-scheduler/kube-controller-manager are using the deprecated annotation\n2036006 - [BIOS setting values] Attempt to set Integer parameter results in preparation error\n2036029 - New added cloud-network-config operator doesn\u2019t supported aws sts format credential\n2036096 - [azure-file-csi-driver] there are no e2e tests for NFS backend\n2036113 - cluster scaling new nodes ovs-configuration fails on all new nodes\n2036567 - [csi-driver-nfs] Upstream merge: Bump k8s libraries to 1.23\n2036569 - [cloud-provider-openstack] Upstream merge: Bump k8s libraries to 1.23\n2036577 - OCP 4.10 nightly builds from 4.10.0-0.nightly-s390x-2021-12-18-034912 to 4.10.0-0.nightly-s390x-2022-01-11-233015 fail to upgrade from OCP 4.9.11 and 4.9.12 for network type OVNKubernetes for zVM hypervisor environments\n2036622 - sdn-controller crashes when restarted while a previous egress IP assignment exists\n2036717 - Valid AlertmanagerConfig custom resource with valid a mute time interval definition is rejected\n2036826 - `oc adm prune deployments` can prune the RC/RS\n2036827 - The 
ccoctl still accepts CredentialsRequests without ServiceAccounts on GCP platform\n2036861 - kube-apiserver is degraded while enable multitenant\n2036937 - Command line tools page shows wrong download ODO link\n2036940 - oc registry login fails if the file is empty or stdout\n2036951 - [cluster-csi-snapshot-controller-operator] proxy settings is being injected in container\n2036989 - Route URL copy to clipboard button wraps to a separate line by itself\n2036990 - ZTP \"DU Done inform policy\" never becomes compliant on multi-node clusters\n2036993 - Machine API components should use Go lang version 1.17\n2037036 - The tuned profile goes into degraded status and ksm.service is displayed in the log. \n2037061 - aws and gcp CredentialsRequest manifests missing ServiceAccountNames list for cluster-api\n2037073 - Alertmanager container fails to start because of startup probe never being successful\n2037075 - Builds do not support CSI volumes\n2037167 - Some log level in ibm-vpc-block-csi-controller are hard code\n2037168 - IBM-specific Deployment manifest for package-server-manager should be excluded on non-IBM cluster-profiles\n2037182 - PingSource badge color is not matched with knativeEventing color\n2037203 - \"Running VMs\" card is too small in Virtualization Overview\n2037209 - [IPI on Alibabacloud] worker nodes are put in the default resource group unexpectedly\n2037237 - Add \"This is a CD-ROM boot source\" to customize wizard\n2037241 - default TTL for noobaa cache buckets should be 0\n2037246 - Cannot customize auto-update boot source\n2037276 - [IBMCLOUD] vpc-node-label-updater may fail to label nodes appropriately\n2037288 - Remove stale image reference\n2037331 - Ensure the ccoctl behaviors are similar between aws and gcp on the existing resources\n2037483 - Rbacs for Pods within the CBO should be more restrictive\n2037484 - Bump dependencies to k8s 1.23\n2037554 - Mismatched wave number error message should include the wave numbers that are in 
conflict\n2037622 - [4.10-Alibaba CSI driver][Restore size for volumesnapshot/volumesnapshotcontent is showing as 0 in Snapshot feature for Alibaba platform]\n2037635 - impossible to configure custom certs for default console route in ingress config\n2037637 - configure custom certificate for default console route doesn\u0027t take effect for OCP \u003e= 4.8\n2037638 - Builds do not support CSI volumes as volume sources\n2037664 - text formatting issue in Installed Operators list table\n2037680 - [IPI on Alibabacloud] sometimes operator \u0027cloud-controller-manager\u0027 tells empty VERSION, due to conflicts on listening tcp :8080\n2037689 - [IPI on Alibabacloud] sometimes operator \u0027cloud-controller-manager\u0027 tells empty VERSION, due to conflicts on listening tcp :8080\n2037801 - Serverless installation is failing on CI jobs for e2e tests\n2037813 - Metal Day 1 Networking - networkConfig Field Only Accepts String Format\n2037856 - use lease for leader election\n2037891 - 403 Forbidden error shows for all the graphs in each grafana dashboard after upgrade from 4.9 to 4.10\n2037903 - Alibaba Cloud: delete-ram-user requires the credentials-requests\n2037904 - upgrade operator deployment failed due to memory limit too low for manager container\n2038021 - [4.10-Alibaba CSI driver][Default volumesnapshot class is not added/present after successful cluster installation]\n2038034 - non-privileged user cannot see auto-update boot source\n2038053 - Bump dependencies to k8s 1.23\n2038088 - Remove ipa-downloader references\n2038160 - The `default` project missed the annotation : openshift.io/node-selector: \"\"\n2038166 - Starting from Go 1.17 invalid certificates will render a cluster non-functional\n2038196 - must-gather is missing collecting some metal3 resources\n2038240 - Error when configuring a file using permissions bigger than decimal 511 (octal 0777)\n2038253 - Validator Policies are long lived\n2038272 - Failures to build a PreprovisioningImage are not 
reported\n2038384 - Azure Default Instance Types are Incorrect\n2038389 - Failing test: [sig-arch] events should not repeat pathologically\n2038412 - Import page calls the git file list unnecessarily twice from GitHub/GitLab/Bitbucket\n2038465 - Upgrade chromedriver to 90.x to support Mac M1 chips\n2038481 - kube-controller-manager-guard and openshift-kube-scheduler-guard pods being deleted and restarted on a cordoned node when drained\n2038596 - Auto egressIP for OVN cluster on GCP: After egressIP object is deleted, egressIP still takes effect\n2038663 - update kubevirt-plugin OWNERS\n2038691 - [AUTH-8] Panic on user login when the user belongs to a group in the IdP side and the group already exists via \"oc adm groups new\"\n2038705 - Update ptp reviewers\n2038761 - Open Observe-\u003eTargets page, wait for a while, page become blank\n2038768 - All the filters on the Observe-\u003eTargets page can\u0027t work\n2038772 - Some monitors failed to display on Observe-\u003eTargets page\n2038793 - [SDN EgressIP] After reboot egress node, the egressip was lost from egress node\n2038827 - should add user containers in /etc/subuid and /etc/subgid to support run pods in user namespaces\n2038832 - New templates for centos stream8 are missing registry suggestions in create vm wizard\n2038840 - [SDN EgressIP]cloud-network-config-controller pod was CrashLoopBackOff after some operation\n2038864 - E2E tests fail because multi-hop-net was not created\n2038879 - All Builds are getting listed in DeploymentConfig under workloads on OpenShift Console\n2038934 - CSI driver operators should use the trusted CA bundle when cluster proxy is configured\n2038968 - Move feature gates from a carry patch to openshift/api\n2039056 - Layout issue with breadcrumbs on API explorer page\n2039057 - Kind column is not wide enough in API explorer page\n2039064 - Bulk Import e2e test flaking at a high rate\n2039065 - Diagnose and fix Bulk Import e2e test that was previously disabled\n2039085 - Cloud 
credential operator configuration failing to apply in hypershift/ROKS clusters\n2039099 - [OVN EgressIP GCP] After reboot egress node, egressip that was previously assigned got lost\n2039109 - [FJ OCP4.10 Bug]: startironic.sh failed to pull the image of image-customization container when behind a proxy\n2039119 - CVO hotloops on Service openshift-monitoring/cluster-monitoring-operator\n2039170 - [upgrade]Error shown on registry operator \"missing the cloud-provider-config configmap\" after upgrade\n2039227 - Improve image customization server parameter passing during installation\n2039241 - Improve image customization server parameter passing during installation\n2039244 - Helm Release revision history page crashes the UI\n2039294 - SDN controller metrics cannot be consumed correctly by prometheus\n2039311 - oc Does Not Describe Build CSI Volumes\n2039315 - Helm release list page should only fetch secrets for deployed charts\n2039321 - SDN controller metrics are not being consumed by prometheus\n2039330 - Create NMState button doesn\u0027t work in OperatorHub web console\n2039339 - cluster-ingress-operator should report Unupgradeable if user has modified the aws resources annotations\n2039345 - CNO does not verify the minimum MTU value for IPv6/dual-stack clusters. 
\n2039359 - `oc adm prune deployments` can\u0027t prune the RS where the associated Deployment no longer exists\n2039382 - gather_metallb_logs does not have execution permission\n2039406 - logout from rest session after vsphere operator sync is finished\n2039408 - Add GCP region northamerica-northeast2 to allowed regions\n2039414 - Cannot see the weights increased for NodeAffinity, InterPodAffinity, TaintandToleration\n2039425 - No need to set KlusterletAddonConfig CR applicationManager-\u003eenabled: true in RAN ztp deployment\n2039491 - oc - git:// protocol used in unit tests\n2039516 - Bump OVN to ovn21.12-21.12.0-25\n2039529 - Project Dashboard Resource Quotas Card empty state test flaking at a high rate\n2039534 - Diagnose and fix Project Dashboard Resource Quotas Card test that was previously disabled\n2039541 - Resolv-prepender script duplicating entries\n2039586 - [e2e] update centos8 to centos stream8\n2039618 - VM created from SAP HANA template leads to 404 page if leave one network parameter empty\n2039619 - [AWS] In tree provisioner storageclass aws disk type should contain \u0027gp3\u0027 and csi provisioner storageclass default aws disk type should be \u0027gp3\u0027\n2039670 - Create PDBs for control plane components\n2039678 - Page goes blank when create image pull secret\n2039689 - [IPI on Alibabacloud] Pay-by-specification NAT is no longer supported\n2039743 - React missing key warning when open operator hub detail page (and maybe others as well)\n2039756 - React missing key warning when open KnativeServing details\n2039770 - Observe dashboard doesn\u0027t react on time-range changes after browser reload when perspective is changed in another tab\n2039776 - Observe dashboard shows nothing if the URL links to an non existing dashboard\n2039781 - [GSS] OBC is not visible by admin of a Project on Console\n2039798 - Contextual binding with Operator backed service creates visual connector instead of Service binding connector\n2039868 - Insights Advisor 
widget is not in the disabled state when the Insights Operator is disabled\n2039880 - Log level too low for control plane metrics\n2039919 - Add E2E test for router compression feature\n2039981 - ZTP for standard clusters installs stalld on master nodes\n2040132 - Flag --port has been deprecated, This flag has no effect now and will be removed in v1.24. You can use --secure-port instead\n2040136 - external-dns-operator pod keeps restarting and reports error: timed out waiting for cache to be synced\n2040143 - [IPI on Alibabacloud] suggest to remove region \"cn-nanjing\" or provide better error message\n2040150 - Update ConfigMap keys for IBM HPCS\n2040160 - [IPI on Alibabacloud] installation fails when region does not support pay-by-bandwidth\n2040285 - Bump build-machinery-go for console-operator to pickup change in yaml-patch repository\n2040357 - bump OVN to ovn-2021-21.12.0-11.el8fdp\n2040376 - \"unknown instance type\" error for supported m6i.xlarge instance\n2040394 - Controller: enqueue the failed configmap till services update\n2040467 - Cannot build ztp-site-generator container image\n2040504 - Change AWS EBS GP3 IOPS in MachineSet doesn\u0027t take affect in OpenShift 4\n2040521 - RouterCertsDegraded certificate could not validate route hostname v4-0-config-system-custom-router-certs.apps\n2040535 - Auto-update boot source is not available in customize wizard\n2040540 - ovs hardware offload: ovsargs format error when adding vf netdev name\n2040603 - rhel worker scaleup playbook failed because missing some dependency of podman\n2040616 - rolebindings page doesn\u0027t load for normal users\n2040620 - [MAPO] Error pulling MAPO image on installation\n2040653 - Topology sidebar warns that another component is updated while rendering\n2040655 - User settings update fails when selecting application in topology sidebar\n2040661 - Different react warnings about updating state on unmounted components when leaving topology\n2040670 - Permafailing CI job: 
periodic-ci-openshift-release-master-nightly-4.10-e2e-gcp-libvirt-cert-rotation\n2040671 - [Feature:IPv6DualStack] most tests are failing in dualstack ipi\n2040694 - Three upstream HTTPClientConfig struct fields missing in the operator\n2040705 - Du policy for standard cluster runs the PTP daemon on masters and workers\n2040710 - cluster-baremetal-operator cannot update BMC subscription CR\n2040741 - Add CI test(s) to ensure that metal3 components are deployed in vSphere, OpenStack and None platforms\n2040782 - Import YAML page blocks input with more then one generateName attribute\n2040783 - The Import from YAML summary page doesn\u0027t show the resource name if created via generateName attribute\n2040791 - Default PGT policies must be \u0027inform\u0027 to integrate with the Lifecycle Operator\n2040793 - Fix snapshot e2e failures\n2040880 - do not block upgrades if we can\u0027t connect to vcenter\n2041087 - MetalLB: MetalLB CR is not upgraded automatically from 4.9 to 4.10\n2041093 - autounattend.xml missing\n2041204 - link to templates in virtualization-cluster-overview inventory card is to all templates\n2041319 - [IPI on Alibabacloud] installation in region \"cn-shanghai\" failed, due to \"Resource alicloud_vswitch CreateVSwitch Failed...InvalidCidrBlock.Overlapped\"\n2041326 - Should bump cluster-kube-descheduler-operator to kubernetes version V1.23\n2041329 - aws and gcp CredentialsRequest manifests missing ServiceAccountNames list for cloud-network-config-controller\n2041361 - [IPI on Alibabacloud] Disable session persistence and removebBandwidth peak of listener\n2041441 - Provision volume with size 3000Gi even if sizeRange: \u0027[10-2000]GiB\u0027 in storageclass on IBM cloud\n2041466 - Kubedescheduler version is missing from the operator logs\n2041475 - React components should have a (mostly) unique name in react dev tools to simplify code analyses\n2041483 - MetallB: quay.io/openshift/origin-kube-rbac-proxy:4.10 deploy Metallb CR is missing 
(controller and speaker pods)\n2041492 - Spacing between resources in inventory card is too small\n2041509 - GCP Cloud provider components should use K8s 1.23 dependencies\n2041510 - cluster-baremetal-operator doesn\u0027t run baremetal-operator\u0027s subscription webhook\n2041541 - audit: ManagedFields are dropped using API not annotation\n2041546 - ovnkube: set election timer at RAFT cluster creation time\n2041554 - use lease for leader election\n2041581 - KubeDescheduler operator log shows \"Use of insecure cipher detected\"\n2041583 - etcd and api server cpu mask interferes with a guaranteed workload\n2041598 - Including CA bundle in Azure Stack cloud config causes MCO failure\n2041605 - Dynamic Plugins: discrepancy in proxy alias documentation/implementation\n2041620 - bundle CSV alm-examples does not parse\n2041641 - Fix inotify leak and kubelet retaining memory\n2041671 - Delete templates leads to 404 page\n2041694 - [IPI on Alibabacloud] installation fails when region does not support the cloud_essd disk category\n2041734 - ovs hwol: VFs are unbind when switchdev mode is enabled\n2041750 - [IPI on Alibabacloud] trying \"create install-config\" with region \"cn-wulanchabu (China (Ulanqab))\" (or \"ap-southeast-6 (Philippines (Manila))\", \"cn-guangzhou (China (Guangzhou))\") failed due to invalid endpoint\n2041763 - The Observe \u003e Alerting pages no longer have their default sort order applied\n2041830 - CI: ovn-kubernetes-master-e2e-aws-ovn-windows is broken\n2041854 - Communities / Local prefs are applied to all the services regardless of the pool, and only one community is applied\n2041882 - cloud-network-config operator can\u0027t work normal on GCP workload identity cluster\n2041888 - Intermittent incorrect build to run correlation, leading to run status updates applied to wrong build, builds stuck in non-terminal phases\n2041926 - [IPI on Alibabacloud] Installer ignores public zone when it does not exist\n2041971 - [vsphere] Reconciliation of 
mutating webhooks didn\u0027t happen\n2041989 - CredentialsRequest manifests being installed for ibm-cloud-managed profile\n2041999 - [PROXY] external dns pod cannot recognize custom proxy CA\n2042001 - unexpectedly found multiple load balancers\n2042029 - kubedescheduler fails to install completely\n2042036 - [IBMCLOUD] \"openshift-install explain installconfig.platform.ibmcloud\" contains not yet supported custom vpc parameters\n2042049 - Seeing warning related to unrecognized feature gate in kubescheduler \u0026 KCM logs\n2042059 - update discovery burst to reflect lots of CRDs on openshift clusters\n2042069 - Revert toolbox to rhcos-toolbox\n2042169 - Can not delete egressnetworkpolicy in Foreground propagation\n2042181 - MetalLB: User should not be allowed add same bgp advertisement twice in BGP address pool\n2042265 - [IBM]\"--scale-down-utilization-threshold\" doesn\u0027t work on IBMCloud\n2042274 - Storage API should be used when creating a PVC\n2042315 - Baremetal IPI deployment with IPv6 control plane and disabled provisioning network fails as the nodes do not pass introspection\n2042366 - Lifecycle hooks should be independently managed\n2042370 - [IPI on Alibabacloud] installer panics when the zone does not have an enhanced NAT gateway\n2042382 - [e2e][automation] CI takes more then 2 hours to run\n2042395 - Add prerequisites for active health checks test\n2042438 - Missing rpms in openstack-installer image\n2042466 - Selection does not happen when switching from Topology Graph to List View\n2042493 - No way to verify if IPs with leading zeros are still valid in the apiserver\n2042567 - insufficient info on CodeReady Containers configuration\n2042600 - Alone, the io.kubernetes.cri-o.Devices option poses a security risk\n2042619 - Overview page of the console is broken for hypershift clusters\n2042655 - [IPI on Alibabacloud] cluster becomes unusable if there is only one kube-apiserver pod running\n2042711 - [IBMCloud] Machine Deletion Hook cannot work on 
IBMCloud\n2042715 - [AliCloud] Machine Deletion Hook cannot work on AliCloud\n2042770 - [IPI on Alibabacloud] with vpcID \u0026 vswitchIDs specified, the installer would still try creating NAT gateway unexpectedly\n2042829 - Topology performance: HPA was fetched for each Deployment (Pod Ring)\n2042851 - Create template from SAP HANA template flow - VM is created instead of a new template\n2042906 - Edit machineset with same machine deletion hook name succeed\n2042960 - azure-file CI fails with \"gid(0) in storageClass and pod fsgroup(1000) are not equal\"\n2043003 - [IPI on Alibabacloud] \u0027destroy cluster\u0027 of a failed installation (bug2041694) stuck after \u0027stage=Nat gateways\u0027\n2043042 - [Serial] [sig-auth][Feature:OAuthServer] [RequestHeaders] [IdP] test RequestHeaders IdP [Suite:openshift/conformance/serial]\n2043043 - Cluster Autoscaler should use K8s 1.23 dependencies\n2043064 - Topology performance: Unnecessary rerenderings in topology nodes (unchanged mobx props)\n2043078 - Favorite system projects not visible in the project selector after toggling \"Show default projects\". \n2043117 - Recommended operators links are erroneously treated as external\n2043130 - Update CSI sidecars to the latest release for 4.10\n2043234 - Missing validation when creating several BGPPeers with the same peerAddress\n2043240 - Sync openshift/descheduler with sigs.k8s.io/descheduler\n2043254 - crio does not bind the security profiles directory\n2043296 - Ignition fails when reusing existing statically-keyed LUKS volume\n2043297 - [4.10] Bootimage bump tracker\n2043316 - RHCOS VM fails to boot on Nutanix AOS\n2043446 - Rebase aws-efs-utils to the latest upstream version. \n2043556 - Add proper ci-operator configuration to ironic and ironic-agent images\n2043577 - DPU network operator\n2043651 - Fix bug with exp. 
backoff working correcly when setting nextCheck in vsphere operator\n2043675 - Too many machines deleted by cluster autoscaler when scaling down\n2043683 - Revert bug 2039344 Ignoring IPv6 addresses against etcd cert validation\n2043709 - Logging flags no longer being bound to command line\n2043721 - Installer bootstrap hosts using outdated kubelet containing bugs\n2043731 - [IBMCloud] terraform outputs missing for ibmcloud bootstrap and worker ips for must-gather\n2043759 - Bump cluster-ingress-operator to k8s.io/api 1.23\n2043780 - Bump router to k8s.io/api 1.23\n2043787 - Bump cluster-dns-operator to k8s.io/api 1.23\n2043801 - Bump CoreDNS to k8s.io/api 1.23\n2043802 - EgressIP stopped working after single egressIP for a netnamespace is switched to the other node of HA pair after the first egress node is shutdown\n2043961 - [OVN-K] If pod creation fails, retry doesn\u0027t work as expected. \n2044201 - Templates golden image parameters names should be supported\n2044244 - Builds are failing after upgrading the cluster with builder image [jboss-webserver-5/jws56-openjdk8-openshift-rhel8]\n2044248 - [IBMCloud][vpc.block.csi.ibm.io]Cluster common user use the storageclass without parameter \u201ccsi.storage.k8s.io/fstype\u201d create pvc,pod successfully but write data to the pod\u0027s volume failed of \"Permission denied\"\n2044303 - [ovn][cloud-network-config-controller] cloudprivateipconfigs ips were left after deleting egressip objects\n2044347 - Bump to kubernetes 1.23.3\n2044481 - collect sharedresource cluster scoped instances with must-gather\n2044496 - Unable to create hardware events subscription - failed to add finalizers\n2044628 - CVE-2022-21673 grafana: Forward OAuth Identity Token can allow users to access some data sources\n2044680 - Additional libovsdb performance and resource consumption fixes\n2044704 - Observe \u003e Alerting pages should not show runbook links in 4.10\n2044717 - [e2e] improve tests for upstream test environment\n2044724 - 
Remove namespace column on VM list page when a project is selected\n2044745 - Upgrading cluster from 4.9 to 4.10 on Azure (ARO) causes the cloud-network-config-controller pod to CrashLoopBackOff\n2044808 - machine-config-daemon-pull.service: use `cp` instead of `cat` when extracting MCD in OKD\n2045024 - CustomNoUpgrade alerts should be ignored\n2045112 - vsphere-problem-detector has missing rbac rules for leases\n2045199 - SnapShot with Disk Hot-plug hangs\n2045561 - Cluster Autoscaler should use the same default Group value as Cluster API\n2045591 - Reconciliation of aws pod identity mutating webhook did not happen\n2045849 - Add Sprint 212 translations\n2045866 - MCO Operator pod spam \"Error creating event\" warning messages in 4.10\n2045878 - Sync upstream 1.16.0 downstream; includes hybrid helm plugin\n2045916 - [IBMCloud] Default machine profile in installer is unreliable\n2045927 - [FJ OCP4.10 Bug]: Podman failed to pull the IPA image due to the loss of proxy environment\n2046025 - [IPI on Alibabacloud] pre-configured alicloud DNS private zone is deleted after destroying cluster, please clarify\n2046137 - oc output for unknown commands is not human readable\n2046296 - When creating multiple consecutive egressIPs on GCP not all of them get assigned to the instance\n2046297 - Bump DB reconnect timeout\n2046517 - In Notification drawer, the \"Recommendations\" header shows when there isn\u0027t any recommendations\n2046597 - Observe \u003e Targets page may show the wrong service monitor is multiple monitors have the same namespace \u0026 label selectors\n2046626 - Allow setting custom metrics for Ansible-based Operators\n2046683 - [AliCloud]\"--scale-down-utilization-threshold\" doesn\u0027t work on AliCloud\n2047025 - Installation fails because of Alibaba CSI driver operator is degraded\n2047190 - Bump Alibaba CSI driver for 4.10\n2047238 - When using communities and localpreferences together, only localpreference gets applied\n2047255 - alibaba: 
resourceGroupID not found\n2047258 - [aws-usgov] fatal error occurred if AMI is not provided for AWS GovCloud regions\n2047317 - Update HELM OWNERS files under Dev Console\n2047455 - [IBM Cloud] Update custom image os type\n2047496 - Add image digest feature\n2047779 - do not degrade cluster if storagepolicy creation fails\n2047927 - \u0027oc get project\u0027 caused \u0027Observed a panic: cannot deep copy core.NamespacePhase\u0027 when AllRequestBodies is used\n2047929 - use lease for leader election\n2047975 - [sig-network][Feature:Router] The HAProxy router should override the route host for overridden domains with a custom value [Skipped:Disconnected] [Suite:openshift/conformance/parallel]\n2048046 - New route annotation to show another URL or hide topology URL decorator doesn\u0027t work for Knative Services\n2048048 - Application tab in User Preferences dropdown menus are too wide. \n2048050 - Topology list view items are not highlighted on keyboard navigation\n2048117 - [IBM]Shouldn\u0027t change status.storage.bucket and status.storage.resourceKeyCRN when update sepc.stroage,ibmcos with invalid value\n2048413 - Bond CNI: Failed to attach Bond NAD to pod\n2048443 - Image registry operator panics when finalizes config deletion\n2048478 - [alicloud] CCM deploys alibaba-cloud-controller-manager from quay.io/openshift/origin-*\n2048484 - SNO: cluster-policy-controller failed to start due to missing serving-cert/tls.crt\n2048598 - Web terminal view is broken\n2048836 - ovs-configure mis-detecting the ipv6 status on IPv4 only cluster causing Deployment failure\n2048891 - Topology page is crashed\n2049003 - 4.10: [IBMCloud] ibm-vpc-block-csi-node does not specify an update strategy, only resource requests, or priority class\n2049043 - Cannot create VM from template\n2049156 - \u0027oc get project\u0027 caused \u0027Observed a panic: cannot deep copy core.NamespacePhase\u0027 when AllRequestBodies is used\n2049886 - Placeholder bug for OCP 4.10.0 metadata 
release\n2049890 - Warning annotation for pods with cpu requests or limits on single-node OpenShift cluster without workload partitioning\n2050189 - [aws-efs-csi-driver] Merge upstream changes since v1.3.2\n2050190 - [aws-ebs-csi-driver] Merge upstream changes since v1.2.0\n2050227 - Installation on PSI fails with: \u0027openstack platform does not have the required standard-attr-tag network extension\u0027\n2050247 - Failing test in periodics: [sig-network] Services should respect internalTrafficPolicy=Local Pod and Node, to Pod (hostNetwork: true) [Feature:ServiceInternalTrafficPolicy] [Skipped:Network/OVNKubernetes] [Suite:openshift/conformance/parallel] [Suite:k8s]\n2050250 - Install fails to bootstrap, complaining about DefragControllerDegraded and sad members\n2050310 - ContainerCreateError when trying to launch large (\u003e500) numbers of pods across nodes\n2050370 - alert data for burn budget needs to be updated to prevent regression\n2050393 - ZTP missing support for local image registry and custom machine config\n2050557 - Can not push images to image-registry when enabling KMS encryption in AlibabaCloud\n2050737 - Remove metrics and events for master port offsets\n2050801 - Vsphere upi tries to access vsphere during manifests generation phase\n2050883 - Logger object in LSO does not log source location accurately\n2051692 - co/image-registry is degrade because ImagePrunerDegraded: Job has reached the specified backoff limit\n2052062 - Whereabouts should implement client-go 1.22+\n2052125 - [4.10] Crio appears to be coredumping in some scenarios\n2052210 - [aws-c2s] kube-apiserver crashloops due to missing cloud config\n2052339 - Failing webhooks will block an upgrade to 4.10 mid-way through the upgrade. 
\n2052458 - [IBM Cloud] ibm-vpc-block-csi-controller does not specify an update strategy, priority class, or only resource requests\n2052598 - kube-scheduler should use configmap lease\n2052599 - kube-controller-manger should use configmap lease\n2052600 - Failed to scaleup RHEL machine against OVN cluster due to jq tool is required by configure-ovs.sh\n2052609 - [vSphere CSI driver Operator] RWX volumes counts metrics `vsphere_rwx_volumes_total` not valid\n2052611 - MetalLB: BGPPeer object does not have ability to set ebgpMultiHop\n2052612 - MetalLB: Webhook Validation: Two BGPPeers instances can have different router ID set. \n2052644 - Infinite OAuth redirect loop post-upgrade to 4.10.0-rc.1\n2052666 - [4.10.z] change gitmodules to rhcos-4.10 branch\n2052756 - [4.10] PVs are not being cleaned up after PVC deletion\n2053175 - oc adm catalog mirror throws \u0027missing signature key\u0027 error when using file://local/index\n2053218 - ImagePull fails with error \"unable to pull manifest from example.com/busy.box:v5 invalid reference format\"\n2053252 - Sidepanel for Connectors/workloads in topology shows invalid tabs\n2053268 - inability to detect static lifecycle failure\n2053314 - requestheader IDP test doesn\u0027t wait for cleanup, causing high failure rates\n2053323 - OpenShift-Ansible BYOH Unit Tests are Broken\n2053339 - Remove dev preview badge from IBM FlashSystem deployment windows\n2053751 - ztp-site-generate container is missing convenience entrypoint\n2053945 - [4.10] Failed to apply sriov policy on intel nics\n2054109 - Missing \"app\" label\n2054154 - RoleBinding in project without subject is causing \"Project access\" page to fail\n2054244 - Latest pipeline run should be listed on the top of the pipeline run list\n2054288 - console-master-e2e-gcp-console is broken\n2054562 - DPU network operator 4.10 branch need to sync with master\n2054897 - Unable to deploy hw-event-proxy operator\n2055193 - e2e-metal-ipi-serial-ovn-ipv6 is failing 
frequently\n2055358 - Summary Interval Hardcoded in PTP Operator if Set in the Global Body Instead of Command Line\n2055371 - Remove Check which enforces summary_interval must match logSyncInterval\n2055689 - [ibm]Operator storage PROGRESSING and DEGRADED is true during fresh install for ocp4.11\n2055894 - CCO mint mode will not work for Azure after sunsetting of Active Directory Graph API\n2056441 - AWS EFS CSI driver should use the trusted CA bundle when cluster proxy is configured\n2056479 - ovirt-csi-driver-node pods are crashing intermittently\n2056572 - reconcilePrecaching error: cannot list resource \"clusterserviceversions\" in API group \"operators.coreos.com\" at the cluster scope\"\n2056629 - [4.10] EFS CSI driver can\u0027t unmount volumes with \"wait: no child processes\"\n2056878 - (dummy bug) ovn-kubernetes ExternalTrafficPolicy still SNATs\n2056928 - Ingresscontroller LB scope change behaviour differs for different values of aws-load-balancer-internal annotation\n2056948 - post 1.23 rebase: regression in service-load balancer reliability\n2057438 - Service Level Agreement (SLA) always show \u0027Unknown\u0027\n2057721 - Fix Proxy support in RHACM 2.4.2\n2057724 - Image creation fails when NMstateConfig CR is empty\n2058641 - [4.10] Pod density test causing problems when using kube-burner\n2059761 - 4.9.23-s390x-machine-os-content manifest invalid when mirroring content for disconnected install\n2060610 - Broken access to public images: Unable to connect to the server: no basic auth credentials\n2060956 - service domain can\u0027t be resolved when networkpolicy is used in OCP 4.10-rc\n\n5. 
References:\n\nhttps://access.redhat.com/security/cve/CVE-2014-3577\nhttps://access.redhat.com/security/cve/CVE-2016-10228\nhttps://access.redhat.com/security/cve/CVE-2017-14502\nhttps://access.redhat.com/security/cve/CVE-2018-20843\nhttps://access.redhat.com/security/cve/CVE-2018-1000858\nhttps://access.redhat.com/security/cve/CVE-2019-8625\nhttps://access.redhat.com/security/cve/CVE-2019-8710\nhttps://access.redhat.com/security/cve/CVE-2019-8720\nhttps://access.redhat.com/security/cve/CVE-2019-8743\nhttps://access.redhat.com/security/cve/CVE-2019-8764\nhttps://access.redhat.com/security/cve/CVE-2019-8766\nhttps://access.redhat.com/security/cve/CVE-2019-8769\nhttps://access.redhat.com/security/cve/CVE-2019-8771\nhttps://access.redhat.com/security/cve/CVE-2019-8782\nhttps://access.redhat.com/security/cve/CVE-2019-8783\nhttps://access.redhat.com/security/cve/CVE-2019-8808\nhttps://access.redhat.com/security/cve/CVE-2019-8811\nhttps://access.redhat.com/security/cve/CVE-2019-8812\nhttps://access.redhat.com/security/cve/CVE-2019-8813\nhttps://access.redhat.com/security/cve/CVE-2019-8814\nhttps://access.redhat.com/security/cve/CVE-2019-8815\nhttps://access.redhat.com/security/cve/CVE-2019-8816\nhttps://access.redhat.com/security/cve/CVE-2019-8819\nhttps://access.redhat.com/security/cve/CVE-2019-8820\nhttps://access.redhat.com/security/cve/CVE-2019-8823\nhttps://access.redhat.com/security/cve/CVE-2019-8835\nhttps://access.redhat.com/security/cve/CVE-2019-8844\nhttps://access.redhat.com/security/cve/CVE-2019-8846\nhttps://access.redhat.com/security/cve/CVE-2019-9169\nhttps://access.redhat.com/security/cve/CVE-2019-13050\nhttps://access.redhat.com/security/cve/CVE-2019-13627\nhttps://access.redhat.com/security/cve/CVE-2019-14889\nhttps://access.redhat.com/security/cve/CVE-2019-15903\nhttps://access.redhat.com/security/cve/CVE-2019-19906\nhttps://access.redhat.com/security/cve/CVE-2019-20454\nhttps://access.redhat.com/security/cve/CVE-2019-20807\nhttps://access.redhat.com/se
curity/cve/CVE-2019-25013\nhttps://access.redhat.com/security/cve/CVE-2020-1730\nhttps://access.redhat.com/security/cve/CVE-2020-3862\nhttps://access.redhat.com/security/cve/CVE-2020-3864\nhttps://access.redhat.com/security/cve/CVE-2020-3865\nhttps://access.redhat.com/security/cve/CVE-2020-3867\nhttps://access.redhat.com/security/cve/CVE-2020-3868\nhttps://access.redhat.com/security/cve/CVE-2020-3885\nhttps://access.redhat.com/security/cve/CVE-2020-3894\nhttps://access.redhat.com/security/cve/CVE-2020-3895\nhttps://access.redhat.com/security/cve/CVE-2020-3897\nhttps://access.redhat.com/security/cve/CVE-2020-3899\nhttps://access.redhat.com/security/cve/CVE-2020-3900\nhttps://access.redhat.com/security/cve/CVE-2020-3901\nhttps://access.redhat.com/security/cve/CVE-2020-3902\nhttps://access.redhat.com/security/cve/CVE-2020-8927\nhttps://access.redhat.com/security/cve/CVE-2020-9802\nhttps://access.redhat.com/security/cve/CVE-2020-9803\nhttps://access.redhat.com/security/cve/CVE-2020-9805\nhttps://access.redhat.com/security/cve/CVE-2020-9806\nhttps://access.redhat.com/security/cve/CVE-2020-9807\nhttps://access.redhat.com/security/cve/CVE-2020-9843\nhttps://access.redhat.com/security/cve/CVE-2020-9850\nhttps://access.redhat.com/security/cve/CVE-2020-9862\nhttps://access.redhat.com/security/cve/CVE-2020-9893\nhttps://access.redhat.com/security/cve/CVE-2020-9894\nhttps://access.redhat.com/security/cve/CVE-2020-9895\nhttps://access.redhat.com/security/cve/CVE-2020-9915\nhttps://access.redhat.com/security/cve/CVE-2020-9925\nhttps://access.redhat.com/security/cve/CVE-2020-9952\nhttps://access.redhat.com/security/cve/CVE-2020-10018\nhttps://access.redhat.com/security/cve/CVE-2020-11793\nhttps://access.redhat.com/security/cve/CVE-2020-13434\nhttps://access.redhat.com/security/cve/CVE-2020-14391\nhttps://access.redhat.com/security/cve/CVE-2020-15358\nhttps://access.redhat.com/security/cve/CVE-2020-15503\nhttps://access.redhat.com/security/cve/CVE-2020-25660\nhttps://access.redhat.
com/security/cve/CVE-2020-25677\nhttps://access.redhat.com/security/cve/CVE-2020-27618\nhttps://access.redhat.com/security/cve/CVE-2020-27781\nhttps://access.redhat.com/security/cve/CVE-2020-29361\nhttps://access.redhat.com/security/cve/CVE-2020-29362\nhttps://access.redhat.com/security/cve/CVE-2020-29363\nhttps://access.redhat.com/security/cve/CVE-2021-3121\nhttps://access.redhat.com/security/cve/CVE-2021-3326\nhttps://access.redhat.com/security/cve/CVE-2021-3449\nhttps://access.redhat.com/security/cve/CVE-2021-3450\nhttps://access.redhat.com/security/cve/CVE-2021-3516\nhttps://access.redhat.com/security/cve/CVE-2021-3517\nhttps://access.redhat.com/security/cve/CVE-2021-3518\nhttps://access.redhat.com/security/cve/CVE-2021-3520\nhttps://access.redhat.com/security/cve/CVE-2021-3521\nhttps://access.redhat.com/security/cve/CVE-2021-3537\nhttps://access.redhat.com/security/cve/CVE-2021-3541\nhttps://access.redhat.com/security/cve/CVE-2021-3733\nhttps://access.redhat.com/security/cve/CVE-2021-3749\nhttps://access.redhat.com/security/cve/CVE-2021-20305\nhttps://access.redhat.com/security/cve/CVE-2021-21684\nhttps://access.redhat.com/security/cve/CVE-2021-22946\nhttps://access.redhat.com/security/cve/CVE-2021-22947\nhttps://access.redhat.com/security/cve/CVE-2021-25215\nhttps://access.redhat.com/security/cve/CVE-2021-27218\nhttps://access.redhat.com/security/cve/CVE-2021-30666\nhttps://access.redhat.com/security/cve/CVE-2021-30761\nhttps://access.redhat.com/security/cve/CVE-2021-30762\nhttps://access.redhat.com/security/cve/CVE-2021-33928\nhttps://access.redhat.com/security/cve/CVE-2021-33929\nhttps://access.redhat.com/security/cve/CVE-2021-33930\nhttps://access.redhat.com/security/cve/CVE-2021-33938\nhttps://access.redhat.com/security/cve/CVE-2021-36222\nhttps://access.redhat.com/security/cve/CVE-2021-37750\nhttps://access.redhat.com/security/cve/CVE-2021-39226\nhttps://access.redhat.com/security/cve/CVE-2021-41190\nhttps://access.redhat.com/security/cve/CVE-2021-43813\n
https://access.redhat.com/security/cve/CVE-2021-44716\nhttps://access.redhat.com/security/cve/CVE-2021-44717\nhttps://access.redhat.com/security/cve/CVE-2022-0532\nhttps://access.redhat.com/security/cve/CVE-2022-21673\nhttps://access.redhat.com/security/cve/CVE-2022-24407\nhttps://access.redhat.com/security/updates/classification/#moderate\n\n6. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYipqONzjgjWX9erEAQjQcBAAgWTjA6Q2NgqfVf63ZpJF1jPurZLPqxDL\n0in/5+/wqWaiQ6yk7wM3YBZgviyKnAMCVdrLsaR7R77BvfJcTE3W/fzogxpp6Rne\neGT1PTgQRecrSIn+WG4gGSteavTULWOIoPvUiNpiy3Y7fFgjFdah+Nyx3Xd+xehM\nCEswylOd6Hr03KZ1tS3XL3kGL2botha48Yls7FzDFbNcy6TBAuycmQZifKu8mHaF\naDAupVJinDnnVgACeS6CnZTAD+Vrx5W7NIisteXv4x5Hy+jBIUHr8Yge3oxYoFnC\nY/XmuOw2KilLZuqFe+KHig45qT+FmNU8E1egcGpNWvmS8hGZfiG1jEQAqDPbZHxp\nsQAQZLQyz3TvXa29vp4QcsUuMxndIOi+QaK75JmqE06MqMIlFDYpr6eQOIgIZvFO\nRDZU/qvBjh56ypInoqInBf8KOQMy6eO+r6nFbMGcAfucXmz0EVcSP1oFHAoA1nWN\nrs1Qz/SO4CvdPERxcr1MLuBLggZ6iqGmHKk5IN0SwcndBHaVJ3j/LBv9m7wBYVry\nbSvojBDYx5ricbTwB5sGzu7oH5yVl813FA9cjkFpEhBiMtTfI+DKC8ssoRYNHd5Z\n7gLW6KWPUIDuCIiiioPZAJMyvJ0IMrNDoQ0lhqPeV7PFdlRhT95M/DagUZOpPVuT\nb5PUYUBIZLc=\n=GUDA\n-----END PGP SIGNATURE-----\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. Description:\n\nRed Hat OpenShift Serverless 1.17.0 release of the OpenShift Serverless\nOperator. 
\n\nSecurity Fix(es):\n\n* golang: crypto/tls: certificate of wrong type is causing TLS client to\npanic\n(CVE-2021-34558)\n* golang: net: lookup functions may return invalid host names\n(CVE-2021-33195)\n* golang: net/http/httputil: ReverseProxy forwards connection headers if\nfirst one is empty (CVE-2021-33197)\n* golang: math/big.Rat: may cause a panic or an unrecoverable fatal error\nif passed inputs with very large exponents (CVE-2021-33198)\n* golang: encoding/xml: infinite loop when using xml.NewTokenDecoder with a\ncustom TokenReader (CVE-2021-27918)\n* golang: net/http: panic in ReadRequest and ReadResponse when reading a\nvery large header (CVE-2021-31525)\n* golang: archive/zip: malformed archive may cause panic or memory\nexhaustion (CVE-2021-33196)\n\nIt was found that the CVE-2021-27918, CVE-2021-31525 and CVE-2021-33196\nhave been incorrectly mentioned as fixed in RHSA for Serverless client kn\n1.16.0. This has been fixed (CVE-2021-3703). Bugs fixed (https://bugzilla.redhat.com/):\n\n1983596 - CVE-2021-34558 golang: crypto/tls: certificate of wrong type is causing TLS client to panic\n1983651 - Release of OpenShift Serverless Serving 1.17.0\n1983654 - Release of OpenShift Serverless Eventing 1.17.0\n1989564 - CVE-2021-33195 golang: net: lookup functions may return invalid host names\n1989570 - CVE-2021-33197 golang: net/http/httputil: ReverseProxy forwards connection headers if first one is empty\n1989575 - CVE-2021-33198 golang: math/big.Rat: may cause a panic or an unrecoverable fatal error if passed inputs with very large exponents\n1992955 - CVE-2021-3703 serverless: incomplete fix for CVE-2021-27918 / CVE-2021-31525 / CVE-2021-33196\n\n5. Description:\n\nService Telemetry Framework (STF) provides automated collection of\nmeasurements and data from remote clients, such as Red Hat OpenStack\nPlatform or third-party nodes. 
STF then transmits the information to a\ncentralized, receiving Red Hat OpenShift Container Platform (OCP)\ndeployment for storage, retrieval, and monitoring. \nDockerfiles and scripts should be amended either to refer to this new image\nspecifically, or to the latest image generally. Bugs fixed (https://bugzilla.redhat.com/):\n\n2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read\n\n5", "sources": [ { "db": "NVD", "id": "CVE-2019-25013" }, { "db": "JVNDB", "id": "JVNDB-2019-016179" }, { "db": "VULMON", "id": "CVE-2019-25013" }, { "db": "PACKETSTORM", "id": "162634" }, { "db": "PACKETSTORM", "id": "163267" }, { "db": "PACKETSTORM", "id": "163188" }, { "db": "PACKETSTORM", "id": "163496" }, { "db": "PACKETSTORM", "id": "161254" }, { "db": "PACKETSTORM", "id": "166279" }, { "db": "PACKETSTORM", "id": "164192" }, { "db": "PACKETSTORM", "id": "168011" } ], "trust": 2.43 }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "NVD", "id": "CVE-2019-25013", "trust": 4.1 }, { "db": "ICS CERT", "id": "ICSA-23-166-10", "trust": 0.8 }, { "db": "JVN", "id": "JVNVU99464755", "trust": 0.8 }, { "db": "JVNDB", "id": "JVNDB-2019-016179", "trust": 0.8 }, { "db": "PACKETSTORM", "id": "162634", "trust": 0.7 }, { "db": "PACKETSTORM", "id": "163267", "trust": 0.7 }, { "db": "PACKETSTORM", "id": "163496", "trust": 0.7 }, { "db": "PACKETSTORM", "id": "161254", "trust": 0.7 }, { "db": "PACKETSTORM", "id": "166279", "trust": 0.7 }, { "db": "PACKETSTORM", "id": "164192", "trust": 0.7 }, { "db": "PACKETSTORM", "id": "168011", "trust": 0.7 }, { "db": "PACKETSTORM", "id": "163789", "trust": 0.6 }, { "db": "PACKETSTORM", "id": "163276", "trust": 0.6 }, { "db": "PACKETSTORM", "id": "163747", "trust": 0.6 }, { "db": "PACKETSTORM", "id": "162837", "trust": 0.6 }, { "db": 
"PACKETSTORM", "id": "163406", "trust": 0.6 }, { "db": "PACKETSTORM", "id": "162877", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.0868", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.6426", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.2228", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.2180", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.0875", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.0373", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.0728", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.0743", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.2711", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.1866", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.3141", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.4058", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.2657", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.1820", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.5140", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.1743", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.4222", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.2604", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.1025", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.2365", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.2781", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2022011038", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2022031430", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2021071310", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2021070604", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2021062703", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2021062315", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2021071516", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2021122914", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2021092220", "trust": 0.6 }, { "db": "CNNVD", "id": "CNNVD-202101-048", "trust": 0.6 }, { "db": "VULMON", "id": "CVE-2019-25013", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "163188", "trust": 
0.1 } ], "sources": [ { "db": "VULMON", "id": "CVE-2019-25013" }, { "db": "JVNDB", "id": "JVNDB-2019-016179" }, { "db": "PACKETSTORM", "id": "162634" }, { "db": "PACKETSTORM", "id": "163267" }, { "db": "PACKETSTORM", "id": "163188" }, { "db": "PACKETSTORM", "id": "163496" }, { "db": "PACKETSTORM", "id": "161254" }, { "db": "PACKETSTORM", "id": "166279" }, { "db": "PACKETSTORM", "id": "164192" }, { "db": "PACKETSTORM", "id": "168011" }, { "db": "CNNVD", "id": "CNNVD-202101-048" }, { "db": "NVD", "id": "CVE-2019-25013" } ] }, "id": "VAR-202101-0119", "iot": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": true, "sources": [ { "db": "VARIoT devices database", "id": null } ], "trust": 0.43806562 }, "last_update_date": "2024-07-23T19:27:48.072000Z", "patch": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/patch#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "title": "Bug\u00a024973 NetAppNetApp\u00a0Advisory", "trust": 0.8, "url": "https://www.broadcom.com/" }, { "title": "GNU C Library Buffer error vulnerability fix", "trust": 0.6, "url": "http://123.124.177.30/web/xxk/bdxqbyid.tag?id=138312" }, { "title": "Debian CVElist Bug Report Logs: glibc: CVE-2019-25013", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=debian_cvelist_bugreportlogs\u0026qid=7073abdc63eae799f90555726b8fbe41" }, { "title": "Red Hat: Moderate: glibc security and bug fix update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20210348 - security advisory" }, { "title": "Amazon Linux 2: ALAS2-2021-1599", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=alas2-2021-1599" }, { "title": "Ubuntu Security Notice: USN-5768-1: GNU C Library vulnerabilities", 
"trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ubuntu_security_notice\u0026qid=usn-5768-1" }, { "title": "Arch Linux Issues: ", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_issues\u0026qid=cve-2019-25013 log" }, { "title": "Amazon Linux AMI: ALAS-2021-1511", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux_ami\u0026qid=alas-2021-1511" }, { "title": "Arch Linux Advisories: [ASA-202102-18] glibc: denial of service", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_advisories\u0026qid=asa-202102-18" }, { "title": "Arch Linux Advisories: [ASA-202102-17] lib32-glibc: denial of service", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_advisories\u0026qid=asa-202102-17" }, { "title": "Red Hat: Moderate: Red Hat Advanced Cluster Management 2.1.3 security and bug fix update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20210607 - security advisory" }, { "title": "Amazon Linux 2: ALAS2-2021-1605", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=alas2-2021-1605" }, { "title": "Ubuntu Security Notice: USN-5310-1: GNU C Library vulnerabilities", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ubuntu_security_notice\u0026qid=usn-5310-1" }, { "title": "Red Hat: Important: Service Telemetry Framework 1.4 security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20225924 - security advisory" }, { "title": "IBM: Security Bulletin: Cloud Pak for Security contains security vulnerabilities", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=08f19f0be4d5dcf7486e5abcdb671477" }, { "title": "Red Hat: Moderate: OpenShift Container Platform 4.10.3 security update", "trust": 0.1, "url": 
"https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20220056 - security advisory" }, { "title": "Siemens Security Advisories: Siemens Security Advisory", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=siemens_security_advisories\u0026qid=ec6577109e640dac19a6ddb978afe82d" }, { "title": "", "trust": 0.1, "url": "https://github.com/live-hack-cve/cve-2019-25013 " }, { "title": "ecr-api", "trust": 0.1, "url": "https://github.com/yalespinup/ecr-api " }, { "title": "sanction", "trust": 0.1, "url": "https://github.com/ctc-oss/sanction " }, { "title": "release-the-code-litecoin", "trust": 0.1, "url": "https://github.com/brandoncamenisch/release-the-code-litecoin " }, { "title": "interview_project", "trust": 0.1, "url": "https://github.com/domyrtille/interview_project " }, { "title": "trivy-multiscanner", "trust": 0.1, "url": "https://github.com/onzack/trivy-multiscanner " }, { "title": "spring-boot-app-with-log4j-vuln", "trust": 0.1, "url": "https://github.com/nedenwalker/spring-boot-app-with-log4j-vuln " }, { "title": "giant-squid", "trust": 0.1, "url": "https://github.com/dispera/giant-squid " }, { "title": "devops-demo", "trust": 0.1, "url": "https://github.com/epequeno/devops-demo " }, { "title": "spring-boot-app-using-gradle", "trust": 0.1, "url": "https://github.com/nedenwalker/spring-boot-app-using-gradle " }, { "title": "xyz-solutions", "trust": 0.1, "url": "https://github.com/sauliuspr/xyz-solutions " }, { "title": "myapp-container-jaxrs", "trust": 0.1, "url": "https://github.com/akiraabe/myapp-container-jaxrs " } ], "sources": [ { "db": "VULMON", "id": "CVE-2019-25013" }, { "db": "JVNDB", "id": "JVNDB-2019-016179" }, { "db": "CNNVD", "id": "CNNVD-202101-048" } ] }, "problemtype_data": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "problemtype": "CWE-125", "trust": 1.0 }, { 
"problemtype": "Out-of-bounds read (CWE-125) [NVD evaluation ]", "trust": 0.8 } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2019-016179" }, { "db": "NVD", "id": "CVE-2019-25013" } ] }, "references": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/references#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 2.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-25013" }, { "trust": 1.6, "url": "https://security.netapp.com/advisory/ntap-20210205-0004/" }, { "trust": 1.6, "url": "https://security.gentoo.org/glsa/202107-07" }, { "trust": 1.6, "url": "https://www.oracle.com/security-alerts/cpuapr2022.html" }, { "trust": 1.6, "url": "https://lists.debian.org/debian-lts-announce/2022/10/msg00021.html" }, { "trust": 1.6, "url": "https://sourceware.org/bugzilla/show_bug.cgi?id=24973" }, { "trust": 1.0, "url": "https://lists.apache.org/thread.html/r32d767ac804e9b8aad4355bb85960a6a1385eab7afff549a5e98660f%40%3cjira.kafka.apache.org%3e" }, { "trust": 1.0, "url": "https://lists.apache.org/thread.html/r448bb851cc8e6e3f93f3c28c70032b37062625d81214744474ac49e7%40%3cdev.kafka.apache.org%3e" }, { "trust": 1.0, "url": "https://lists.apache.org/thread.html/r4806a391091e082bdea17266452ca656ebc176e51bb3932733b3a0a2%40%3cjira.kafka.apache.org%3e" }, { "trust": 1.0, "url": "https://lists.apache.org/thread.html/r499e4f96d0b5109ef083f2feccd33c51650c1b7d7068aa3bd47efca9%40%3cjira.kafka.apache.org%3e" }, { "trust": 1.0, "url": "https://lists.apache.org/thread.html/r5af4430421bb6f9973294691a7904bbd260937e9eef96b20556f43ff%40%3cjira.kafka.apache.org%3e" }, { "trust": 1.0, "url": "https://lists.apache.org/thread.html/r750eee18542bc02bd8350861c424ee60a9b9b225568fa09436a37ece%40%3cissues.zookeeper.apache.org%3e" }, { "trust": 1.0, "url": "https://lists.apache.org/thread.html/r7a2e94adfe0a2f0a1d42e4927e8c32ecac97d37db9cb68095fe9ddbc%40%3cdev.zookeeper.apache.org%3e" }, { 
"trust": 1.0, "url": "https://lists.apache.org/thread.html/rd2354f9ccce41e494fbadcbc5ad87218de6ec0fff8a7b54c8462226c%40%3cissues.zookeeper.apache.org%3e" }, { "trust": 1.0, "url": "https://lists.apache.org/thread.html/rf9fa47ab66495c78bb4120b0754dd9531ca2ff0430f6685ac9b07772%40%3cdev.mina.apache.org%3e" }, { "trust": 1.0, "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/4y6tx47p47kabsfol26fldnvcwxdkdez/" }, { "trust": 1.0, "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/tvcunlq3hxgs4vpuqkwtjgraw2ktfgxs/" }, { "trust": 1.0, "url": "https://sourceware.org/git/?p=glibc.git%3ba=commit%3bh=ee7a3144c9922808181009b7b3e50e852fb4999b" }, { "trust": 0.8, "url": "http://jvn.jp/vu/jvnvu99464755/index.html" }, { "trust": 0.8, "url": "https://www.cisa.gov/news-events/ics-advisories/icsa-23-166-10" }, { "trust": 0.8, "url": "https://access.redhat.com/security/cve/cve-2019-25013" }, { "trust": 0.8, "url": "https://access.redhat.com/security/team/contact/" }, { "trust": 0.8, "url": "https://bugzilla.redhat.com/):" }, { "trust": 0.7, "url": "https://access.redhat.com/security/cve/cve-2021-3326" }, { "trust": 0.7, "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce" }, { "trust": 0.7, "url": "https://access.redhat.com/security/updates/classification/#moderate" }, { "trust": 0.7, "url": "https://nvd.nist.gov/vuln/detail/cve-2016-10228" }, { "trust": 0.7, "url": "https://access.redhat.com/security/cve/cve-2016-10228" }, { "trust": 0.7, "url": "https://access.redhat.com/security/cve/cve-2020-27618" }, { "trust": 0.7, "url": "https://access.redhat.com/security/cve/cve-2019-9169" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2020-29361" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2020-15358" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2020-8927" }, { "trust": 0.6, "url": 
"https://access.redhat.com/security/cve/cve-2017-14502" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2020-29362" }, { "trust": 0.6, "url": "https://nvd.nist.gov/vuln/detail/cve-2017-14502" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2020-29363" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2020-13434" }, { "trust": 0.6, "url": "https://lists.apache.org/thread.html/r5af4430421bb6f9973294691a7904bbd260937e9eef96b20556f43ff@%3cjira.kafka.apache.org%3e" }, { "trust": 0.6, "url": "https://lists.apache.org/thread.html/r7a2e94adfe0a2f0a1d42e4927e8c32ecac97d37db9cb68095fe9ddbc@%3cdev.zookeeper.apache.org%3e" }, { "trust": 0.6, "url": "https://lists.apache.org/thread.html/r448bb851cc8e6e3f93f3c28c70032b37062625d81214744474ac49e7@%3cdev.kafka.apache.org%3e" }, { "trust": 0.6, "url": "https://sourceware.org/git/?p=glibc.git;a=commit;h=ee7a3144c9922808181009b7b3e50e852fb4999b" }, { "trust": 0.6, "url": "https://lists.apache.org/thread.html/r4806a391091e082bdea17266452ca656ebc176e51bb3932733b3a0a2@%3cjira.kafka.apache.org%3e" }, { "trust": 0.6, "url": "https://lists.apache.org/thread.html/rf9fa47ab66495c78bb4120b0754dd9531ca2ff0430f6685ac9b07772@%3cdev.mina.apache.org%3e" }, { "trust": 0.6, "url": "https://lists.apache.org/thread.html/rd2354f9ccce41e494fbadcbc5ad87218de6ec0fff8a7b54c8462226c@%3cissues.zookeeper.apache.org%3e" }, { "trust": 0.6, "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/tvcunlq3hxgs4vpuqkwtjgraw2ktfgxs/" }, { "trust": 0.6, "url": "https://lists.apache.org/thread.html/r750eee18542bc02bd8350861c424ee60a9b9b225568fa09436a37ece@%3cissues.zookeeper.apache.org%3e" }, { "trust": 0.6, "url": "https://lists.apache.org/thread.html/r499e4f96d0b5109ef083f2feccd33c51650c1b7d7068aa3bd47efca9@%3cjira.kafka.apache.org%3e" }, { "trust": 0.6, "url": 
"https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/4y6tx47p47kabsfol26fldnvcwxdkdez/" }, { "trust": 0.6, "url": "https://lists.apache.org/thread.html/r32d767ac804e9b8aad4355bb85960a6a1385eab7afff549a5e98660f@%3cjira.kafka.apache.org%3e" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/164192/red-hat-security-advisory-2021-3556-01.html" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/168011/red-hat-security-advisory-2022-5924-01.html" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/163789/red-hat-security-advisory-2021-3119-01.html" }, { "trust": 0.6, "url": "https://www.ibm.com/blogs/psirt/security-bulletin-cloud-pak-for-security-contains-security-vulnerabilities/" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.1866" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.2657" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.1743" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.1820" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.2711" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2021071310" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/163747/red-hat-security-advisory-2021-3016-01.html" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.2781" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.5140" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.0373/" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022031430" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/166279/red-hat-security-advisory-2022-0056-01.html" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.2365" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.2180" }, { "trust": 0.6, "url": 
"https://www.cybersecurity-help.cz/vdb/sb2021122914" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/162634/red-hat-security-advisory-2021-1585-01.html" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/163276/red-hat-security-advisory-2021-2543-01.html" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.0875" }, { "trust": 0.6, "url": "https://vigilance.fr/vulnerability/glibc-out-of-bounds-memory-reading-via-iconv-euc-kr-encoding-34360" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.1025" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.0728" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/163496/red-hat-security-advisory-2021-2705-01.html" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.0743" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.2228" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2021062703" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2021092220" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.0868" }, { "trust": 0.6, "url": "https://www.ibm.com/support/pages/node/6520474" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.2604" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/162837/red-hat-security-advisory-2021-2136-01.html" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/163267/red-hat-security-advisory-2021-2532-01.html" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022011038" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/161254/red-hat-security-advisory-2021-0348-01.html" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2021070604" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2021071516" }, { "trust": 0.6, "url": 
"https://packetstormsecurity.com/files/162877/red-hat-security-advisory-2021-2121-01.html" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2021062315" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.4058" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.4222" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/163406/gentoo-linux-security-advisory-202107-07.html" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.3141" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.6426" }, { "trust": 0.5, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-9169" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-27618" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2019-2708" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2020-8286" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2020-28196" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-20305" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15358" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13434" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2020-8285" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-2708" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2020-8231" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2020-8284" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3326" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-29362" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-8284" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-8285" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-8286" }, { "trust": 0.3, "url": 
"https://nvd.nist.gov/vuln/detail/cve-2019-3842" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-8927" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13776" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-29363" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2020-24977" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2019-3842" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2020-13776" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3449" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-8231" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-27219" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3450" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-29361" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28196" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3537" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-27218" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3520" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3541" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3518" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3516" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3517" }, { "trust": 0.2, "url": "https://access.redhat.com/articles/11258" }, { "trust": 0.2, "url": "https://access.redhat.com/security/team/key/" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-27219" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-23336" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20305" }, { "trust": 0.2, "url": 
"https://access.redhat.com/security/cve/cve-2021-3114" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-28362" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-26116" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-27619" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3177" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-24977" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-25215" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-27918" }, { "trust": 0.2, "url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-31525" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-31525" }, { "trust": 0.2, "url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/4.7/html/serverless/index" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-27918" }, { "trust": 0.2, "url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/4.6/html/serverless/index" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-33196" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-33196" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-13050" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-9925" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-9802" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-30762" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-33938" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-9895" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-8625" }, { "trust": 0.2, "url": 
"https://access.redhat.com/security/cve/cve-2019-8812" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-3899" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-8819" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-3867" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20454" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-8720" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-9893" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-33930" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-8808" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-3902" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-24407" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-3900" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-30761" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-33928" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8743" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-9805" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-8820" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-9807" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-8769" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-8710" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-37750" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-8813" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-9850" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8710" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-8811" }, { 
"trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-22947" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-9803" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-9862" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-3885" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-15503" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20807" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-10018" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-14889" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-8835" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-8764" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-8844" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-3865" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-1730" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-3864" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13627" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-14391" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-3862" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-3901" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-8823" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2018-1000858" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-3895" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-11793" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-1000858" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-20454" }, { "trust": 0.2, "url": 
"https://nvd.nist.gov/vuln/detail/cve-2019-8720" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-9894" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-8816" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-9843" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-13627" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-8771" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13050" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-3897" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-9806" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-8814" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-14889" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-8743" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-33929" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-9915" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-36222" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-8815" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8625" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-8783" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-20807" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-9952" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-22946" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-8766" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-3868" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-8846" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-3894" }, { 
"trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-30666" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-8782" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3521" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.4_release_notes/" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:1585" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-26116" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28362" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.7/jaeger/jaeger_install/rhb" }, { "trust": 0.1, "url": "https://issues.jboss.org/):" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:2532" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23336" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-27619" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3114" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-25039" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21639" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-12364" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-28165" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-28092" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-25037" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-25037" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-12363" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-10878" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-24330" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-28935" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2021-28163" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-25034" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-25035" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-14866" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_mana" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-25038" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-14866" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-26137" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21309" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-25040" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21640" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-28918" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-24330" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3543" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-25042" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3501" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-25042" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-12362" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-25648" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-25038" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-25032" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-25041" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-8648" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-25036" }, { "trust": 0.1, "url": 
"https://nvd.nist.gov/vuln/detail/cve-2019-25032" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-27170" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-24331" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-25692" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-25036" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-25035" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-2433" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-10543" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3347" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-12362" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-12363" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-24332" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-10543" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-25039" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-25040" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-12364" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-10878" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-25041" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:2461" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-25034" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:2705" }, { "trust": 0.1, "url": "https://www.redhat.com/mailman/listinfo/rhsa-announce" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-10029" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-10029" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:0348" }, { "trust": 0.1, "url": 
"https://nvd.nist.gov/vuln/detail/cve-2020-29573" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-29573" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8771" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8783" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-44716" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8812" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-43813" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8782" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19906" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-27781" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8769" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:0055" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8764" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2014-3577" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2014-3577" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3749" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-41190" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-25660" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3733" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-19906" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21684" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:0056" }, { "trust": 0.1, "url": 
"https://nvd.nist.gov/vuln/detail/cve-2019-8811" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-39226" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8808" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-15903" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-44717" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-20843" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0532" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-20843" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3121" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8813" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8766" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-21673" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-15903" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-25677" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-33195" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-27218" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/4.8/html/serverless/index" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-33197" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33195" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-33198" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33198" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-34558" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:3556" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33197" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20271" }, { 
"trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3421" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-20271" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3703" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1271" }, { "trust": 0.1, "url": "https://access.redhat.com/security/updates/classification/#important" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25032" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-30631" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-23852" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:5924" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-25032" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0778" } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2019-016179" }, { "db": "PACKETSTORM", "id": "162634" }, { "db": "PACKETSTORM", "id": "163267" }, { "db": "PACKETSTORM", "id": "163188" }, { "db": "PACKETSTORM", "id": "163496" }, { "db": "PACKETSTORM", "id": "161254" }, { "db": "PACKETSTORM", "id": "166279" }, { "db": "PACKETSTORM", "id": "164192" }, { "db": "PACKETSTORM", "id": "168011" }, { "db": "CNNVD", "id": "CNNVD-202101-048" }, { "db": "NVD", "id": "CVE-2019-25013" } ] }, "sources": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "VULMON", "id": "CVE-2019-25013" }, { "db": "JVNDB", "id": "JVNDB-2019-016179" }, { "db": "PACKETSTORM", "id": "162634" }, { "db": "PACKETSTORM", "id": "163267" }, { "db": "PACKETSTORM", "id": "163188" }, { "db": "PACKETSTORM", "id": "163496" }, { "db": "PACKETSTORM", "id": "161254" }, { "db": "PACKETSTORM", "id": "166279" }, { "db": "PACKETSTORM", "id": "164192" }, { "db": "PACKETSTORM", "id": "168011" }, { "db": "CNNVD", "id": "CNNVD-202101-048" }, { "db": "NVD", "id": 
"CVE-2019-25013" } ] }, "sources_release_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2021-01-04T00:00:00", "db": "VULMON", "id": "CVE-2019-25013" }, { "date": "2021-09-10T00:00:00", "db": "JVNDB", "id": "JVNDB-2019-016179" }, { "date": "2021-05-19T13:59:56", "db": "PACKETSTORM", "id": "162634" }, { "date": "2021-06-23T16:08:25", "db": "PACKETSTORM", "id": "163267" }, { "date": "2021-06-17T17:53:22", "db": "PACKETSTORM", "id": "163188" }, { "date": "2021-07-14T15:02:07", "db": "PACKETSTORM", "id": "163496" }, { "date": "2021-02-02T16:12:10", "db": "PACKETSTORM", "id": "161254" }, { "date": "2022-03-11T16:38:38", "db": "PACKETSTORM", "id": "166279" }, { "date": "2021-09-17T16:04:56", "db": "PACKETSTORM", "id": "164192" }, { "date": "2022-08-09T14:36:05", "db": "PACKETSTORM", "id": "168011" }, { "date": "2021-01-04T00:00:00", "db": "CNNVD", "id": "CNNVD-202101-048" }, { "date": "2021-01-04T18:15:13.027000", "db": "NVD", "id": "CVE-2019-25013" } ] }, "sources_update_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2023-11-09T00:00:00", "db": "VULMON", "id": "CVE-2019-25013" }, { "date": "2023-06-16T05:32:00", "db": "JVNDB", "id": "JVNDB-2019-016179" }, { "date": "2022-12-12T00:00:00", "db": "CNNVD", "id": "CNNVD-202101-048" }, { "date": "2023-11-09T14:44:33.733000", "db": "NVD", "id": "CVE-2019-25013" } ] }, "threat_type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/threat_type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "remote", "sources": [ { "db": "PACKETSTORM", "id": "168011" }, { "db": "CNNVD", "id": "CNNVD-202101-048" } ], "trust": 0.7 }, "title": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/title#", "sources": { "@container": "@list", "@context": { "@vocab": 
"https://www.variotdbs.pl/ref/sources#" } } }, "data": "GNU\u00a0C\u00a0Library\u00a0 Out-of-bounds read vulnerability in", "sources": [ { "db": "JVNDB", "id": "JVNDB-2019-016179" } ], "trust": 0.8 }, "type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "buffer error", "sources": [ { "db": "CNNVD", "id": "CNNVD-202101-048" } ], "trust": 0.6 } }
var-202105-1457
Vulnerability from variot
A flaw was found in libwebp in versions before 1.0.1. An out-of-bounds read was found in function ChunkVerifyAndAssign. The highest threat from this vulnerability is to data confidentiality and to the service availability. libwebp is vulnerable to an out-of-bounds read. Information may be obtained, and a denial-of-service (DoS) condition may result. Bugs fixed (https://bugzilla.redhat.com/):
1944888 - CVE-2021-21409 netty: Request smuggling via content-length header 2004133 - CVE-2021-37136 netty-codec: Bzip2Decoder doesn't allow setting size restrictions for decompressed data 2004135 - CVE-2021-37137 netty-codec: SnappyFrameDecoder doesn't restrict chunk length and may buffer skippable chunks in an unnecessary way 2030932 - CVE-2021-44228 log4j-core: Remote code execution in Log4j 2.x when logs contain an attacker-controlled string value
- JIRA issues fixed (https://issues.jboss.org/):
LOG-1775 - [release-5.2] Syslog output is serializing json incorrectly LOG-1824 - [release-5.2] Rejected by Elasticsearch and unexpected json-parsing LOG-1963 - [release-5.2] CLO panic: runtime error: slice bounds out of range [:-1] LOG-1970 - Applying cluster state is causing elasticsearch to hit an issue and become unusable
- Relevant releases/architectures:
Red Hat Enterprise Linux AppStream (v. 8) - aarch64, ppc64le, s390x, x86_64
- Description:
The libwebp packages provide a library and tools for the WebP graphics format. WebP is an image format with a lossy compression of digital photographic images. WebP consists of a codec based on the VP8 format, and a container based on the Resource Interchange File Format (RIFF). Webmasters, web developers and browser developers can use WebP to compress, archive, and distribute digital images more efficiently.
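The ChunkVerifyAndAssign flaw noted above is an out-of-bounds read in exactly this RIFF chunk parsing. As an illustration only (a Python sketch, not libwebp's actual C implementation), the essential defense is validating each chunk's declared size against the bytes actually present before touching the payload:

```python
import struct

def iter_riff_chunks(data: bytes):
    """Walk the top-level chunks of a WebP RIFF container, validating every
    declared size against the bytes actually present. A parser that trusts
    the declared sizes -- the class of flaw behind ChunkVerifyAndAssign --
    can read past the end of the buffer on a truncated or malformed file."""
    if len(data) < 12 or data[:4] != b"RIFF" or data[8:12] != b"WEBP":
        raise ValueError("not a WebP/RIFF file")
    riff_size = struct.unpack("<I", data[4:8])[0]
    if riff_size > len(data) - 8:
        raise ValueError("RIFF size field exceeds the file")
    pos = 12
    while pos + 8 <= len(data):
        fourcc = data[pos:pos + 4]
        size = struct.unpack("<I", data[pos + 4:pos + 8])[0]
        payload = pos + 8
        # The essential bounds check: the declared chunk size must fit
        # within the bytes that remain in the buffer.
        if size > len(data) - payload:
            raise ValueError(f"chunk {fourcc!r} overruns the file")
        yield fourcc, data[payload:payload + size]
        pos = payload + size + (size & 1)  # chunk payloads are 2-byte aligned
```

On a file whose chunk header claims more bytes than remain, this raises an error instead of reading out of bounds, which is the behavior the patched libwebp enforces.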
Additional Changes:
For detailed information on changes in this release, see the Red Hat Enterprise Linux 8.5 Release Notes linked from the References section.

- Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/11258
- Package List:
Red Hat Enterprise Linux AppStream (v. 8). Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
- Summary:
The Migration Toolkit for Containers (MTC) 1.6.3 is now available.

- Description:

The Migration Toolkit for Containers (MTC) enables you to migrate Kubernetes resources, persistent volume data, and internal container images between OpenShift Container Platform clusters, using the MTC web console or the Kubernetes API.

- Bugs fixed (https://bugzilla.redhat.com/):
2019088 - "MigrationController" CR displays syntax error when unquiescing applications 2021666 - Route name longer than 63 characters causes direct volume migration to fail 2021668 - "MigrationController" CR ignores the "cluster_subdomain" value for direct volume migration routes 2022017 - CVE-2021-3948 mig-controller: incorrect namespaces handling may lead to not authorized usage of Migration Toolkit for Containers (MTC) 2024966 - Manifests not used by Operator Lifecycle Manager must be removed from the MTC 1.6 Operator image 2027196 - "migration-controller" pod goes into "CrashLoopBackoff" state if an invalid registry route is entered on the "Clusters" page of the web console 2027382 - "Copy oc describe/oc logs" window does not close automatically after timeout 2028841 - "rsync-client" container fails during direct volume migration with "Address family not supported by protocol" error 2031793 - "migration-controller" pod goes into "CrashLoopBackOff" state if "MigPlan" CR contains an invalid "includedResources" resource 2039852 - "migration-controller" pod goes into "CrashLoopBackOff" state if "MigPlan" CR contains an invalid "destMigClusterRef" or "srcMigClusterRef"
- Bugs fixed (https://bugzilla.redhat.com/):
1963232 - CVE-2021-33194 golang: x/net/html: infinite loop in ParseFragment
- JIRA issues fixed (https://issues.jboss.org/):
LOG-1168 - Disable hostname verification in syslog TLS settings
LOG-1235 - Using HTTPS without a secret does not translate into the correct 'scheme' value in Fluentd
LOG-1375 - ssl_ca_cert should be optional
LOG-1378 - CLO should support sasl_plaintext(Password over http)
LOG-1392 - In fluentd config, flush_interval can't be set with flush_mode=immediate
LOG-1494 - Syslog output is serializing json incorrectly
LOG-1555 - Fluentd logs emit transaction failed: error_class=NoMethodError while forwarding to external syslog server
LOG-1575 - Rejected by Elasticsearch and unexpected json-parsing
LOG-1735 - Regression introducing flush_at_shutdown
LOG-1774 - The collector logs should be excluded in fluent.conf
LOG-1776 - fluentd total_limit_size sets value beyond available space
LOG-1822 - OpenShift Alerting Rules Style-Guide Compliance
LOG-1859 - CLO Should not error and exit early on missing ca-bundle when cluster wide proxy is not enabled
LOG-1862 - Unsupported kafka parameters when enabled Kafka SASL
LOG-1903 - Fix the Display of ClusterLogging type in OLM
LOG-1911 - CLF API changes to Opt-in to multiline error detection
LOG-1918 - Alert FluentdNodeDown always firing
LOG-1939 - Opt-in multiline detection breaks cloudwatch forwarding
- -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512
Debian Security Advisory DSA-4930-1 security@debian.org https://www.debian.org/security/ Moritz Muehlenhoff June 10, 2021 https://www.debian.org/security/faq
Package : libwebp CVE ID : CVE-2018-25009 CVE-2018-25010 CVE-2018-25011 CVE-2018-25013 CVE-2018-25014 CVE-2020-36328 CVE-2020-36329 CVE-2020-36330 CVE-2020-36331 CVE-2020-36332
Multiple vulnerabilities were discovered in libwebp, the implementation of the WebP image format, which could result in denial of service, memory disclosure or potentially the execution of arbitrary code if malformed images are processed.
For the stable distribution (buster), these problems have been fixed in version 0.6.1-2+deb10u1.
We recommend that you upgrade your libwebp packages.
For the detailed security status of libwebp please refer to its security tracker page at: https://security-tracker.debian.org/tracker/libwebp
Further information about Debian Security Advisories, how to apply these updates to your system and frequently asked questions can be found at: https://www.debian.org/security/
Mailing list: debian-security-announce@lists.debian.org -----BEGIN PGP SIGNATURE-----
iQIzBAEBCgAdFiEEtuYvPRKsOElcDakFEMKTtsN8TjYFAmDCfg0ACgkQEMKTtsN8 TjaaKBAAqMJfe5aH4Gh14SpB7h2S5JJUK+eo/aPo1tXn7BoLiF4O5g05+McyUOdE HI9ibolUfv+HoZlCDC93MBJvopWgd1/oqReHML5n2GXPBESYXpRstL04qwaRqu9g AvofhX88EwHefTXmljVTL4W1KgMJuhhPxVLdimxoqd0/hjagZtA7B7R05khigC5k nHMFoRogSPjI9H4vI2raYaOqC26zmrZNbk/CRVhuUbtDOG9qy9okjc+6KM9RcbXC ha++EhrGXPjCg5SwrQAZ50nW3Jwif2WpSeULfTrqHr2E8nHGUCHDMMtdDwegFH/X FK0dVaNPgrayw1Dji+fhBQz3qR7pl/1DK+gsLtREafxY0+AxZ57kCi51CykT/dLs eC4bOPaoho91KuLFrT+X/AyAASS/00VuroFJB4sWQUvEpBCnWPUW1m3NvjsyoYuj 0wmQMVM5Bb/aYuWAM+/V9MeoklmtIn+OPAXqsVvLxdbB0GScwJV86/NvsN6Nde6c twImfMCK1V75FPrIsxx37M52AYWvALgXbWoVi4aQPyPeDerQdgUPL1FzTGzem0NQ PnXhuE27H/pJz79DosW8md0RFr+tfPgZ8CeTirXSUUXFiqhcXR/w1lqN2vlmfm8V dmwgzvu9A7ZhG++JRqbbMx2D+NS4coGgRdA7XPuRrdNKniRIDhQ= =pN/j -----END PGP SIGNATURE----- . -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
====================================================================
Red Hat Security Advisory
Synopsis: Important: OpenShift Container Platform 4.11.0 bug fix and security update Advisory ID: RHSA-2022:5069-01 Product: Red Hat OpenShift Enterprise Advisory URL: https://access.redhat.com/errata/RHSA-2022:5069 Issue date: 2022-08-10 CVE Names: CVE-2018-25009 CVE-2018-25010 CVE-2018-25012 CVE-2018-25013 CVE-2018-25014 CVE-2018-25032 CVE-2019-5827 CVE-2019-13750 CVE-2019-13751 CVE-2019-17594 CVE-2019-17595 CVE-2019-18218 CVE-2019-19603 CVE-2019-20838 CVE-2020-13435 CVE-2020-14155 CVE-2020-17541 CVE-2020-19131 CVE-2020-24370 CVE-2020-28493 CVE-2020-35492 CVE-2020-36330 CVE-2020-36331 CVE-2020-36332 CVE-2021-3481 CVE-2021-3580 CVE-2021-3634 CVE-2021-3672 CVE-2021-3695 CVE-2021-3696 CVE-2021-3697 CVE-2021-3737 CVE-2021-4115 CVE-2021-4156 CVE-2021-4189 CVE-2021-20095 CVE-2021-20231 CVE-2021-20232 CVE-2021-23177 CVE-2021-23566 CVE-2021-23648 CVE-2021-25219 CVE-2021-31535 CVE-2021-31566 CVE-2021-36084 CVE-2021-36085 CVE-2021-36086 CVE-2021-36087 CVE-2021-38185 CVE-2021-38593 CVE-2021-40528 CVE-2021-41190 CVE-2021-41617 CVE-2021-42771 CVE-2021-43527 CVE-2021-43818 CVE-2021-44225 CVE-2021-44906 CVE-2022-0235 CVE-2022-0778 CVE-2022-1012 CVE-2022-1215 CVE-2022-1271 CVE-2022-1292 CVE-2022-1586 CVE-2022-1621 CVE-2022-1629 CVE-2022-1706 CVE-2022-1729 CVE-2022-2068 CVE-2022-2097 CVE-2022-21698 CVE-2022-22576 CVE-2022-23772 CVE-2022-23773 CVE-2022-23806 CVE-2022-24407 CVE-2022-24675 CVE-2022-24903 CVE-2022-24921 CVE-2022-25313 CVE-2022-25314 CVE-2022-26691 CVE-2022-26945 CVE-2022-27191 CVE-2022-27774 CVE-2022-27776 CVE-2022-27782 CVE-2022-28327 CVE-2022-28733 CVE-2022-28734 CVE-2022-28735 CVE-2022-28736 CVE-2022-28737 CVE-2022-29162 CVE-2022-29810 CVE-2022-29824 CVE-2022-30321 CVE-2022-30322 CVE-2022-30323 CVE-2022-32250 ==================================================================== 1. Summary:
Red Hat OpenShift Container Platform release 4.11.0 is now available with updates to packages and images that fix several bugs and add enhancements.
This release includes a security update for Red Hat OpenShift Container Platform 4.11.
Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Description:
Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.
This advisory contains the container images for Red Hat OpenShift Container Platform 4.11.0. See the following advisory for the RPM packages for this release:
https://access.redhat.com/errata/RHSA-2022:5068
Space precludes documenting all of the container images in this advisory. See the following Release Notes documentation, which will be updated shortly for this release, for details about these changes:
https://docs.openshift.com/container-platform/4.11/release_notes/ocp-4-11-release-notes.html
Security Fix(es):
- go-getter: command injection vulnerability (CVE-2022-26945)
- go-getter: unsafe download (issue 1 of 3) (CVE-2022-30321)
- go-getter: unsafe download (issue 2 of 3) (CVE-2022-30322)
- go-getter: unsafe download (issue 3 of 3) (CVE-2022-30323)
- nanoid: Information disclosure via valueOf() function (CVE-2021-23566)
- sanitize-url: XSS (CVE-2021-23648)
- minimist: prototype pollution (CVE-2021-44906)
- node-fetch: exposure of sensitive information to an unauthorized actor (CVE-2022-0235)
- prometheus/client_golang: Denial of service using InstrumentHandlerCounter (CVE-2022-21698)
- golang: crash in a golang.org/x/crypto/ssh server (CVE-2022-27191)
- go-getter: writes SSH credentials into logfile, exposing sensitive credentials to local uses (CVE-2022-29810)
- opencontainers: OCI manifest and index parsing confusion (CVE-2021-41190)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
You may download the oc tool and use it to inspect release image metadata as follows:
(For x86_64 architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.11.0-x86_64
The image digest is sha256:300bce8246cf880e792e106607925de0a404484637627edf5f517375517d54a4
(For aarch64 architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.11.0-aarch64
The image digest is sha256:29fa8419da2afdb64b5475d2b43dad8cc9205e566db3968c5738e7a91cf96dfe
(For s390x architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.11.0-s390x
The image digest is sha256:015d6180238b4024d11dfef6751143619a0458eccfb589f2058ceb1a6359dd46
(For ppc64le architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.11.0-ppc64le
The image digest is sha256:5052f8d5597c6656ca9b6bfd3de521504c79917aa80feb915d3c8546241f86ca
All OpenShift Container Platform 4.11 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift Console or the CLI oc command. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.11/updating/updating-cluster-cli.html
- Solution:
For OpenShift Container Platform 4.11 see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this asynchronous errata update:
https://docs.openshift.com/container-platform/4.11/release_notes/ocp-4-11-release-notes.html
Details on how to access this content are available at https://docs.openshift.com/container-platform/4.11/updating/updating-cluster-cli.html
- Bugs fixed (https://bugzilla.redhat.com/):
1817075 - MCC & MCO don't free leader leases during shut down -> 10 minutes of leader election timeouts
1822752 - cluster-version operator stops applying manifests when blocked by a precondition check
1823143 - oc adm release extract --command, --tools doesn't pull from localregistry when given a localregistry/image
1858418 - [OCPonRHV] OpenShift installer fails when Blank template is missing in oVirt/RHV
1859153 - [AWS] An IAM error occurred occasionally during the installation phase: Invalid IAM Instance Profile name
1896181 - [ovirt] install fails: due to terraform error "Cannot run VM. VM is being updated" on vm resource
1898265 - [OCP 4.5][AWS] Installation failed: error updating LB Target Group
1902307 - [vSphere] cloud labels management via cloud provider makes nodes not ready
1905850 - oc adm policy who-can failed to check the operatorcondition/status resource
1916279 - [OCPonRHV] Sometimes terraform installation fails on -failed to fetch Cluster(another terraform bug)
1917898 - [ovirt] install fails: due to terraform error "Tag not matched: expect but got " on vm resource
1918005 - [vsphere] If there are multiple port groups with the same name installation fails
1918417 - IPv6 errors after exiting crictl
1918690 - Should update the KCM resource-graph timely with the latest configure
1919980 - oVirt installer fails due to terraform error "Failed to wait for Templte(...) to become ok"
1921182 - InspectFailed: kubelet Failed to inspect image: rpc error: code = DeadlineExceeded desc = context deadline exceeded
1923536 - Image pullthrough does not pass 429 errors back to capable clients
1926975 - [aws-c2s] kube-apiserver crashloops due to missing cloud config
1928932 - deploy/route_crd.yaml in openshift/router uses deprecated v1beta1 CRD API
1932812 - Installer uses the terraform-provider in the Installer's directory if it exists
1934304 - MemoryPressure Top Pod Consumers seems to be 2x expected value
1943937 - CatalogSource incorrect parsing validation
1944264 - [ovn] CNO should gracefully terminate OVN databases
1944851 - List of ingress routes not cleaned up when routers no longer exist - take 2
1945329 - In k8s 1.21 bump conntrack 'should drop INVALID conntrack entries' tests are disabled
1948556 - Cannot read property 'apiGroup' of undefined error viewing operator CSV
1949827 - Kubelet bound to incorrect IPs, referring to incorrect NICs in 4.5.x
1957012 - Deleting the KubeDescheduler CR does not remove the corresponding deployment or configmap
1957668 - oc login does not show link to console
1958198 - authentication operator takes too long to pick up a configuration change
1958512 - No 1.25 shown in REMOVEDINRELEASE for apis audited with k8s.io/removed-release 1.25 and k8s.io/deprecated true
1961233 - Add CI test coverage for DNS availability during upgrades
1961844 - baremetal ClusterOperator installed by CVO does not have relatedObjects
1965468 - [OSP] Delete volume snapshots based on cluster ID in their metadata
1965934 - can not get new result with "Refresh off" if click "Run queries" again
1965969 - [aws] the public hosted zone id is not correct in the destroy log, while destroying a cluster which is using BYO private hosted zone.
1968253 - GCP CSI driver can provision volume with access mode ROX
1969794 - [OSP] Document how to use image registry PVC backend with custom availability zones
1975543 - [OLM] Remove stale cruft installed by CVO in earlier releases
1976111 - [tracker] multipathd.socket is missing start conditions
1976782 - Openshift registry starts to segfault after S3 storage configuration
1977100 - Pod failed to start with message "set CPU load balancing: readdirent /proc/sys/kernel/sched_domain/cpu66/domain0: no such file or directory"
1978303 - KAS pod logs show: [SHOULD NOT HAPPEN] ...failed to convert new object...CertificateSigningRequest) to smd typed: .status.conditions: duplicate entries for key [type=\"Approved\"]
1978798 - [Network Operator] Upgrade: The configuration to enable network policy ACL logging is missing on the cluster upgraded from 4.7->4.8
1979671 - Warning annotation for pods with cpu requests or limits on single-node OpenShift cluster without workload partitioning
1982737 - OLM does not warn on invalid CSV
1983056 - IP conflict while recreating Pod with fixed name
1984785 - LSO CSV does not contain disconnected annotation
1989610 - Unsupported data types should not be rendered on operand details page
1990125 - co/image-registry is degrade because ImagePrunerDegraded: Job has reached the specified backoff limit
1990384 - 502 error on "Observe -> Alerting" UI after disabled local alertmanager
1992553 - all the alert rules' annotations "summary" and "description" should comply with the OpenShift alerting guidelines
1994117 - Some hardcodes are detected at the code level in orphaned code
1994820 - machine controller doesn't send vCPU quota failed messages to cluster install logs
1995953 - Ingresscontroller change the replicas to scaleup first time will be rolling update for all the ingress pods
1996544 - AWS region ap-northeast-3 is missing in installer prompt
1996638 - Helm operator manager container restart when CR is creating&deleting
1997120 - test_recreate_pod_in_namespace fails - Timed out waiting for namespace
1997142 - OperatorHub: Filtering the OperatorHub catalog is extremely slow
1997704 - [osp][octavia lb] given loadBalancerIP is ignored when creating a LoadBalancer type svc
1999325 - FailedMount MountVolume.SetUp failed for volume "kube-api-access" : object "openshift-kube-scheduler"/"kube-root-ca.crt" not registered
1999529 - Must gather fails to gather logs for all the namespace if server doesn't have volumesnapshotclasses resource
1999891 - must-gather collects backup data even when Pods fails to be created
2000653 - Add hypershift namespace to exclude namespaces list in descheduler configmap
2002009 - IPI Baremetal, qemu-convert takes to long to save image into drive on slow/large disks
2002602 - Storageclass creation page goes blank when "Enable encryption" is clicked if there is a syntax error in the configmap
2002868 - Node exporter not able to scrape OVS metrics
2005321 - Web Terminal is not opened on Stage of DevSandbox when terminal instance is not created yet
2005694 - Removing proxy object takes up to 10 minutes for the changes to propagate to the MCO
2006067 - Objects are not valid as a React child
2006201 - ovirt-csi-driver-node pods are crashing intermittently
2007246 - Openshift Container Platform - Ingress Controller does not set allowPrivilegeEscalation in the router deployment
2007340 - Accessibility issues on topology - list view
2007611 - TLS issues with the internal registry and AWS S3 bucket
2007647 - oc adm release info --changes-from does not show changes in repos that squash-merge
2008486 - Double scroll bar shows up on dragging the task quick search to the bottom
2009345 - Overview page does not load from openshift console for some set of users after upgrading to 4.7.19
2009352 - Add image-registry usage metrics to telemeter
2009845 - Respect overrides changes during installation
2010361 - OpenShift Alerting Rules Style-Guide Compliance
2010364 - OpenShift Alerting Rules Style-Guide Compliance
2010393 - [sig-arch][Late] clients should not use APIs that are removed in upcoming releases [Suite:openshift/conformance/parallel]
2011525 - Rate-limit incoming BFD to prevent ovn-controller DoS
2011895 - Details about cloud errors are missing from PV/PVC errors
2012111 - LSO still try to find localvolumeset which is already deleted
2012969 - need to figure out why osupdatedstart to reboot is zero seconds
2013144 - Developer catalog category links could not be open in a new tab (sharing and open a deep link works fine)
2013461 - Import deployment from Git with s2i expose always port 8080 (Service and Pod template, not Route) if another Route port is selected by the user
2013734 - unable to label downloads route in openshift-console namespace
2013822 - ensure that the container-tools content comes from the RHAOS plashets
2014161 - PipelineRun logs are delayed and stuck on a high log volume
2014240 - Image registry uses ICSPs only when source exactly matches image
2014420 - Topology page is crashed
2014640 - Cannot change storage class of boot disk when cloning from template
2015023 - Operator objects are re-created even after deleting it
2015042 - Adding a template from the catalog creates a secret that is not owned by the TemplateInstance
2015356 - Different status shows on VM list page and details page
2015375 - PVC creation for ODF/IBM Flashsystem shows incorrect types
2015459 - [azure][openstack]When image registry configure an invalid proxy, registry pods are CrashLoopBackOff
2015800 - [IBM]Shouldn't change status.storage.bucket and status.storage.resourceKeyCRN when update sepc.stroage,ibmcos with invalid value
2016425 - Adoption controller generating invalid metadata.Labels for an already adopted Subscription resource
2016534 - externalIP does not work when egressIP is also present
2017001 - Topology context menu for Serverless components always open downwards
2018188 - VRRP ID conflict between keepalived-ipfailover and cluster VIPs
2018517 - [sig-arch] events should not repeat pathologically expand_less failures - s390x CI
2019532 - Logger object in LSO does not log source location accurately
2019564 - User settings resources (ConfigMap, Role, RB) should be deleted when a user is deleted
2020483 - Parameter $auto_interval_period is in Period drop-down list
2020622 - e2e-aws-upi and e2e-azure-upi jobs are not working
2021041 - [vsphere] Not found TagCategory when destroying ipi cluster
2021446 - openshift-ingress-canary is not reporting DEGRADED state, even though the canary route is not available and accessible
2022253 - Web terminal view is broken
2022507 - Pods stuck in OutOfpods state after running cluster-density
2022611 - Remove BlockPools(no use case) and Object(redundat with Overview) tab on the storagesystem page for NooBaa only and remove BlockPools tab for External mode deployment
2022745 - Cluster reader is not able to list NodeNetwork objects
2023295 - Must-gather tool gathering data from custom namespaces.
2023691 - ClusterIP internalTrafficPolicy does not work for ovn-kubernetes
2024427 - oc completion zsh doesn't auto complete
2024708 - The form for creating operational CRs is badly rendering filed names ("obsoleteCPUs" -> "Obsolete CP Us" )
2024821 - [Azure-File-CSI] need more clear info when requesting pvc with volumeMode Block
2024938 - CVE-2021-41190 opencontainers: OCI manifest and index parsing confusion
2025624 - Ingress router metrics endpoint serving old certificates after certificate rotation
2026356 - [IPI on Azure] The bootstrap machine type should be same as master
2026461 - Completed pods in Openshift cluster not releasing IP addresses and results in err: range is full unless manually deleted
2027603 - [UI] Dropdown doesn't close on it's own after arbiter zone selection on 'Capacity and nodes' page
2027613 - Users can't silence alerts from the dev console
2028493 - OVN-migration failed - ovnkube-node: error waiting for node readiness: timed out waiting for the condition
2028532 - noobaa-pg-db-0 pod stuck in Init:0/2
2028821 - Misspelled label in ODF management UI - MCG performance view
2029438 - Bootstrap node cannot resolve api-int because NetworkManager replaces resolv.conf
2029470 - Recover from suddenly appearing old operand revision WAS: kube-scheduler-operator test failure: Node's not achieving new revision
2029797 - Uncaught exception: ResizeObserver loop limit exceeded
2029835 - CSI migration for vSphere: Inline-volume tests failing
2030034 - prometheusrules.openshift.io: dial tcp: lookup prometheus-operator.openshift-monitoring.svc on 172.30.0.10:53: no such host
2030530 - VM created via customize wizard has single quotation marks surrounding its password
2030733 - wrong IP selected to connect to the nodes when ExternalCloudProvider enabled
2030776 - e2e-operator always uses quay master images during presubmit tests
2032559 - CNO allows migration to dual-stack in unsupported configurations
2032717 - Unable to download ignition after coreos-installer install --copy-network
2032924 - PVs are not being cleaned up after PVC deletion
2033482 - [vsphere] two variables in tf are undeclared and get warning message during installation
2033575 - monitoring targets are down after the cluster run for more than 1 day
2033711 - IBM VPC operator needs e2e csi tests for ibmcloud
2033862 - MachineSet is not scaling up due to an OpenStack error trying to create multiple ports with the same MAC address
2034147 - OpenShift VMware IPI Installation fails with Resource customization when corespersocket is unset and vCPU count is not a multiple of 4
2034296 - Kubelet and Crio fails to start during upgrde to 4.7.37
2034411 - [Egress Router] No NAT rules for ipv6 source and destination created in ip6tables-save
2034688 - Allow Prometheus/Thanos to return 401 or 403 when the request isn't authenticated
2034958 - [sig-network] Conntrack should be able to preserve UDP traffic when initial unready endpoints get ready
2035005 - MCD is not always removing in progress taint after a successful update
2035334 - [RFE] [OCPonRHV] Provision machines with preallocated disks
2035899 - Operator-sdk run bundle doesn't support arm64 env
2036202 - Bump podman to >= 3.3.0 so that setup of multiple credentials for a single registry which can be distinguished by their path will work
2036594 - [MAPO] Machine goes to failed state due to a momentary error of the cluster etcd
2036948 - SR-IOV Network Device Plugin should handle offloaded VF instead of supporting only PF
2037190 - dns operator status flaps between True/False/False and True/True/(False|True) after updating dnses.operator.openshift.io/default
2037447 - Ingress Operator is not closing TCP connections.
2037513 - I/O metrics from the Kubernetes/Compute Resources/Cluster Dashboard show as no datapoints found
2037542 - Pipeline Builder footer is not sticky and yaml tab doesn't use full height
2037610 - typo for the Terminated message from thanos-querier pod description info
2037620 - Upgrade playbook should quit directly when trying to upgrade RHEL-7 workers to 4.10
2037625 - AppliedClusterResourceQuotas can not be shown on project overview
2037626 - unable to fetch ignition file when scaleup rhel worker nodes on cluster enabled Tang disk encryption
2037628 - Add test id to kms flows for automation
2037721 - PodDisruptionBudgetAtLimit alert fired in SNO cluster
2037762 - Wrong ServiceMonitor definition is causing failure during Prometheus configuration reload and preventing changes from being applied
2037841 - [RFE] use /dev/ptp_hyperv on Azure/AzureStack
2038115 - Namespace and application bar is not sticky anymore
2038244 - Import from git ignore the given servername and could not validate On-Premises GitHub and BitBucket installations
2038405 - openshift-e2e-aws-workers-rhel-workflow in CI step registry broken
2038774 - IBM-Cloud OVN IPsec fails, IKE UDP ports and ESP protocol not in security group
2039135 - the error message is not clear when using "opm index prune" to prune a file-based index image
2039161 - Note about token for encrypted PVCs should be removed when only cluster wide encryption checkbox is selected
2039253 - ovnkube-node crashes on duplicate endpoints
2039256 - Domain validation fails when TLD contains a digit.
2039277 - Topology list view items are not highlighted on keyboard navigation
2039462 - Application tab in User Preferences dropdown menus are too wide.
2039477 - validation icon is missing from Import from git
2039589 - The toolbox command always ignores [command] the first time
2039647 - Some developer perspective links are not deep-linked causes developer to sometimes delete/modify resources in the wrong project
2040180 - Bug when adding a new table panel to a dashboard for OCP UI with only one value column
2040195 - Ignition fails to enable systemd units with backslash-escaped characters in their names
2040277 - ThanosRuleNoEvaluationFor10Intervals alert description is wrong
2040488 - OpenShift-Ansible BYOH Unit Tests are Broken
2040635 - CPU Utilisation is negative number for "Kubernetes / Compute Resources / Cluster" dashboard
2040654 - 'oc adm must-gather -- some_script' should exit with same non-zero code as the failed 'some_script' exits
2040779 - Nodeport svc not accessible when the backend pod is on a window node
2040933 - OCP 4.10 nightly build will fail to install if multiple NICs are defined on KVM nodes
2041133 - 'oc explain route.status.ingress.conditions' shows type 'Currently only Ready' but actually is 'Admitted'
2041454 - Garbage values accepted for --reference-policy in oc import-image without any error
2041616 - Ingress operator tries to manage DNS of additional ingresscontrollers that are not under clusters basedomain, which can't work
2041769 - Pipeline Metrics page not showing data for normal user
2041774 - Failing git detection should not recommend Devfiles as import strategy
2041814 - The KubeletConfigController wrongly process multiple confs for a pool
2041940 - Namespace pre-population not happening till a Pod is created
2042027 - Incorrect feedback for "oc label pods --all"
2042348 - Volume ID is missing in output message when expanding volume which is not mounted.
2042446 - CSIWithOldVSphereHWVersion alert recurring despite upgrade to vmx-15
2042501 - use lease for leader election
2042587 - ocm-operator: Improve reconciliation of CA ConfigMaps
2042652 - Unable to deploy hw-event-proxy operator
2042838 - The status of container is not consistent on Container details and pod details page
2042852 - Topology toolbars are unaligned to other toolbars
2042999 - A pod cannot reach kubernetes.default.svc.cluster.local cluster IP
2043035 - Wrong error code provided when request contains invalid argument
2043068 - "available of" text disappears in Utilization item if x is 0
2043080 - openshift-installer intermittent failure on AWS with Error: InvalidVpcID.NotFound: The vpc ID 'vpc-123456789' does not exist
2043094 - ovnkube-node not deleting stale conntrack entries when endpoints go away
2043118 - Host should transition through Preparing when HostFirmwareSettings changed
2043132 - Add a metric when vsphere csi storageclass creation fails
2043314 - oc debug node does not meet compliance requirement
2043336 - Creating multi SriovNetworkNodePolicy cause the worker always be draining
2043428 - Address Alibaba CSI driver operator review comments
2043533 - Update ironic, inspector, and ironic-python-agent to latest bugfix release
2043672 - [MAPO] root volumes not working
2044140 - When 'oc adm upgrade --to-image ...' rejects an update as not recommended, it should mention --allow-explicit-upgrade
2044207 - [KMS] The data in the text box does not get cleared on switching the authentication method
2044227 - Test Managed cluster should only include cluster daemonsets that have maxUnavailable update of 10 or 33 percent fails
2044412 - Topology list misses separator lines and hover effect let the list jump 1px
2044421 - Topology list does not allow selecting an application group anymore
2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor
2044803 - Unify button text style on VM tabs
2044824 - Failing test in periodics: [sig-network] Services should respect internalTrafficPolicy=Local Pod and Node, to Pod (hostNetwork: true) [Feature:ServiceInternalTrafficPolicy] [Skipped:Network/OVNKubernetes] [Suite:openshift/conformance/parallel] [Suite:k8s]
2045065 - Scheduled pod has nodeName changed
2045073 - Bump golang and build images for local-storage-operator
2045087 - Failed to apply sriov policy on intel nics
2045551 - Remove enabled FeatureGates from TechPreviewNoUpgrade
2045559 - API_VIP moved when kube-api container on another master node was stopped
2045577 - [ocp 4.9 | ovn-kubernetes] ovsdb_idl|WARN|transaction error: {"details":"cannot delete Datapath_Binding row 29e48972-xxxx because of 2 remaining reference(s)","error":"referential integrity violation"}
2045872 - SNO: cluster-policy-controller failed to start due to missing serving-cert/tls.crt
2045880 - CVE-2022-21698 prometheus/client_golang: Denial of service using InstrumentHandlerCounter
2046133 - [MAPO]IPI proxy installation failed
2046156 - Network policy: preview of affected pods for non-admin shows empty popup
2046157 - Still uses pod-security.admission.config.k8s.io/v1alpha1 in admission plugin config
2046191 - Opeartor pod is missing correct qosClass and priorityClass
2046277 - openshift-installer intermittent failure on AWS with "Error: Provider produced inconsistent result after apply" when creating the module.vpc.aws_subnet.private_subnet[0] resource
2046319 - oc debug cronjob command failed with error "unable to extract pod template from type v1.CronJob".
2046435 - Better Devfile Import Strategy support in the 'Import from Git' flow
2046496 - Awkward wrapping of project toolbar on mobile
2046497 - Re-enable TestMetricsEndpoint test case in console operator e2e tests
2046498 - "All Projects" and "all applications" use different casing on topology page
2046591 - Auto-update boot source is not available while create new template from it
2046594 - "Requested template could not be found" while creating VM from user-created template
2046598 - Auto-update boot source size unit is byte on customize wizard
2046601 - Cannot create VM from template
2046618 - Start last run action should contain current user name in the started-by annotation of the PLR
2046662 - Should upgrade the Go version to 1.17 for the example Go operator memcached-operator
2047197 - Should upgrade the operator_sdk.util version to "0.4.0" for the "osdk_metric" module
2047257 - [CP MIGRATION] Node drain failure during control plane node migration
2047277 - Storage status is missing from status card of virtualization overview
2047308 - Remove metrics and events for master port offsets
2047310 - Running VMs per template card needs empty state when no VMs exist
2047320 - New route annotation to show another URL or hide topology URL decorator doesn't work for Knative Services
2047335 - 'oc get project' caused 'Observed a panic: cannot deep copy core.NamespacePhase' when AllRequestBodies is used
2047362 - Removing prometheus UI access breaks origin test
2047445 - ovs-configure mis-detecting the ipv6 status on IPv4 only cluster causing Deployment failure
2047670 - Installer should pre-check that the hosted zone is not associated with the VPC and throw the error message.
2047702 - Issue described on bug #2013528 reproduced: mapi_current_pending_csr is always set to 1 on OpenShift Container Platform 4.8
2047710 - [OVN] ovn-dbchecker CrashLoopBackOff and sbdb jsonrpc unix socket receive error
2047732 - [IBM]Volume is not deleted after destroy cluster
2047741 - openshift-installer intermittent failure on AWS with "Error: Provider produced inconsistent result after apply" when creating the module.masters.aws_network_interface.master[1] resource
2047790 - [sig-network][Feature:Router] The HAProxy router should override the route host for overridden domains with a custom value [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2047799 - release-openshift-ocp-installer-e2e-aws-upi-4.9
2047870 - Prevent redundant queries of BIOS settings in HostFirmwareController
2047895 - Fix architecture naming in oc adm release mirror for aarch64
2047911 - e2e: Mock CSI tests fail on IBM ROKS clusters
2047913 - [sig-network][Feature:Router] The HAProxy router should override the route host for overridden domains with a custom value [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2047925 - [FJ OCP4.10 Bug]: IRONIC_KERNEL_PARAMS does not contain coreos_kernel_params during iPXE boot
2047935 - [4.11] Bootimage bump tracker
2047998 - [alicloud] CCM deploys alibaba-cloud-controller-manager from quay.io/openshift/origin-
2048059 - Service Level Agreement (SLA) always show 'Unknown'
2048067 - [IPI on Alibabacloud] "Platform Provisioning Check" tells '"ap-southeast-6": enhanced NAT gateway is not supported', which seems false
2048186 - Image registry operator panics when finalizes config deletion
2048214 - Can not push images to image-registry when enabling KMS encryption in AlibabaCloud
2048219 - MetalLB: User should not be allowed add same bgp advertisement twice in BGP address pool
2048221 - Capitalization of titles in the VM details page is inconsistent.
2048222 - [AWS GovCloud] Cluster can not be installed on AWS GovCloud regions via terminal interactive UI.
2048276 - Cypress E2E tests fail due to a typo in test-cypress.sh
2048333 - prometheus-adapter becomes inaccessible during rollout
2048352 - [OVN] node does not recover after NetworkManager restart, NotReady and unreachable
2048442 - [KMS] UI does not have option to specify kube auth path and namespace for cluster wide encryption
2048451 - Custom serviceEndpoints in install-config are reported to be unreachable when environment uses a proxy
2048538 - Network policies are not implemented or updated by OVN-Kubernetes
2048541 - incorrect rbac check for install operator quick starts
2048563 - Leader election conventions for cluster topology
2048575 - IP reconciler cron job failing on single node
2048686 - Check MAC address provided on the install-config.yaml file
2048687 - All bare metal jobs are failing now due to End of Life of centos 8
2048793 - Many Conformance tests are failing in OCP 4.10 with Kuryr
2048803 - CRI-O seccomp profile out of date
2048824 - [IBMCloud] ibm-vpc-block-csi-node does not specify an update strategy, only resource requests, or priority class
2048841 - [ovn] Missing lr-policy-list and snat rules for egressip when new pods are added
2048955 - Alibaba Disk CSI Driver does not have CI
2049073 - AWS EFS CSI driver should use the trusted CA bundle when cluster proxy is configured
2049078 - Bond CNI: Failed to attach Bond NAD to pod
2049108 - openshift-installer intermittent failure on AWS with 'Error: Error waiting for NAT Gateway (nat-xxxxx) to become available'
2049117 - e2e-metal-ipi-serial-ovn-ipv6 is failing frequently
2049133 - oc adm catalog mirror throws 'missing signature key' error when using file://local/index
2049142 - Missing "app" label
2049169 - oVirt CSI driver should use the trusted CA bundle when cluster proxy is configured
2049234 - ImagePull fails with error "unable to pull manifest from example.com/busy.box:v5 invalid reference format"
2049410 - external-dns-operator creates provider section, even when not requested
2049483 - Sidepanel for Connectors/workloads in topology shows invalid tabs
2049613 - MTU migration on SDN IPv4 causes API alerts
2049671 - system:serviceaccount:openshift-cluster-csi-drivers:aws-ebs-csi-driver-operator trying to GET and DELETE /api/v1/namespaces/openshift-cluster-csi-drivers/configmaps/kube-cloud-config which does not exist
2049687 - superfluous apirequestcount entries in audit log
2049775 - cloud-provider-config change not applied when ExternalCloudProvider enabled
2049787 - (dummy bug) ovn-kubernetes ExternalTrafficPolicy still SNATs
2049832 - ContainerCreateError when trying to launch large (>500) numbers of pods across nodes
2049872 - cluster storage operator AWS credentialsrequest lacks KMS privileges
2049889 - oc new-app --search nodejs warns about access to sample content on quay.io
2050005 - Plugin module IDs can clash with console module IDs causing runtime errors
2050011 - Observe > Metrics page: Timespan text input and dropdown do not align
2050120 - Missing metrics in kube-state-metrics
2050146 - Installation on PSI fails with: 'openstack platform does not have the required standard-attr-tag network extension'
2050173 - [aws-ebs-csi-driver] Merge upstream changes since v1.2.0
2050180 - [aws-efs-csi-driver] Merge upstream changes since v1.3.2
2050300 - panic in cluster-storage-operator while updating status
2050332 - Malformed ClusterClaim lifetimes cause the clusterclaims-controller to silently fail to reconcile all clusterclaims
2050335 - azure-disk failed to mount with error special device does not exist
2050345 - alert data for burn budget needs to be updated to prevent regression
2050407 - revert "force cert rotation every couple days for development" in 4.11
2050409 - ip-reconcile job is failing consistently
2050452 - Update osType and hardware version used by RHCOS OVA to indicate it is a RHEL 8 guest
2050466 - machine config update with invalid container runtime config should be more robust
2050637 - Blog Link not re-directing to the intended website in the last modal in the Dev Console Onboarding Tour
2050698 - After upgrading the cluster the console still shows 0 of N, 0% progress for worker nodes
2050707 - up test for prometheus pod looks too far in the past
2050767 - Vsphere upi tries to access vsphere during manifests generation phase
2050853 - CVE-2021-23566 nanoid: Information disclosure via valueOf() function
2050882 - Crio appears to be coredumping in some scenarios
2050902 - not all resources created during import have common labels
2050946 - Cluster-version operator fails to notice TechPreviewNoUpgrade featureSet change after initialization-lookup error
2051320 - Need to build ose-aws-efs-csi-driver-operator-bundle-container image for 4.11
2051333 - [aws] records in public hosted zone and BYO private hosted zone were not deleted.
2051377 - Unable to switch vfio-pci to netdevice in policy
2051378 - Template wizard is crashed when there are no templates existing
2051423 - migrate loadbalancers from amphora to ovn not working
2051457 - [RFE] PDB for cloud-controller-manager to avoid going too many replicas down
2051470 - prometheus: Add validations for relabel configs
2051558 - RoleBinding in project without subject is causing "Project access" page to fail
2051578 - Sort is broken for the Status and Version columns on the Cluster Settings > ClusterOperators page
2051583 - sriov must-gather image doesn't work
2051593 - Summary Interval Hardcoded in PTP Operator if Set in the Global Body Instead of Command Line
2051611 - Remove Check which enforces summary_interval must match logSyncInterval
2051642 - Remove "Tech-Preview" Label for the Web Terminal GA release
2051657 - Remove 'Tech preview' from minimal deployment Storage System creation
2051718 - MetaLLB: Validation Webhook: BGPPeer hold time is allowed to be set to less than 3s
2051722 - MetalLB: BGPPeer object does not have ability to set ebgpMultiHop
2051881 - [vSphere CSI driver Operator] RWX volumes counts metrics vsphere_rwx_volumes_total not valid
2051954 - Allow changing of policyAuditConfig ratelimit post-deployment
2051969 - Need to build local-storage-operator-metadata-container image for 4.11
2051985 - An APIRequestCount without dots in the name can cause a panic
2052016 - MetalLB: Webhook Validation: Two BGPPeers instances can have different router ID set.
2052034 - Can't start correct debug pod using pod definition yaml in OCP 4.8
2052055 - Whereabouts should implement client-go 1.22+
2052056 - Static pod installer should throttle creating new revisions
2052071 - local storage operator metrics target down after upgrade
2052095 - Infinite OAuth redirect loop post-upgrade to 4.10.0-rc.1
2052270 - FSyncControllerDegraded has "treshold" -> "threshold" typos
2052309 - [IBM Cloud] ibm-vpc-block-csi-controller does not specify an update strategy, priority class, or only resource requests
2052332 - Probe failures and pod restarts during 4.7 to 4.8 upgrade
2052393 - Failed to scaleup RHEL machine against OVN cluster due to jq tool is required by configure-ovs.sh
2052398 - 4.9 to 4.10 upgrade fails for ovnkube-masters
2052415 - Pod density test causing problems when using kube-burner
2052513 - Failing webhooks will block an upgrade to 4.10 mid-way through the upgrade.
2052578 - Create new app from a private git repository using 'oc new app' with basic auth does not work.
2052595 - Remove dev preview badge from IBM FlashSystem deployment windows
2052618 - Node reboot causes duplicate persistent volumes
2052671 - Add Sprint 214 translations
2052674 - Remove extra spaces
2052700 - kube-controller-manger should use configmap lease
2052701 - kube-scheduler should use configmap lease
2052814 - go fmt fails in OSM after migration to go 1.17
2052840 - IMAGE_BUILDER=docker make test-e2e-operator-ocp runs with podman instead of docker
2052953 - Observe dashboard always opens for last viewed workload instead of the selected one
2052956 - Installing virtualization operator duplicates the first action on workloads in topology
2052975 - High cpu load on Juniper Qfx5120 Network switches after upgrade to Openshift 4.8.26
2052986 - Console crashes when Mid cycle hook in Recreate strategy (edit deployment/deploymentConfig) selects Lifecycle strategy as "Tags the current image as an image stream tag if the deployment succeeds"
2053006 - [ibm]Operator storage PROGRESSING and DEGRADED is true during fresh install for ocp4.11
2053104 - [vSphere CSI driver Operator] hw_version_total metric update wrong value after upgrade nodes hardware version from vmx-13 to vmx-15
2053112 - nncp status is unknown when nnce is Progressing
2053118 - nncp Available condition reason should be exposed in oc get
2053168 - Ensure the core dynamic plugin SDK package has correct types and code
2053205 - ci-openshift-cluster-network-operator-master-e2e-agnostic-upgrade is failing most of the time
2053304 - Debug terminal no longer works in admin console
2053312 - requestheader IDP test doesn't wait for cleanup, causing high failure rates
2053334 - rhel worker scaleup playbook failed because missing some dependency of podman
2053343 - Cluster Autoscaler not scaling down nodes which seem to qualify for scale-down
2053491 - nmstate interprets interface names as float64 and subsequently crashes on state update
2053501 - Git import detection does not happen for private repositories
2053582 - inability to detect static lifecycle failure
2053596 - [IBM Cloud] Storage IOPS limitations and lack of IPI ETCD deployment options trigger leader election during cluster initialization
2053609 - LoadBalancer SCTP service leaves stale conntrack entry that causes issues if service is recreated
2053622 - PDB warning alert when CR replica count is set to zero
2053685 - Topology performance: Immutable .toJSON consumes a lot of CPU time when rendering a large topology graph (~100 nodes)
2053721 - When using RootDeviceHint rotational setting the host can fail to provision
2053922 - [OCP 4.8][OVN] pod interface: error while waiting on OVS.Interface.external-ids
2054095 - [release-4.11] Gather images.config.openshift.io cluster resource definition
2054197 - The ProjectHelmChartRepository schema has merged but has not been initialized in the cluster yet
2054200 - Custom created services in openshift-ingress removed even though the services are not of type LoadBalancer
2054238 - console-master-e2e-gcp-console is broken
2054254 - vSphere test failure: [Serial] [sig-auth][Feature:OAuthServer] [RequestHeaders] [IdP] test RequestHeaders IdP [Suite:openshift/conformance/serial]
2054285 - Services other than knative service also shows as KSVC in add subscription/trigger modal
2054319 - must-gather | gather_metallb_logs can't detect metallb pod
2054351 - Restart of ptp4l/phc2sys on change of PTPConfig happens more than once, socket error in event framework
2054385 - redhat-operator index image build failed with AMQ brew build - amq-interconnect-operator-metadata-container-1.10.13
2054564 - DPU network operator 4.10 branch need to sync with master
2054630 - cancel create silence from kebab menu of alerts page navigates to the previous page
2054693 - Error deploying HorizontalPodAutoscaler with oc new-app command in OpenShift 4
2054701 - [MAPO] Events are not created for MAPO machines
2054705 - [tracker] nf_reinject calls nf_queue_entry_free on an already freed entry->state
2054735 - Bad link in CNV console
2054770 - IPI baremetal deployment metal3 pod crashes when using capital letters in hosts bootMACAddress
2054787 - SRO controller goes to CrashLoopBackOff status when the pull-secret does not have the correct permissions
2054950 - A large number is showing on disk size field
2055305 - Thanos Querier high CPU and memory usage till OOM
2055386 - MetalLB changes the shared external IP of a service upon updating the externalTrafficPolicy definition
2055433 - Unable to create br-ex as gateway is not found
2055470 - Ingresscontroller LB scope change behaviour differs for different values of aws-load-balancer-internal annotation
2055492 - The default YAML on vm wizard is not latest
2055601 - installer did not destroy .app dns record in an IPI on ASH install
2055702 - Enable Serverless tests in CI
2055723 - CCM operator doesn't deploy resources after enabling TechPreviewNoUpgrade feature set.
2055729 - NodePerfCheck fires and stays active on momentary high latency
2055814 - Custom dynamic extension point causes runtime and compile time error
2055861 - cronjob collect-profiles failure leads node to reach OutOfpods status
2055980 - [dynamic SDK][internal] console plugin SDK does not support table actions
2056454 - Implement preallocated disks for oVirt in the cluster API provider
2056460 - Implement preallocated disks for oVirt in the OCP installer
2056496 - If image does not exists for builder image then upload jar form crashes
2056519 - unable to install IPI PRIVATE OpenShift cluster in Azure due to organization policies
2056607 - Running kubernetes-nmstate handler e2e tests stuck on OVN clusters
2056752 - Better to name the oc-mirror version info with more information, like oc version --client
2056802 - "enforcedLabelLimit|enforcedLabelNameLengthLimit|enforcedLabelValueLengthLimit" do not take effect
2056841 - [UI] [DR] Web console update is available pop-up is seen multiple times on Hub cluster where ODF operator is not installed and unnecessarily it pop-up on the Managed cluster as well where ODF operator is installed
2056893 - incorrect warning for --to-image in oc adm upgrade help
2056967 - MetalLB: speaker metrics is not updated when deleting a service
2057025 - Resource requests for the init-config-reloader container of prometheus-k8s- pods are too high
2057054 - SDK: k8s methods resolves into Response instead of the Resource
2057079 - [cluster-csi-snapshot-controller-operator] CI failure: events should not repeat pathologically
2057101 - oc commands working with images print an incorrect and inappropriate warning
2057160 - configure-ovs selects wrong interface on reboot
2057183 - OperatorHub: Missing "valid subscriptions" filter
2057251 - response code for Pod count graph changed from 422 to 200 periodically for about 30 minutes if pod is rescheduled
2057358 - [Secondary Scheduler] - cannot build bundle index image using the secondary scheduler operator bundle
2057387 - [Secondary Scheduler] - olm.skiprange, com.redhat.openshift.versions is incorrect and no minkubeversion
2057403 - CMO logs show forbidden: User "system:serviceaccount:openshift-monitoring:cluster-monitoring-operator" cannot get resource "replicasets" in API group "apps" in the namespace "openshift-monitoring"
2057495 - Alibaba Disk CSI driver does not provision small PVCs
2057558 - Marketplace operator polls too frequently for cluster operator status changes
2057633 - oc rsync reports misleading error when container is not found
2057642 - ClusterOperator status.conditions[].reason "etcd disk metrics exceeded..." should be a CamelCase slug
2057644 - FSyncControllerDegraded latches True, even after fsync latency recovers on all members
2057696 - Removing console still blocks OCP install from completing
2057762 - ingress operator should report Upgradeable False to remind user before upgrade to 4.10 when Non-SAN certs are used
2057832 - expr for record rule: "cluster:telemetry_selected_series:count" is improper
2057967 - KubeJobCompletion does not account for possible job states
2057990 - Add extra debug information to image signature workflow test
2057994 - SRIOV-CNI failed to load netconf: LoadConf(): failed to get VF information
2058030 - On OCP 4.10+ using OVNK8s on BM IPI, nodes register as localhost.localdomain
2058217 - [vsphere-problem-detector-operator] 'vsphere_rwx_volumes_total' metric name is confusing
2058225 - openshift_csi_share_ metrics are not found from telemeter server
2058282 - Websockets stop updating during cluster upgrades
2058291 - CI builds should have correct version of Kube without needing to push tags everytime
2058368 - Openshift OVN-K got restarted multiple times with the error " ovsdb-server/memory-trim-on-compaction on'' failed: exit status 1 and " ovndbchecker.go:118] unable to turn on memory trimming for SB DB, stderr " , cluster unavailable
2058370 - e2e-aws-driver-toolkit CI job is failing
2058421 - 4.9.23-s390x-machine-os-content manifest invalid when mirroring content for disconnected install
2058424 - ConsolePlugin proxy always passes Authorization header even if authorize property is omitted or false
2058623 - Bootstrap server dropdown menu in Create Event Source- KafkaSource form is empty even if it's created
2058626 - Multiple Azure upstream kube fsgroupchangepolicy tests are permafailing expecting gid "1000" but getting "root"
2058671 - whereabouts IPAM CNI ip-reconciler cronjob specification requires hostnetwork, api-int lb usage & proper backoff
2058692 - [Secondary Scheduler] Creating secondaryscheduler instance fails with error "key failed with : secondaryschedulers.operator.openshift.io "secondary-scheduler" not found"
2059187 - [Secondary Scheduler] - key failed with : serviceaccounts "secondary-scheduler" is forbidden
2059212 - [tracker] Backport https://github.com/util-linux/util-linux/commit/eab90ef8d4f66394285e0cff1dfc0a27242c05aa
2059213 - ART cannot build installer images due to missing terraform binaries for some architectures
2059338 - A fully upgraded 4.10 cluster defaults to HW-13 hardware version even if HW-15 is default (and supported)
2059490 - The operator image in CSV file of the ART DPU network operator bundle is incorrect
2059567 - vMedia based IPI installation of OpenShift fails on Nokia servers due to issues with virtual media attachment and boot source override
2059586 - (release-4.11) Insights operator doesn't reconcile clusteroperator status condition messages
2059654 - Dynamic demo plugin proxy example out of date
2059674 - Demo plugin fails to build
2059716 - cloud-controller-manager flaps operator version during 4.9 -> 4.10 update
2059791 - [vSphere CSI driver Operator] didn't update 'vsphere_csi_driver_error' metric value when fixed the error manually
2059840 - [LSO]Could not gather logs for pod diskmaker-discovery and diskmaker-manager
2059943 - MetalLB: Move CI config files to metallb repo from dev-scripts repo
2060037 - Configure logging level of FRR containers
2060083 - CMO doesn't react to changes in clusteroperator console
2060091 - CMO produces invalid alertmanager statefulset if console cluster .status.consoleURL is unset
2060133 - [OVN RHEL upgrade] could not find IP addresses: failed to lookup link br-ex: Link not found
2060147 - RHEL8 Workers Need to Ensure libseccomp is up to date at install time
2060159 - LGW: External->Service of type ETP=Cluster doesn't go to the node
2060329 - Detect unsupported amount of workloads before rendering a lazy or crashing topology
2060334 - Azure VNET lookup fails when the NIC subnet is in a different resource group
2060361 - Unable to enumerate NICs due to missing the 'primary' field due to security restrictions
2060406 - Test 'operators should not create watch channels very often' fails
2060492 - Update PtpConfigSlave source-crs to use network_transport L2 instead of UDPv4
2060509 - Incorrect installation of ibmcloud vpc csi driver in IBM Cloud ROKS 4.10
2060532 - LSO e2e tests are run against default image and namespace
2060534 - openshift-apiserver pod in crashloop due to unable to reach kubernetes svc ip
2060549 - ErrorAddingLogicalPort: duplicate IP found in ECMP Pod route cache!
2060553 - service domain can't be resolved when networkpolicy is used in OCP 4.10-rc
2060583 - Remove Console internal-kubevirt plugin SDK package
2060605 - Broken access to public images: Unable to connect to the server: no basic auth credentials
2060617 - IBMCloud destroy DNS regex not strict enough
2060687 - Azure Ci: SubscriptionDoesNotSupportZone - does not support availability zones at location 'westus'
2060697 - [AWS] partitionNumber cannot work for specifying Partition number
2060714 - [DOCS] Change source_labels to sourceLabels in "Configuring remote write storage" section
2060837 - [oc-mirror] Catalog merging error when two or more bundles do not have a set Replace field
2060894 - Preceding/Trailing Whitespaces In Form Elements on the add page
2060924 - Console white-screens while using debug terminal
2060968 - Installation failing due to ironic-agent.service not starting properly
2060970 - Bump recommended FCOS to 35.20220213.3.0
2061002 - Conntrack entry is not removed for LoadBalancer IP
2061301 - Traffic Splitting Dialog is Confusing With Only One Revision
2061303 - Cachito request failure with vendor directory is out of sync with go.mod/go.sum
2061304 - workload info gatherer - don't serialize empty images map
2061333 - White screen for Pipeline builder page
2061447 - [GSS] local pv's are in terminating state
2061496 - etcd RecentBackup=Unknown ControllerStarted contains no message string
2061527 - [IBMCloud] infrastructure asset missing CloudProviderType
2061544 - AzureStack is hard-coded to use Standard_LRS for the disk type
2061549 - AzureStack install with internal publishing does not create api DNS record
2061611 - [upstream] The marker of KubeBuilder doesn't work if it is close to the code
2061732 - Cinder CSI crashes when API is not available
2061755 - Missing breadcrumb on the resource creation page
2061833 - A single worker can be assigned to multiple baremetal hosts
2061891 - [IPI on IBMCLOUD] missing 'br-sao' region in openshift installer
2061916 - mixed ingress and egress policies can result in half-isolated pods
2061918 - Topology Sidepanel style is broken
2061919 - Egress Ip entry stays on node's primary NIC post deletion from hostsubnet
2062007 - MCC bootstrap command lacks template flag
2062126 - IPfailover pod is crashing during creation showing keepalived_script doesn't exist
2062151 - Add RBAC for 'infrastructures' to operator bundle
2062355 - kubernetes-nmstate resources and logs not included in must-gathers
2062459 - Ingress pods scheduled on the same node
2062524 - [Kamelet Sink] Topology crashes on click of Event sink node if the resource is created source to Uri over ref
2062558 - Egress IP with openshift sdn is not functional on worker node.
2062568 - CVO does not trigger new upgrade again after fail to update to unavailable payload
2062645 - configure-ovs: don't restart networking if not necessary
2062713 - Special Resource Operator(SRO) - No sro_used_nodes metric
2062849 - hw event proxy is not binding on ipv6 local address
2062920 - Project selector is too tall with only a few projects
2062998 - AWS GovCloud regions are recognized as unknown regions
2063047 - Configuring a full-path query log file in CMO breaks Prometheus with the latest version of the operator
2063115 - ose-aws-efs-csi-driver has invalid dependency in go.mod
2063164 - metal-ipi-ovn-ipv6 Job Permafailing and Blocking OpenShift 4.11 Payloads: insights operator is not available
2063183 - DefragDialTimeout is set to low for large scale OpenShift Container Platform - Cluster
2063194 - cluster-autoscaler-default will fail when automated etcd defrag is running on large scale OpenShift Container Platform 4 - Cluster
2063321 - [OVN]After reboot egress node, lr-policy-list was not correct, some duplicate records or missed internal IPs
2063324 - MCO template output directories created with wrong mode causing render failure in unprivileged container environments
2063375 - ptp operator upgrade from 4.9 to 4.10 stuck at pending due to service account requirements not met
2063414 - on OKD 4.10, when image-registry is enabled, the /etc/hosts entry is missing on some nodes
2063699 - Builds - Builds - Logs: i18n misses.
2063708 - Builds - Builds - Logs: translation correction needed.
2063720 - Metallb EBGP neighbor stuck in active until adding ebgp-multihop (directly connected neighbors)
2063732 - Workloads - StatefulSets : I18n misses
2063747 - When building a bundle, the push command fails because it passes a redundant "IMG=" on the CLI
2063753 - User Preferences - Language - Language selection : Page refresh required to change the UI into selected Language.
2063756 - User Preferences - Applications - Insecure traffic : i18n misses
2063795 - Remove go-ovirt-client go.mod replace directive
2063829 - During an IPI install with the 4.10.4 installer on vSphere, getting "Check": platform.vsphere.network: Invalid value: "VLAN_3912": unable to find network provided"
2063831 - etcd quorum pods landing on same node
2063897 - Community tasks not shown in pipeline builder page
2063905 - PrometheusOperatorWatchErrors alert may fire shortly in case of transient errors from the API server
2063938 - Using the hard coded rest-mapper in library-go
2063955 - cannot download operator catalogs due to missing images
2063957 - User Management - Users : While Impersonating user, UI is not switching into user's set language
2064024 - SNO OCP upgrade with DU workload stuck at waiting for kube-apiserver static pod
2064170 - [Azure] Missing punctuation in the installconfig.controlPlane.platform.azure.osDisk explain
2064239 - Virtualization Overview page turns into blank page
2064256 - The Knative traffic distribution doesn't update percentage in sidebar
2064553 - UI should prefer to use the virtio-win configmap than v2v-vmware configmap for windows creation
2064596 - Fix the hubUrl docs link in pipeline quicksearch modal
2064607 - Pipeline builder makes too many (100+) API calls upfront
2064613 - [OCPonRHV]- after few days that cluster is alive we got error in storage operator
2064693 - [IPI][OSP] Openshift-install fails to find the shiftstack cloud defined in clouds.yaml in the current directory
2064702 - CVE-2022-27191 golang: crash in a golang.org/x/crypto/ssh server
2064705 - the alertmanagerconfig validation catches the wrong value for invalid field
2064744 - Errors trying to use the Debug Container feature
2064984 - Update error message for label limits
2065076 - Access monitoring Routes based on monitoring-shared-config creates wrong URL
2065160 - Possible leak of load balancer targets on AWS Machine API Provider
2065224 - Configuration for cloudFront in image-registry operator configuration is ignored & duration is corrupted
2065290 - CVE-2021-23648 sanitize-url: XSS
2065338 - VolumeSnapshot creation date sorting is broken
2065507 - oc adm upgrade should return ReleaseAccepted condition to show upgrade status.
2065510 - [AWS] failed to create cluster on ap-southeast-3
2065513 - Dev Perspective -> Project Dashboard shows Resource Quotas which are a bit misleading, and too many decimal places
2065547 - (release-4.11) Gather kube-controller-manager pod logs with garbage collector errors
2065552 - [AWS] Failed to install cluster on AWS ap-southeast-3 region due to image-registry panic error
2065577 - user with user-workload-monitoring-config-edit role can not create user-workload-monitoring-config configmap
2065597 - Cinder CSI is not configurable
2065682 - Remote write relabel config adds label __tmp_openshift_cluster_id to all metrics
2065689 - Internal Image registry with GCS backend does not redirect client
2065749 - Kubelet slowly leaking memory and pods eventually unable to start
2065785 - ip-reconciler job does not complete, halts node drain
2065804 - Console backend check for Web Terminal Operator incorrectly returns HTTP 204
2065806 - stop considering Mint mode as supported on Azure
2065840 - the cronjob object is created with a wrong api version batch/v1beta1 when created via the openshift console
2065893 - [4.11] Bootimage bump tracker
2066009 - CVE-2021-44906 minimist: prototype pollution
2066232 - e2e-aws-workers-rhel8 is failing on ansible check
2066418 - [4.11] Update channels information link is taking to a 404 error page
2066444 - The "ingress" clusteroperator's relatedObjects field has kind names instead of resource names
2066457 - Prometheus CI failure: 503 Service Unavailable
2066463 - [IBMCloud] failed to list DNS zones: Exactly one of ApiKey or RefreshToken must be specified
2066605 - coredns template block matches cluster API too loosely
2066615 - Downstream OSDK still use upstream image for Hybird type operator
2066619 - The GitCommit of the oc-mirror version is not correct
2066665 - [ibm-vpc-block] Unable to change default storage class
2066700 - [node-tuning-operator] - Minimize wildcard/privilege Usage in Cluster and Local Roles
2066754 - Cypress reports for core tests are not captured
2066782 - Attached disk keeps in loading status when add disk to a power off VM by non-privileged user
2066865 - Flaky test: In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
2066886 - openshift-apiserver pods never going NotReady
2066887 - Dependabot alert: Path traversal in github.com/valyala/fasthttp
2066889 - Dependabot alert: Path traversal in github.com/valyala/fasthttp
2066923 - No rule to make target 'docker-push' when building the SRO bundle
2066945 - SRO appends "arm64" instead of "aarch64" to the kernel name and it doesn't match the DTK
2067004 - CMO contains grafana image though grafana is removed
2067005 - Prometheus rule contains grafana though grafana is removed
2067062 - should update prometheus-operator resources version
2067064 - RoleBinding in Developer Console is dropping all subjects when editing
2067155 - Incorrect operator display name shown in pipelines quickstart in devconsole
2067180 - Missing i18n translations
2067298 - Console 4.10 operand form refresh
2067312 - PPT event source is lost when received by the consumer
2067384 - OCP 4.10 should be firing APIRemovedInNextEUSReleaseInUse for APIs removed in 1.25
2067456 - OCP 4.11 should be firing APIRemovedInNextEUSReleaseInUse and APIRemovedInNextReleaseInUse for APIs removed in 1.25
2067995 - Internal registries with a big number of images delay pod creation due to recursive SELinux file context relabeling
2068115 - resource tab extension fails to show up
2068148 - [4.11] /etc/redhat-release symlink is broken
2068180 - OCP UPI on AWS with STS enabled is breaking the Ingress operator
2068181 - Event source powered with kamelet type source doesn't show associated deployment in resources tab
2068490 - OLM descriptors integration test failing
2068538 - Crashloop back-off popover visual spacing defects
2068601 - Potential etcd inconsistent revision and data occurs
2068613 - ClusterRoleUpdated/ClusterRoleBindingUpdated Spamming Event Logs
2068908 - Manual blog link change needed
2069068 - reconciling Prometheus Operator Deployment failed while upgrading from 4.7.46 to 4.8.35
2069075 - [Alibaba 4.11.0-0.nightly] cluster storage component in Progressing state
2069181 - Disabling community tasks is not working
2069198 - Flaky CI test in e2e/pipeline-ci
2069307 - oc mirror hangs when processing the Red Hat 4.10 catalog
2069312 - extend rest mappings with 'job' definition
2069457 - Ingress operator has superfluous finalizer deletion logic for LoadBalancer-type services
2069577 - ConsolePlugin example proxy authorize is wrong
2069612 - Special Resource Operator (SRO) - Crash when nodeSelector does not match any nodes
2069632 - Not able to download previous container logs from console
2069643 - ConfigMaps leftovers while uninstalling SpecialResource with configmap
2069654 - Creating VMs with YAML on Openshift Virtualization UI is missing labels flavor, os and workload
2069685 - UI crashes on load if a pinned resource model does not exist
2069705 - prometheus target "serviceMonitor/openshift-metallb-system/monitor-metallb-controller/0" has a failure with "server returned HTTP status 502 Bad Gateway"
2069740 - On-prem loadbalancer ports conflict with kube node port range
2069760 - In developer perspective divider does not show up in navigation
2069904 - Sync upstream 1.18.1 downstream
2069914 - Application Launcher groupings are not case-sensitive
2069997 - [4.11] should add user containers in /etc/subuid and /etc/subgid to support run pods in user namespaces
2070000 - Add warning alerts for installing standalone k8s-nmstate
2070020 - InContext doesn't work for Event Sources
2070047 - Kuryr: Prometheus when installed on the cluster shouldn't report any alerts in firing state apart from Watchdog and AlertmanagerReceiversNotConfigured
2070160 - Copy-to-clipboard and <pre> elements cause display issues for ACM dynamic plugins
2070172 - SRO uses the chart's name as Helm release, not the SpecialResource's
2070181 - [MAPO] serverGroupName ignored
2070457 - Image vulnerability Popover overflows from the visible area
2070674 - [GCP] Routes get timed out and nonresponsive after creating 2K service routes
2070703 - some ipv6 network policy tests consistently failing
2070720 - [UI] Filter reset doesn't work on Pods/Secrets/etc pages and complete list disappears
2070731 - details switch label is not clickable on add page
2070791 - [GCP]Image registry are crash on cluster with GCP workload identity enabled
2070792 - service "openshift-marketplace/marketplace-operator-metrics" is not annotated with capability
2070805 - ClusterVersion: could not download the update
2070854 - cv.status.capabilities.enabledCapabilities doesn't show the day-2 enabled caps when there are errors on resources update
2070887 - Cv condition ImplicitlyEnabledCapabilities doesn't complain about the disabled capabilities which is previously enabled
2070888 - Cannot bind driver vfio-pci when apply sriovnodenetworkpolicy with type vfio-pci
2070929 - OVN-Kubernetes: EgressIP breaks access from a pod with EgressIP to other host networked pods on different nodes
2071019 - rebase vsphere csi driver 2.5
2071021 - vsphere driver has snapshot support missing
2071033 - conditionally relabel volumes given annotation not working - SELinux context match is wrong
2071139 - Ingress pods scheduled on the same node
2071364 - All image building tests are broken with " error: build error: attempting to convert BUILD_LOGLEVEL env var value "" to integer: strconv.Atoi: parsing "": invalid syntax
2071578 - Monitoring navigation should not be shown if monitoring is not available (CRC)
2071599 - RoleBidings are not getting updated for ClusterRole in OpenShift Web Console
2071614 - Updating EgressNetworkPolicy rejecting with error UnsupportedMediaType
2071617 - remove Kubevirt extensions in favour of dynamic plugin
2071650 - ovn-k ovn_db_cluster metrics are not exposed for SNO
2071691 - OCP Console global PatternFly overrides adds padding to breadcrumbs
2071700 - v1 events show "Generated from" message without the source/reporting component
2071715 - Shows 404 on Environment nav in Developer console
2071719 - OCP Console global PatternFly overrides link button whitespace
2071747 - Link to documentation from the overview page goes to a missing link
2071761 - Translation Keys Are Not Namespaced
2071799 - Multus CNI should exit cleanly on CNI DEL when the API server is unavailable
2071859 - ovn-kube pods spec.dnsPolicy should be Default
2071914 - cloud-network-config-controller 4.10.5: Error building cloud provider client, err: %vfailed to initialize Azure environment: autorest/azure: There is no cloud environment matching the name ""
2071998 - Cluster-version operator should share details of signature verification when it fails in 'Force: true' updates
2072106 - cluster-ingress-operator tests do not build on go 1.18
2072134 - Routes are not accessible within cluster from hostnet pods
2072139 - vsphere driver has permissions to create/update PV objects
2072154 - Secondary Scheduler operator panics
2072171 - Test "[sig-network][Feature:EgressFirewall] EgressFirewall should have no impact outside its namespace [Suite:openshift/conformance/parallel]" fails
2072195 - machine api doesn't issue client cert when AWS DNS suffix missing
2072215 - Whereabouts ip-reconciler should be opt-in and not required
2072389 - CVO exits upgrade immediately rather than waiting for etcd backup
2072439 - openshift-cloud-network-config-controller reports wrong range of IP addresses for Azure worker nodes
2072455 - make bundle overwrites supported-nic-ids_v1_configmap.yaml
2072570 - The namespace titles for operator-install-single-namespace test keep changing
2072710 - Perfscale - pods time out waiting for OVS port binding (ovn-installed)
2072766 - Cluster Network Operator stuck in CrashLoopBackOff when scheduled to same master
2072780 - OVN kube-master does not clear NetworkUnavailableCondition on GCP BYOH Windows node
2072793 - Drop "Used Filesystem" from "Virtualization -> Overview"
2072805 - Observe > Dashboards: $__range variables cause PromQL query errors
2072807 - Observe > Dashboards: Missing panel.styles attribute for table panels causes JS error
2072842 - (release-4.11) Gather namespace names with overlapping UID ranges
2072883 - sometimes monitoring dashboards charts can not be loaded successfully
2072891 - Update gcp-pd-csi-driver to 1.5.1
2072911 - panic observed in kubedescheduler operator
2072924 - periodic-ci-openshift-release-master-ci-4.11-e2e-azure-techpreview-serial
2072957 - ContainerCreateError loop leads to several thousand empty logfiles in the file system
2072998 - update aws-efs-csi-driver to the latest version
2072999 - Navigate from logs of selected Tekton task instead of last one
2073021 - [vsphere] Failed to update OS on master nodes
2073112 - Prometheus (uwm) externalLabels not showing always in alerts.
2073113 - Warning is logged to the console: W0407 Defaulting of registry auth file to "${HOME}/.docker/config.json" is deprecated.
2073176 - removing data in form does not remove data from yaml editor
2073197 - Error in Spoke/SNO agent: Source image rejected: A signature was required, but no signature exists
2073329 - Pipelines-plugin- Having different title for Pipeline Runs tab, on Pipeline Details page it's "PipelineRuns" and on Repository Details page it's "Pipeline Runs".
2073373 - Update azure-disk-csi-driver to 1.16.0
2073378 - failed egressIP assignment - cloud-network-config-controller does not delete failed cloudprivateipconfig
2073398 - machine-api-provider-openstack does not clean up OSP ports after failed server provisioning
2073436 - Update azure-file-csi-driver to v1.14.0
2073437 - Topology performance: Firehose/useK8sWatchResources cache can return unexpected data format if isList differs on multiple calls
2073452 - [sig-network] pods should successfully create sandboxes by other - failed (add)
2073473 - [OVN SCALE][ovn-northd] Unnecessary SB record no-op changes added to SB transaction.
2073522 - Update ibm-vpc-block-csi-driver to v4.2.0
2073525 - Update vpc-node-label-updater to v4.1.2
2073901 - Installation failed due to etcd operator Err:DefragControllerDegraded: failed to dial endpoint https://10.0.0.7:2379 with maintenance client: context canceled
2073937 - Invalid retention time and invalid retention size should be validated at one place and have error log in one place for UMW
2073938 - APIRemovedInNextEUSReleaseInUse alert for runtimeclasses
2073945 - APIRemovedInNextEUSReleaseInUse alert for podsecuritypolicies
2073972 - Invalid retention time and invalid retention size should be validated at one place and have error log in one place for platform monitoring
2074009 - [OVN] ovn-northd doesn't clean Chassis_Private record after scale down to 0 a machineSet
2074031 - Admins should be able to tune garbage collector aggressiveness (GOGC) for kube-apiserver if necessary
2074062 - Node Tuning Operator(NTO) - Cloud provider profile rollback doesn't work well
2074084 - CMO metrics not visible in the OCP webconsole UI
2074100 - CRD filtering according to name broken
2074210 - asia-south2, australia-southeast2, and southamerica-west1 missing from GCP regions
2074237 - oc new-app --image-stream flag behavior is unclear
2074243 - DefaultPlacement API allow empty enum value and remove default
2074447 - cluster-dashboard: CPU Utilisation iowait and steal
2074465 - PipelineRun fails in import from Git flow if "main" branch is default
2074471 - Cannot delete namespace with a LB type svc and Kuryr when ExternalCloudProvider is enabled
2074475 - [e2e][automation] kubevirt plugin cypress tests fail
2074483 - coreos-installer doesnt work on Dell machines
2074544 - e2e-metal-ipi-ovn-ipv6 failing due to recent CEO changes
2074585 - MCG standalone deployment page goes blank when the KMS option is enabled
2074606 - occm does not have permissions to annotate SVC objects
2074612 - Operator fails to install due to service name lookup failure
2074613 - nodeip-configuration container incorrectly attempts to relabel /etc/systemd/system
2074635 - Unable to start Web Terminal after deleting existing instance
2074659 - AWS installconfig ValidateForProvisioning always provides blank values to validate zone records
2074706 - Custom EC2 endpoint is not considered by AWS EBS CSI driver
2074710 - Transition to go-ovirt-client
2074756 - Namespace column provide wrong data in ClusterRole Details -> Rolebindings tab
2074767 - Metrics page show incorrect values due to metrics level config
2074807 - NodeFilesystemSpaceFillingUp alert fires even before kubelet GC kicks in
2074902 - oc debug node/nodename -- chroot /host somecommand should exit with non-zero when the sub-command failed
2075015 - etcd-guard connection refused event repeating pathologically (payload blocking)
2075024 - Metal upgrades permafailing on metal3 containers crash looping
2075050 - oc-mirror fails to calculate between two channels with different prefixes for the same version of OCP
2075091 - Symptom Detection.Undiagnosed panic detected in pod
2075117 - Developer catalog: Order dropdown (A-Z, Z-A) is miss-aligned (in a separate row)
2075149 - Trigger Translations When Extensions Are Updated
2075189 - Imports from dynamic-plugin-sdk lead to failed module resolution errors
2075459 - Set up cluster on aws with rootvolumn io2 failed due to no iops despite it being configured
2075475 - OVN-Kubernetes: egress router pod (redirect mode), access from pod on different worker-node (redirect) doesn't work
2075478 - Bump documentationBaseURL to 4.11
2075491 - nmstate operator cannot be upgraded on SNO
2075575 - Local Dev Env - Prometheus 404 Call errors spam the console
2075584 - improve clarity of build failure messages when using csi shared resources but tech preview is not enabled
2075592 - Regression - Top of the web terminal drawer is missing a stroke/dropshadow
2075621 - Cluster upgrade.[sig-mco] Machine config pools complete upgrade
2075647 - 'oc adm upgrade ...' POSTs ClusterVersion, clobbering any unrecognized spec properties
2075671 - Cluster Ingress Operator K8S API cache contains duplicate objects
2075778 - Fix failing TestGetRegistrySamples test
2075873 - Bump recommended FCOS to 35.20220327.3.0
2076193 - oc patch command for the liveness probe and readiness probe parameters of an OpenShift router deployment doesn't take effect
2076270 - [OCPonRHV] MachineSet scale down operation fails to delete the worker VMs
2076277 - [RFE] [OCPonRHV] Add storage domain ID value to Compute/ControlPlain section in the machine object
2076290 - PTP operator readme missing documentation on BC setup via PTP config
2076297 - Router process ignores shutdown signal while starting up
2076323 - OLM blocks all operator installs if an openshift-marketplace catalogsource is unavailable
2076355 - The KubeletConfigController wrongly process multiple confs for a pool after having kubeletconfig in bootstrap
2076393 - [VSphere] survey fails to list datacenters
2076521 - Nodes in the same zone are not updated in the right order
2076527 - Pipeline Builder: Make unnecessary tekton hub API calls when the user types 'too fast'
2076544 - Whitespace (padding) is missing after an PatternFly update, already in 4.10
2076553 - Project access view replace group ref with user ref when updating their Role
2076614 - Missing Events component from the SDK API
2076637 - Configure metrics for vsphere driver to be reported
2076646 - openshift-install destroy unable to delete PVC disks in GCP if cluster identifier is longer than 22 characters
2076793 - CVO exits upgrade immediately rather than waiting for etcd backup
2076831 - [ocp4.11]Mem/cpu high utilization by apiserver/etcd for cluster stayed 10 hours
2076877 - network operator tracker to switch to use flowcontrol.apiserver.k8s.io/v1beta2 instead v1beta1 to be deprecated in k8s 1.26
2076880 - OKD: add cluster domain to the uploaded vm configs so that 30-local-dns-prepender can use it
2076975 - Metric unset during static route conversion in configure-ovs.sh
2076984 - TestConfigurableRouteNoConsumingUserNoRBAC fails in CI
2077050 - OCP should default to pd-ssd disk type on GCP
2077150 - Breadcrumbs on a few screens don't have correct top margin spacing
2077160 - Update owners for openshift/cluster-etcd-operator
2077357 - [release-4.11] 200ms packet delay with OVN controller turn on
2077373 - Accessibility warning on developer perspective
2077386 - Import page shows untranslated values for the route advanced routing>security options (devconsole~Edge)
2077457 - failure in test case "[sig-network][Feature:Router] The HAProxy router should serve the correct routes when running with the haproxy config manager"
2077497 - Rebase etcd to 3.5.3 or later
2077597 - machine-api-controller is not taking the proxy configuration when it needs to reach the RHV API
2077599 - OCP should alert users if they are on vsphere version <7.0.2
2077662 - AWS Platform Provisioning Check incorrectly identifies record as part of domain of cluster
2077797 - LSO pods don't have any resource requests
2077851 - "make vendor" target is not working
2077943 - If there is a service with multiple ports, and the route uses 8080, when editing the 8080 port isn't replaced, but a random port gets replaced and 8080 still stays
2077994 - Publish RHEL CoreOS AMIs in AWS ap-southeast-3 region
2078013 - drop multipathd.socket workaround
2078375 - When using the wizard with template using data source the resulting vm use pvc source
2078396 - [OVN AWS] EgressIP was not balanced to another egress node after original node was removed egress label
2078431 - [OCPonRHV] - ERROR failed to instantiate provider "openshift/local/ovirt" to obtain schema: ERROR fork/exec
2078526 - Multicast breaks after master node reboot/sync
2078573 - SDN CNI -Fail to create nncp when vxlan is up
2078634 - CRI-O not killing Calico CNI stalled (zombie) processes.
2078698 - search box may not completely remove content
2078769 - Different not translated filter group names (incl. Secret, Pipeline, PIpelineRun)
2078778 - [4.11] oc get ValidatingWebhookConfiguration,MutatingWebhookConfiguration fails and caused "apiserver panic'd...http2: panic serving xxx.xx.xxx.21:49748: cannot deep copy int" when AllRequestBodies audit-profile is used.
2078781 - PreflightValidation does not handle multiarch images
2078866 - [BM][IPI] Installation with bonds fail - DaemonSet "openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress
2078875 - OpenShift Installer fail to remove Neutron ports
2078895 - [OCPonRHV]-"cow" unsupported value in format field in install-config.yaml
2078910 - CNO spitting out ".spec.groups[0].rules[4].runbook_url: field not declared in schema"
2078945 - Ensure only one apiserver-watcher process is active on a node.
2078954 - network-metrics-daemon makes costly global pod list calls scaling per node
2078969 - Avoid update races between old and new NTO operands during cluster upgrades
2079012 - egressIP not migrated to correct workers after deleting machineset it was assigned
2079062 - Test for console demo plugin toast notification needs to be increased for ci testing
2079197 - [RFE] alert when more than one default storage class is detected
2079216 - Partial cluster update reference doc link returns 404
2079292 - containers prometheus-operator/kube-rbac-proxy violate PodSecurity
2079315 - (release-4.11) Gather ODF config data with Insights
2079422 - Deprecated 1.25 API call
2079439 - OVN Pods Assigned Same IP Simultaneously
2079468 - Enhance the waitForIngressControllerCondition for better CI results
2079500 - okd-baremetal-install uses fcos for bootstrap but rhcos for cluster
2079610 - Opeatorhub status shows errors
2079663 - change default image features in RBD storageclass
2079673 - Add flags to disable migrated code
2079685 - Storageclass creation page with "Enable encryption" is not displaying saved KMS connection details when vaulttenantsa details are available in csi-kms-details config
2079724 - cluster-etcd-operator - disable defrag-controller as there is unpredictable impact on large OpenShift Container Platform 4 - Cluster
2079788 - Operator restarts while applying the acm-ice example
2079789 - cluster drops ImplicitlyEnabledCapabilities during upgrade
2079803 - Upgrade-triggered etcd backup will be skip during serial upgrade
2079805 - Secondary scheduler operator should comply to restricted pod security level
2079818 - Developer catalog installation overlay (modal?) shows a duplicated padding
2079837 - [RFE] Hub/Spoke example with daemonset
2079844 - EFS cluster csi driver status stuck in AWSEFSDriverCredentialsRequestControllerProgressing with sts installation
2079845 - The Event Sinks catalog page now has a blank space on the left
2079869 - Builds for multiple kernel versions should be ran in parallel when possible
2079913 - [4.10] APIRemovedInNextEUSReleaseInUse alert for OVN endpointslices
2079961 - The search results accordion has no spacing between it and the side navigation bar.
2079965 - [rebase v1.24] [sig-node] PodOSRejection [NodeConformance] Kubelet should reject pod when the node OS doesn't match pod's OS [Suite:openshift/conformance/parallel] [Suite:k8s]
2080054 - TAGS arg for installer-artifacts images is not propagated to build images
2080153 - aws-load-balancer-operator-controller-manager pod stuck in ContainerCreating status
2080197 - etcd leader changes produce test churn during early stage of test
2080255 - EgressIP broken on AWS with OpenShiftSDN / latest nightly build
2080267 - [Fresh Installation] Openshift-machine-config-operator namespace is flooded with events related to clusterrole, clusterrolebinding
2080279 - CVE-2022-29810 go-getter: writes SSH credentials into logfile, exposing sensitive credentials to local uses
2080379 - Group all e2e tests as parallel or serial
2080387 - Visual connector not appear between the node if a node get created using "move connector" to a different application
2080416 - oc bash-completion problem
2080429 - CVO must ensure non-upgrade related changes are saved when desired payload fails to load
2080446 - Sync ironic images with latest bug fixes packages
2080679 - [rebase v1.24] [sig-cli] test failure
2080681 - [rebase v1.24] [sig-cluster-lifecycle] CSRs from machines that are not recognized by the cloud provider are not approved [Suite:openshift/conformance/parallel]
2080687 - [rebase v1.24] [sig-network][Feature:Router] tests are failing
2080873 - Topology graph crashes after update to 4.11 when Layout 2 (ColaForce) was selected previously
2080964 - Cluster operator special-resource-operator is always in Failing state with reason: "Reconciling simple-kmod"
2080976 - Avoid hooks config maps when hooks are empty
2081012 - [rebase v1.24] [sig-devex][Feature:OpenShiftControllerManager] TestAutomaticCreationOfPullSecrets [Suite:openshift/conformance/parallel]
2081018 - [rebase v1.24] [sig-imageregistry][Feature:Image] oc tag should work when only imagestreams api is available
2081021 - [rebase v1.24] [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources
2081062 - Unrevert RHCOS back to 8.6
2081067 - admin dev-console /settings/cluster should point out history may be excerpted
2081069 - [sig-network] pods should successfully create sandboxes by adding pod to network
2081081 - PreflightValidation "odd number of arguments passed as key-value pairs for logging" error
2081084 - [rebase v1.24] [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed
2081087 - [rebase v1.24] [sig-auth] ServiceAccounts should allow opting out of API token automount
2081119 - oc explain output of default overlaySize is outdated
2081172 - MetallLB: YAML view in webconsole does not show all the available key value pairs of all the objects
2081201 - cloud-init User check for Windows VM refuses to accept capitalized usernames
2081447 - Ingress operator performs spurious updates in response to API's defaulting of router deployment's router container's ports' protocol field
2081562 - lifecycle.posStart hook does not have network connectivity.
2081685 - Typo in NNCE Conditions
2081743 - [e2e] tests failing
2081788 - MetalLB: the crds are not validated until metallb is deployed
2081821 - SpecialResourceModule CRD is not installed after deploying SRO operator using brew bundle image via OLM
2081895 - Use the managed resource (and not the manifest) for resource health checks
2081997 - disconnected insights operator remains degraded after editing pull secret
2082075 - Removing huge amount of ports takes a lot of time.
2082235 - CNO exposes a generic apiserver that apparently does nothing
2082283 - Transition to new oVirt Terraform provider
2082360 - OCP 4.10.4, CNI: SDN; Whereabouts IPAM: Duplicate IP address with bond-cni
2082380 - [4.10.z] customize wizard is crashed
2082403 - [LSO] No new build local-storage-operator-metadata-container created
2082428 - oc patch healthCheckInterval with invalid "5 s" to the ingress-controller successfully
2082441 - [UPI] aws-load-balancer-operator-controller-manager failed to get VPC ID in UPI on AWS
2082492 - [IPI IBM]Can't create image-registry-private-configuration secret with error "specified resource key credentials does not contain HMAC keys"
2082535 - [OCPonRHV]-workers are cloned when "clone: false" is specified in install-config.yaml
2082538 - apirequests limits of Cluster CAPI Operator are too low for GCP platform
2082566 - OCP dashboard fails to load when the query to Prometheus takes more than 30s to return
2082604 - [IBMCloud][x86_64] IBM VPC does not properly support RHCOS Custom Image tagging
2082667 - No new machines provisioned while machineset controller drained old nodes for change to machineset
2082687 - [IBM Cloud][x86_64][CCCMO] IBM x86_64 CCM using unsupported --port argument
2082763 - Cluster install stuck on the applying for operatorhub "cluster"
2083149 - "Update blocked" label incorrectly displays on new minor versions in the "Other available paths" modal
2083153 - Unable to use application credentials for Manila PVC creation on OpenStack
2083154 - Dynamic plugin sdk tsdoc generation does not render docs for parameters
2083219 - DPU network operator doesn't deal with c1... interface names
2083237 - [vsphere-ipi] Machineset scale up process delay
2083299 - SRO does not fetch mirrored DTK images in disconnected clusters
2083445 - [FJ OCP4.11 Bug]: RAID setting during IPI cluster deployment fails if iRMC port number is specified
2083451 - Update external serivces URLs to console.redhat.com
2083459 - Make numvfs > totalvfs error message more verbose
2083466 - Failed to create clusters on AWS C2S/SC2S due to image-registry MissingEndpoint error
2083514 - Operator ignores managementState Removed
2083641 - OpenShift Console Knative Eventing ContainerSource generates wrong api version when pointed to k8s Service
2083756 - Linkify not upgradeable message on ClusterSettings page
2083770 - Release image signature manifest filename extension is yaml
2083919 - openshift4/ose-operator-registry:4.10.0 having security vulnerabilities
2083942 - Learner promotion can temporarily fail with rpc not supported for learner errors
2083964 - Sink resources dropdown is not persisted in form yaml switcher in event source creation form
2083999 - "--prune-over-size-limit" is not working as expected
2084079 - prometheus route is not updated to "path: /api" after upgrade from 4.10 to 4.11
2084081 - nmstate-operator installed cluster on POWER shows issues while adding new dhcp interface
2084124 - The Update cluster modal includes a broken link
2084215 - Resource configmap "openshift-machine-api/kube-rbac-proxy" is defined by 2 manifests
2084249 - panic in ovn pod from an e2e-aws-single-node-serial nightly run
2084280 - GCP API Checks Fail if non-required APIs are not enabled
2084288 - "alert/Watchdog must have no gaps or changes" failing after bump
2084292 - Access to dashboard resources is needed in dynamic plugin SDK
2084331 - Resource with multiple capabilities included unless all capabilities are disabled
2084433 - Podsecurity violation error getting logged for ingresscontroller during deployment.
2084438 - Change Ping source spec.jsonData (deprecated) field to spec.data
2084441 - [IPI-Azure]fail to check the vm capabilities in install cluster
2084459 - Topology list view crashes when switching from chart view after moving sink from knative service to uri
2084463 - 5 control plane replica tests fail on ephemeral volumes
2084539 - update azure arm templates to support customer provided vnet
2084545 - [rebase v1.24] cluster-api-operator causes all techpreview tests to fail
2084580 - [4.10] No cluster name sanity validation - cluster name with a dot (".") character
2084615 - Add to navigation option on search page is not properly aligned
2084635 - PipelineRun creation from the GUI for a Pipeline with 2 workspaces hardcode the PVC storageclass
2084732 - A special resource that was created in OCP 4.9 can't be deleted after an upgrade to 4.10
2085187 - installer-artifacts fails to build with go 1.18
2085326 - kube-state-metrics is tripping APIRemovedInNextEUSReleaseInUse
2085336 - [IPI-Azure] Fail to create the worker node which HyperVGenerations is V2 or V1 and vmNetworkingType is Accelerated
2085380 - [IPI-Azure] Incorrect error prompt validate VM image and instance HyperV gen match when install cluster
2085407 - There is no Edit link/icon for labels on Node details page
2085721 - customization controller image name is wrong
2086056 - Missing doc for OVS HW offload
2086086 - Update Cluster Sample Operator dependencies and libraries
for OCP 4.11 2086092 - update kube to v1.24 2086143 - CNO uses too much memory 2086198 - Cluster CAPI Operator creates unnecessary defaulting webhooks 2086301 - kubernetes nmstate pods are not running after creating instance 2086408 - Podsecurity violation error getting logged for externalDNS operand pods during deployment 2086417 - Pipeline created from add flow has GIT Revision as required field 2086437 - EgressQoS CRD not available 2086450 - aws-load-balancer-controller-cluster pod logged Podsecurity violation error during deployment 2086459 - oc adm inspect fails when one of the resources does not exist 2086461 - CNO probes MTU unnecessarily in Hypershift, making cluster startup take too long 2086465 - External identity providers should log login attempts in the audit trail 2086469 - No data about title 'API Request Duration by Verb - 99th Percentile' display on the dashboard 'API Performance' 2086483 - baremetal-runtimecfg k8s dependencies should be on a par with 1.24 rebase 2086505 - Update oauth-server images to be consistent with ART 2086519 - workloads must comply to restricted security policy 2086521 - Icons of Knative actions are not clearly visible on the context menu in the dark mode 2086542 - Cannot create service binding through drag and drop 2086544 - ovn-k master daemonset on hypershift shouldn't log token 2086546 - Service binding connector is not visible in the dark mode 2086718 - PowerVS destroy code does not work 2086728 - [hypershift] Move drain to controller 2086731 - Vertical pod autoscaler operator needs a 4.11 bump 2086734 - Update csi driver images to be consistent with ART 2086737 - cloud-provider-openstack rebase to kubernetes v1.24 2086754 - Cluster resource override operator needs a 4.11 bump 2086759 - [IPI] OCP-4.11 baremetal - boot partition is not mounted on temporary directory 2086791 - Azure: Validate UltraSSD instances in multi-zone regions 2086851 - pods with multiple external gateways may only have ECMP routes for one gateway 2086936
- vsphere ipi should use cores by default instead of sockets 2086958 - flaky e2e in kube-controller-manager-operator TestPodDisruptionBudgetAtLimitAlert 2086959 - flaky e2e in kube-controller-manager-operator TestLogLevel 2086962 - oc-mirror publishes metadata with --dry-run when publishing to mirror 2086964 - oc-mirror fails on differential run when mirroring a package with multiple channels specified 2086972 - oc-mirror does not error when invalid metadata is passed to the describe command 2086974 - oc-mirror does not work with headsonly for operator 4.8 2087024 - The oc-mirror result mapping.txt is not correct, can't be used by oc image mirror
command 2087026 - DTK's imagestream is missing from OCP 4.11 payload 2087037 - Cluster Autoscaler should use K8s 1.24 dependencies 2087039 - Machine API components should use K8s 1.24 dependencies 2087042 - Cloud providers components should use K8s 1.24 dependencies 2087084 - remove unintentional nic support 2087103 - "Updating to release image" from 'oc' should point out that the cluster-version operator hasn't accepted the update 2087114 - Add simple-procfs-kmod in modprobe example in README.md 2087213 - Spoke BMH stuck "inspecting" when deployed via ZTP in 4.11 OCP hub 2087271 - oc-mirror does not check for existing workspace when performing mirror2mirror synchronization 2087556 - Failed to render DPU ovnk manifests 2087579 - --keep-manifest-list=true does not work for oc adm release new, only picks up the linux/amd64 manifest from the manifest list 2087680 - [Descheduler] Sync with sigs.k8s.io/descheduler 2087684 - KCMO should not be able to apply LowUpdateSlowReaction from Default WorkerLatencyProfile 2087685 - KASO should not be able to apply LowUpdateSlowReaction from Default WorkerLatencyProfile 2087687 - MCO does not generate event when user applies Default -> LowUpdateSlowReaction WorkerLatencyProfile 2087764 - Rewrite the registry backend will hit error 2087771 - [tracker] NetworkManager 1.36.0 loses DHCP lease and doesn't try again 2087772 - Bindable badge causes some layout issues with the side panel of bindable operator backed services 2087942 - CNO references images that are divergent from ART 2087944 - KafkaSink Node visualized incorrectly 2087983 - remove etcd_perf before restore 2087993 - PreflightValidation many "msg":"TODO: preflight checks" in the operator log 2088130 - oc-mirror init does not allow for automated testing 2088161 - Match dockerfile image name with the name used in the release repo 2088248 - Create HANA VM does not use values from customized HANA templates 2088304 - ose-console: enable source containers for open source requirements 2088428 - clusteroperator/baremetal stays in progressing: Applying metal3 resources state on a fresh install 2088431 - AvoidBuggyIPs field of addresspool should be removed 2088483 - oc adm catalog mirror returns 0 even if there are errors 2088489 - Topology list does not allow selecting an application group anymore (again) 2088533 - CRDs for openshift.io should have subresource.status fails on sharedconfigmaps.sharedresource and sharedsecrets.sharedresource 2088535 - MetalLB: Enable debug log level for downstream CI 2088541 - Default CatalogSources in openshift-marketplace namespace keeps throwing pod security admission warnings: would violate PodSecurity "restricted:v1.24"
2088561 - BMH unable to start inspection: File name too long 2088634 - oc-mirror does not fail when catalog is invalid 2088660 - Nutanix IPI installation inside container failed 2088663 - Better to change the default value of --max-per-registry to 6 2089163 - NMState CRD out of sync with code 2089191 - should remove grafana from cluster-monitoring-config configmap in hypershift cluster 2089224 - openshift-monitoring/cluster-monitoring-config configmap always revert to default setting 2089254 - CAPI operator: Rotate token secret if its older than 30 minutes 2089276 - origin tests for egressIP and azure fail 2089295 - [Nutanix]machine stuck in Deleting phase when delete a machineset whose replicas>=2 and machine is Provisioning phase on Nutanix 2089309 - [OCP 4.11] Ironic inspector image fails to clean disks that are part of a multipath setup if they are passive paths 2089334 - All cloud providers should use service account credentials 2089344 - Failed to deploy simple-kmod 2089350 - Rebase sdn to 1.24 2089387 - LSO not taking mpath. ignoring device 2089392 - 120 node baremetal upgrade from 4.9.29 --> 4.10.13 crashloops on machine-approver 2089396 - oc-mirror does not show pruned image plan 2089405 - New topology package shows gray build icons instead of green/red icons for builds and pipelines 2089419 - do not block 4.10 to 4.11 upgrades if an existing CSI driver is found. Instead, warn about presence of third party CSI driver 2089488 - Special resources are missing the managementState field 2089563 - Update Power VS MAPI to use api's from openshift/api repo 2089574 - UWM prometheus-operator pod can't start up due to no master node in hypershift cluster 2089675 - Could not move Serverless Service without Revision (or while starting?) 2089681 - [Hypershift] EgressIP doesn't work in hypershift guest cluster 2089682 - Installer expects all nutanix subnets to have a cluster reference which is not the case for e.g. 
overlay networks 2089687 - alert message of MCDDrainError needs to be updated for new drain controller 2089696 - CR reconciliation is stuck in daemonset lifecycle 2089716 - [4.11][reliability]one worker node became NotReady on which ovnkube-node pod's memory increased sharply 2089719 - acm-simple-kmod fails to build 2089720 - [Hypershift] ICSP doesn't work for the guest cluster 2089743 - acm-ice fails to deploy: helm chart does not appear to be a gzipped archive 2089773 - Pipeline status filter and status colors doesn't work correctly with non-english languages 2089775 - keepalived can keep ingress VIP on wrong node under certain circumstances 2089805 - Config duration metrics aren't exposed 2089827 - MetalLB CI - backward compatible tests are failing due to the order of delete 2089909 - PTP e2e testing not working on SNO cluster 2089918 - oc-mirror skip-missing still returns 404 errors when images do not exist 2089930 - Bump OVN to 22.06 2089933 - Pods do not post readiness status on termination 2089968 - Multus CNI daemonset should use hostPath mounts with type: directory 2089973 - bump libs to k8s 1.24 for OCP 4.11 2089996 - Unnecessary yarn install runs in e2e tests 2090017 - Enable source containers to meet open source requirements 2090049 - destroying GCP cluster which has a compute node without infra id in name would fail to delete 2 k8s firewall-rules and VPC network 2090092 - Will hit error if specify the channel not the latest 2090151 - [RHEL scale up] increase the wait time so that the node has enough time to get ready 2090178 - VM SSH command generated by UI points at api VIP 2090182 - [Nutanix]Create a machineset with invalid image, machine stuck in "Provisioning" phase 2090236 - Only reconcile annotations and status for clusters 2090266 - oc adm release extract is failing on mutli arch image 2090268 - [AWS EFS] Operator not getting installed successfully on Hypershift Guest cluster 2090336 - Multus logging should be disabled prior to release 2090343 - 
Multus debug logging should be enabled temporarily for debugging podsandbox creation failures. 2090358 - Initiating drain log message is displayed before the drain actually starts 2090359 - Nutanix mapi-controller: misleading error message when the failure is caused by wrong credentials 2090405 - [tracker] weird port mapping with asymmetric traffic [rhel-8.6.0.z] 2090430 - gofmt code 2090436 - It takes 30min-60min to update the machine count in custom MachineConfigPools (MCPs) when a node is removed from the pool 2090437 - Bump CNO to k8s 1.24 2090465 - golang version mismatch 2090487 - Change default SNO Networking Type and disallow OpenShiftSDN a supported networking Type 2090537 - failure in ovndb migration when db is not ready in HA mode 2090549 - dpu-network-operator shall be able to run on amd64 arch platform 2090621 - Metal3 plugin does not work properly with updated NodeMaintenance CRD 2090627 - Git commit and branch are empty in MetalLB log 2090692 - Bump to latest 1.24 k8s release 2090730 - must-gather should include multus logs. 
2090731 - nmstate deploys two instances of webhook on a single-node cluster 2090751 - oc image mirror skip-missing flag does not skip images 2090755 - MetalLB: BGPAdvertisement validation allows duplicate entries for ip pool selector, ip address pools, node selector and bgp peers 2090774 - Add Readme to plugin directory 2090794 - MachineConfigPool cannot apply a configuration after fixing the pods that caused a drain alert 2090809 - gm.ClockClass invalid syntax parse error in linux ptp daemon logs 2090816 - OCP 4.8 Baremetal IPI installation failure: "Bootstrap failed to complete: timed out waiting for the condition" 2090819 - oc-mirror does not catch invalid registry input when a namespace is specified 2090827 - Rebase CoreDNS to 1.9.2 and k8s 1.24 2090829 - Bump OpenShift router to k8s 1.24 2090838 - Flaky test: ignore flapping host interface 'tunbr' 2090843 - addLogicalPort() performance/scale optimizations 2090895 - Dynamic plugin nav extension "startsWith" property does not work 2090929 - [etcd] cluster-backup.sh script has a conflict to use the '/etc/kubernetes/static-pod-certs' folder if a custom API certificate is defined 2090993 - [AI Day2] Worker node overview page crashes in Openshift console with TypeError 2091029 - Cancel rollout action only appears when rollout is completed 2091030 - Some BM may fail booting with default bootMode strategy 2091033 - [Descheduler]: provide ability to override included/excluded namespaces 2091087 - ODC Helm backend Owners file needs updates 2091106 - Dependabot alert: Unhandled exception in gopkg.in/yaml.v3 2091142 - Dependabot alert: Unhandled exception in gopkg.in/yaml.v3 2091167 - IPsec runtime enabling not work in hypershift 2091218 - Update Dev Console Helm backend to use helm 3.9.0 2091433 - Update AWS instance types 2091542 - Error Loading/404 not found page shown after clicking "Current namespace only" 2091547 - Internet connection test with proxy permanently fails 2091567 - oVirt CSI driver should use latest 
go-ovirt-client 2091595 - Alertmanager configuration can't use OpsGenie's entity field when AlertmanagerConfig is enabled 2091599 - PTP Dual Nic | Extend Events 4.11 - Up/Down master interface affects all the other interfaces in the same NIC according to the events and metrics 2091603 - WebSocket connection restarts when switching tabs in WebTerminal 2091613 - simple-kmod fails to build due to missing KVC 2091634 - OVS 2.15 stops handling traffic once ovs-dpctl(2.17.2) is used against it 2091730 - MCO e2e tests are failing with "No token found in openshift-monitoring secrets" 2091746 - "Oh no! Something went wrong" shown after user creates MCP without 'spec' 2091770 - CVO gets stuck downloading an upgrade, with the version pod complaining about invalid options 2091854 - clusteroperator status filter doesn't match all values in Status column 2091901 - Log stream paused right after updating log lines in Web Console in OCP4.10 2091902 - unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server has received too many requests and has asked us to try again later 2091990 - wrong external-ids for ovn-controller lflow-cache-limit-kb 2092003 - PR 3162 | BZ 2084450 - invalid URL schema for AWS causes tests to perma fail and break the cloud-network-config-controller 2092041 - Bump cluster-dns-operator to k8s 1.24 2092042 - Bump cluster-ingress-operator to k8s 1.24 2092047 - Kube 1.24 rebase for cloud-network-config-controller 2092137 - Search doesn't show all entries when name filter is cleared 2092296 - Change Default MachineCIDR of Power VS Platform from 10.x to 192.168.0.0/16 2092390 - [RDR] [UI] Multiple instances of Object Bucket, Object Bucket Claims and 'Overview' tab is present under Storage section on the Hub cluster when navigated back from the Managed cluster using the Hybrid console dropdown 2092395 - etcdHighNumberOfFailedGRPCRequests alerts with wrong results 2092408 - Wrong icon is used in the virtualization overview permissions card
2092414 - In virtualization overview "running vm per templates" template list can be improved 2092442 - Minimum time between drain retries is not the expected one 2092464 - marketplace catalog defaults to v4.10 2092473 - libovsdb performance backports 2092495 - ovn: use up to 4 northd threads in non-SNO clusters 2092502 - [azure-file-csi-driver] Stop shipping a NFS StorageClass 2092509 - Invalid memory address error if non existing caBundle is configured in DNS-over-TLS using ForwardPlugins 2092572 - acm-simple-kmod chart should create the namespace on the spoke cluster 2092579 - Don't retry pod deletion if objects are not existing 2092650 - [BM IPI with Provisioning Network] Worker nodes are not provisioned: ironic-agent is stuck before writing into disks 2092703 - Incorrect mount propagation information in container status 2092815 - can't delete the unwanted image from registry by oc-mirror 2092851 - [Descheduler]: allow to customize the LowNodeUtilization strategy thresholds 2092867 - make repository name unique in acm-ice/acm-simple-kmod examples 2092880 - etcdHighNumberOfLeaderChanges returns incorrect number of leadership changes 2092887 - oc-mirror list releases command uses filter-options flag instead of filter-by-os 2092889 - Incorrect updating of EgressACLs using direction "from-lport" 2092918 - CVE-2022-30321 go-getter: unsafe download (issue 1 of 3) 2092923 - CVE-2022-30322 go-getter: unsafe download (issue 2 of 3) 2092925 - CVE-2022-30323 go-getter: unsafe download (issue 3 of 3) 2092928 - CVE-2022-26945 go-getter: command injection vulnerability 2092937 - WebScale: OVN-k8s forwarding to external-gw over the secondary interfaces failing 2092966 - [OCP 4.11] [azure] /etc/udev/rules.d/66-azure-storage.rules missing from initramfs 2093044 - Azure machine-api-provider-azure Availability Set Name Length Limit 2093047 - Dynamic Plugins: Generated API markdown duplicates checkAccess
and useAccessReview
doc 2093126 - [4.11] Bootimage bump tracker 2093236 - DNS operator stopped reconciling after 4.10 to 4.11 upgrade | 4.11 nightly to 4.11 nightly upgrade 2093288 - Default catalogs fails liveness/readiness probes 2093357 - Upgrading sno spoke with acm-ice, causes the sno to get unreachable 2093368 - Installer orphans FIPs created for LoadBalancer Services on cluster destroy
2093396 - Remove node-tainting for too-small MTU 2093445 - ManagementState reconciliation breaks SR 2093454 - Router proxy protocol doesn't work with dual-stack (IPv4 and IPv6) clusters 2093462 - Ingress Operator isn't reconciling the ingress cluster operator object 2093586 - Topology: Ctrl+space opens the quick search modal, but doesn't close it again 2093593 - Import from Devfile shows configuration options that shouldn't be there 2093597 - Import: Advanced option sentence is split into two parts and the headlines have no padding 2093600 - Project access tab should apply new permissions before it deletes old ones 2093601 - Project access page doesn't allow the user to update the settings twice (without manually reloading the content) 2093783 - Should bump cluster-kube-descheduler-operator to kubernetes version V1.24 2093797 - 'oc registry login' with serviceaccount function need update 2093819 - An etcd member for a new machine was never added to the cluster 2093930 - Gather console helm install totals metric 2093957 - Oc-mirror write dup metadata to registry backend 2093986 - Podsecurity violation error getting logged for pod-identity-webhook 2093992 - Cluster version operator acknowledges upgrade failing on periodic-ci-openshift-release-master-nightly-4.11-e2e-metal-ipi-upgrade-ovn-ipv6 2094023 - Add Git Flow - Template Labels for Deployment show as DeploymentConfig 2094024 - bump oauth-apiserver deps to include 1.23.1 k8s that fixes etcd blips 2094039 - egressIP panics with nil pointer dereference 2094055 - Bump coreos-installer for s390x Secure Execution 2094071 - No runbook created for SouthboundStale alert 2094088 - Columns in NBDB may never be updated by OVNK 2094104 - Demo dynamic plugin image tests should be skipped when testing console-operator 2094152 - Alerts in the virtualization overview status card aren't filtered 2094196 - Add default and validating webhooks for Power VS MAPI 2094227 - Topology: Create Service Binding should not be the last option (even
under delete) 2094239 - custom pool Nodes with 0 nodes are always populated in progress bar 2094303 - If og is configured with sa, operator installation will be failed. 2094335 - [Nutanix] - debug logs are enabled by default in machine-controller 2094342 - apirequests limits of Cluster CAPI Operator are too low for Azure platform 2094438 - Make AWS URL parsing more lenient for GetNodeEgressIPConfiguration 2094525 - Allow automatic upgrades for efs operator 2094532 - ovn-windows CI jobs are broken 2094675 - PTP Dual Nic | Extend Events 4.11 - when kill the phc2sys We have notification for the ptp4l physical master moved to free run 2094694 - [Nutanix] No cluster name sanity validation - cluster name with a dot (".") character 2094704 - Verbose log activated on kube-rbac-proxy in deployment prometheus-k8s 2094801 - Kuryr controller keep restarting when handling IPs with leading zeros 2094806 - Machine API oVrit component should use K8s 1.24 dependencies 2094816 - Kuryr controller restarts when over quota 2094833 - Repository overview page does not show default PipelineRun template for developer user 2094857 - CloudShellTerminal loops indefinitely if DevWorkspace CR goes into failed state 2094864 - Rebase CAPG to latest changes 2094866 - oc-mirror does not always delete all manifests associated with an image during pruning 2094896 - Run 'openshift-install agent create image' has segfault exception if cluster-manifests directory missing 2094902 - Fix installer cross-compiling 2094932 - MGMT-10403 Ingress should enable single-node cluster expansion on upgraded clusters 2095049 - managed-csi StorageClass does not create PVs 2095071 - Backend tests fails after devfile registry update 2095083 - Observe > Dashboards: Graphs may change a lot on automatic refresh 2095110 - [ovn] northd container termination script must use bash 2095113 - [ovnkube] bump to openvswitch2.17-2.17.0-22.el8fdp 2095226 - Added changes to verify cloud connection and dhcpservices quota of a powervs 
instance 2095229 - ingress-operator pod in CrashLoopBackOff in 4.11 after upgrade starting in 4.6 due to go panic 2095231 - Kafka Sink sidebar in topology is empty 2095247 - Event sink form doesn't show channel as sink until app is refreshed 2095248 - [vSphere-CSI-Driver] does not report volume count limits correctly caused pod with multi volumes maybe schedule to not satisfied volume count node 2095256 - Samples Owner needs to be Updated 2095264 - ovs-configuration.service fails with Error: Failed to modify connection 'ovs-if-br-ex': failed to update connection: error writing to file '/etc/NetworkManager/systemConnectionsMerged/ovs-if-br-ex.nmconnection' 2095362 - oVirt CSI driver operator should use latest go-ovirt-client 2095574 - e2e-agnostic CI job fails 2095687 - Debug Container shown for build logs and on click ui breaks 2095703 - machinedeletionhooks doesn't work in vsphere cluster and BM cluster 2095716 - New PSA component for Pod Security Standards enforcement is refusing openshift-operators ns 2095756 - CNO panics with concurrent map read/write 2095772 - Memory requests for ovnkube-master containers are over-sized 2095917 - Nutanix set osDisk with diskSizeGB rather than diskSizeMiB 2095941 - DNS Traffic not kept local to zone or node when Calico SDN utilized 2096053 - Builder Image icons in Git Import flow are hard to see in Dark mode 2096226 - crio fails to bind to tentative IP, causing service failure since RHOCS was rebased on RHEL 8.6 2096315 - NodeClockNotSynchronising alert's severity should be critical 2096350 - Web console doesn't display webhook errors for upgrades 2096352 - Collect whole journal in gather 2096380 - acm-simple-kmod references deprecated KVC example 2096392 - Topology node icons are not properly visible in Dark mode 2096394 - Add page Card items background color does not match with column background color in Dark mode 2096413 - br-ex not created due to default bond interface having a different mac address than expected 2096496 - 
FIPS issue on OCP SNO with RT Kernel via performance profile 2096605 - [vsphere] no validation checking for diskType 2096691 - [Alibaba 4.11] Specifying ResourceGroup id in install-config.yaml, New pv are still getting created to default ResourceGroups 2096855 - oc adm release new
failed with error when use an existing multi-arch release image as input 2096905 - Openshift installer should not use the prism client embedded in nutanix terraform provider 2096908 - Dark theme issue in pipeline builder, Helm rollback form, and Git import 2097000 - KafkaConnections disappear from Topology after creating KafkaSink in Topology 2097043 - No clean way to specify operand issues to KEDA OLM operator 2097047 - MetalLB: matchExpressions used in CR like L2Advertisement, BGPAdvertisement, BGPPeers allow duplicate entries 2097067 - ClusterVersion history pruner does not always retain initial completed update entry 2097153 - poor performance on API call to vCenter ListTags with thousands of tags 2097186 - PSa autolabeling in 4.11 env upgraded from 4.10 does not work due to missing RBAC objects 2097239 - Change Lower CPU limits for Power VS cloud 2097246 - Kuryr: verify and unit jobs failing due to upstream OpenStack dropping py36 support 2097260 - openshift-install create manifests failed for Power VS platform 2097276 - MetalLB CI deploys the operator via manifests and not using the csv 2097282 - chore: update external-provisioner to the latest upstream release 2097283 - chore: update external-snapshotter to the latest upstream release 2097284 - chore: update external-attacher to the latest upstream release 2097286 - chore: update node-driver-registrar to the latest upstream release 2097334 - oc plugin help shows 'kubectl' 2097346 - Monitoring must-gather doesn't seem to be working anymore in 4.11 2097400 - Shared Resource CSI Driver needs additional permissions for validation webhook 2097454 - Placeholder bug for OCP 4.11.0 metadata release 2097503 - chore: rebase against latest external-resizer 2097555 - IngressControllersNotUpgradeable: load balancer service has been modified; changes must be reverted before upgrading 2097607 - Add Power VS support to Webhooks tests in actuator e2e test 2097685 - Ironic-agent can't restart because of existing container 
2097716 - settings under httpConfig is dropped with AlertmanagerConfig v1beta1 2097810 - Required Network tools missing for Testing e2e PTP 2097832 - clean up unused IPv6DualStackNoUpgrade feature gate 2097940 - openshift-install destroy cluster traps if vpcRegion not specified 2097954 - 4.11 installation failed at monitoring and network clusteroperators with error "conmon: option parsing failed: Unknown option --log-global-size-max" making all jobs fail 2098172 - oc-mirror does not validate the registry in the storage config 2098175 - invalid license in python-dataclasses-0.8-2.el8 spec 2098177 - python-pint-0.10.1-2.el8 has unused Patch0 in spec file 2098242 - typo in SRO specialresourcemodule 2098243 - Add error check to Platform create for Power VS 2098392 - [OCP 4.11] Ironic cannot match "wwn" rootDeviceHint for a multipath device 2098508 - Control-plane-machine-set-operator report panic 2098610 - No need to check the push permission with --manifests-only option 2099293 - oVirt cluster API provider should use latest go-ovirt-client 2099330 - Edit application grouping is shown to user with view only access in a cluster 2099340 - CAPI e2e tests for AWS are missing 2099357 - ovn-kubernetes needs explicit RBAC coordination leases for 1.24 bump 2099358 - Dark mode+Topology update: Unexpected selected+hover border and background colors for app groups 2099528 - Layout issue: No spacing in delete modals 2099561 - Prometheus returns HTTP 500 error on /favicon.ico 2099582 - Format and update Repository overview content 2099611 - Failures on etcd-operator watch channels 2099637 - Should print error when using --keep-manifest-list=false for a manifest list image 2099654 - Topology performance: Endless rerender loop when showing a Http EventSink (KameletBinding) 2099668 - KubeControllerManager should degrade when GC stops working 2099695 - Update CAPG after rebase 2099751 - specialresourcemodule stacktrace while looping over build status 2099755 - EgressIP node's mgmtIP
reachability configuration option 2099763 - Update icons for event sources and sinks in topology, Add page, and context menu 2099811 - UDP Packet loss in OpenShift using IPv6 [upcall] 2099821 - exporting a pointer for the loop variable 2099875 - The speaker won't start if there's another component on the host listening on 8080 2099899 - oc-mirror looks for layers in the wrong repository when searching for release images during publishing 2099928 - [FJ OCP4.11 Bug]: Add unit tests to image_customization_test file 2099968 - [Azure-File-CSI] failed to provisioning volume in ARO cluster 2100001 - Sync upstream v1.22.0 downstream 2100007 - Run bundle-upgrade failed from the traditional File-Based Catalog installed operator 2100033 - OCP 4.11 IPI - Some csr remain "Pending" post deployment 2100038 - failure to update special-resource-lifecycle table during update Event 2100079 - SDN needs explicit RBAC coordination leases for 1.24 bump 2100138 - release info --bugs has no differentiator between Jira and Bugzilla 2100155 - kube-apiserver-operator should raise an alert when there is a Pod Security admission violation 2100159 - Dark theme: Build icon for pending status is not inverted in topology sidebar 2100323 - Sqlit-based catsrc cannot be ready due to "Error: open ./db-xxxx: permission denied" 2100347 - KASO retains old config values when switching from Medium/Default to empty worker latency profile 2100356 - Remove Condition tab and create option from console as it is deprecated in OSP-1.8 2100439 - [gce-pd] GCE PD in-tree storage plugin tests not running 2100496 - [OCPonRHV]-oVirt API returns affinity groups without a description field 2100507 - Remove redundant log lines from obj_retry.go 2100536 - Update API to allow EgressIP node reachability check 2100601 - Update CNO to allow EgressIP node reachability check 2100643 - [Migration] [GCP]OVN can not rollback to SDN 2100644 - openshift-ansible FTBFS on RHEL8 2100669 - Telemetry should not log the full path if it 
contains a username 2100749 - [OCP 4.11] multipath support needs multipath modules 2100825 - Update machine-api-powervs go modules to latest version 2100841 - tiny openshift-install usability fix for setting KUBECONFIG 2101460 - An etcd member for a new machine was never added to the cluster 2101498 - Revert Bug 2082599: add upper bound to number of failed attempts 2102086 - The base image is still 4.10 for operator-sdk 1.22 2102302 - Dummy bug for 4.10 backports 2102362 - Valid regions should be allowed in GCP install config 2102500 - Kubernetes NMState pods can not evict due to PDB on an SNO cluster 2102639 - Drain happens before other image-registry pod is ready to service requests, causing disruption 2102782 - topolvm-controller get into CrashLoopBackOff few minutes after install 2102834 - [cloud-credential-operator]container has runAsNonRoot and image will run as root 2102947 - [VPA] recommender is logging errors for pods with init containers 2103053 - [4.11] Backport Prow CI improvements from master 2103075 - Listing secrets in all namespaces with a specific labelSelector does not work properly 2103080 - br-ex not created due to default bond interface having a different mac address than expected 2103177 - disabling ipv6 router advertisements using "all" does not disable it on secondary interfaces 2103728 - Carry HAProxy patch 'BUG/MEDIUM: h2: match absolute-path not path-absolute for :path' 2103749 - MachineConfigPool is not getting updated 2104282 - heterogeneous arch: oc adm extract encodes arch specific release payload pullspec rather than the manifestlisted pullspec 2104432 - [dpu-network-operator] Updating images to be consistent with ART 2104552 - kube-controller-manager operator 4.11.0-rc.0 degraded on disabled monitoring stack 2104561 - 4.10 to 4.11 update: Degraded node: unexpected on-disk state: mode mismatch for file: "/etc/crio/crio.conf.d/01-ctrcfg-pidsLimit"; expected: -rw-r--r--/420/0644; received: ----------/0/0 2104589 - must-gather namespace 
should have "privileged" warn and audit pod security labels besides enforce 2104701 - In CI 4.10 HAProxy must-gather takes longer than 10 minutes 2104717 - NetworkPolicies: ovnkube-master pods crashing due to panic: "invalid memory address or nil pointer dereference" 2104727 - Bootstrap node should honor http proxy 2104906 - Uninstall fails with Observed a panic: runtime.boundsError 2104951 - Web console doesn't display webhook errors for upgrades 2104991 - Completed pods may not be correctly cleaned up 2105101 - NodeIP is used instead of EgressIP if egressPod is recreated within 60 seconds 2105106 - co/node-tuning: Waiting for 15/72 Profiles to be applied 2105146 - Degraded=True noise with: UpgradeBackupControllerDegraded: unable to retrieve cluster version, no completed update was found in cluster version status history 2105167 - BuildConfig throws error when using a label with a / in it 2105334 - vmware-vsphere-csi-driver-controller can't use host port error on e2e-vsphere-serial 2105382 - Add a validation webhook for Nutanix machine provider spec in Machine API Operator 2105468 - The ccoctl does not seem to know how to leverage the VMs service account to talk to GCP APIs. 2105937 - telemeter golangci-lint outdated blocking ART PRs that update to Go1.18 2106051 - Unable to deploy acm-ice using latest SRO 4.11 build 2106058 - vSphere defaults to SecureBoot on; breaks installation of out-of-tree drivers [4.11.0] 2106062 - [4.11] Bootimage bump tracker 2106116 - IngressController spec.tuningOptions.healthCheckInterval validation allows invalid values such as "0abc" 2106163 - Samples ImageStreams vs. registry.redhat.io: unsupported: V2 schema 1 manifest digests are no longer supported for image pulls 2106313 - bond-cni: backport bond-cni GA items to 4.11 2106543 - Typo in must-gather release-4.10 2106594 - crud/other-routes.spec.ts Cypress test failing at a high rate in CI 2106723 - [4.11] Upgrade from 4.11.0-rc0 -> 4.11.0-rc.1 failed. 
rpm-ostree status shows No space left on device 2106855 - [4.11.z] externalTrafficPolicy=Local is not working in local gateway mode if ovnkube-node is restarted 2107493 - ReplicaSet prometheus-operator-admission-webhook has timed out progressing 2107501 - metallb greenwave tests failure 2107690 - Driver Container builds fail with "error determining starting point for build: no FROM statement found" 2108175 - etcd backup seems to not be triggered in 4.10.18-->4.10.20 upgrade 2108617 - [oc adm release] extraction of the installer against a manifestlisted payload referenced by tag leads to a bad release image reference 2108686 - rpm-ostreed: start limit hit easily 2110505 - [Upgrade]deployment openshift-machine-api/machine-api-operator has a replica failure FailedCreate 2110715 - openshift-controller-manager(-operator) namespace should clear run-level annotations 2111055 - dummy bug for 4.10.z bz2110938
- References:
https://access.redhat.com/security/cve/CVE-2018-25009 https://access.redhat.com/security/cve/CVE-2018-25010 https://access.redhat.com/security/cve/CVE-2018-25012 https://access.redhat.com/security/cve/CVE-2018-25013 https://access.redhat.com/security/cve/CVE-2018-25014 https://access.redhat.com/security/cve/CVE-2018-25032 https://access.redhat.com/security/cve/CVE-2019-5827 https://access.redhat.com/security/cve/CVE-2019-13750 https://access.redhat.com/security/cve/CVE-2019-13751 https://access.redhat.com/security/cve/CVE-2019-17594 https://access.redhat.com/security/cve/CVE-2019-17595 https://access.redhat.com/security/cve/CVE-2019-18218 https://access.redhat.com/security/cve/CVE-2019-19603 https://access.redhat.com/security/cve/CVE-2019-20838 https://access.redhat.com/security/cve/CVE-2020-13435 https://access.redhat.com/security/cve/CVE-2020-14155 https://access.redhat.com/security/cve/CVE-2020-17541 https://access.redhat.com/security/cve/CVE-2020-19131 https://access.redhat.com/security/cve/CVE-2020-24370 https://access.redhat.com/security/cve/CVE-2020-28493 https://access.redhat.com/security/cve/CVE-2020-35492 https://access.redhat.com/security/cve/CVE-2020-36330 https://access.redhat.com/security/cve/CVE-2020-36331 https://access.redhat.com/security/cve/CVE-2020-36332 https://access.redhat.com/security/cve/CVE-2021-3481 https://access.redhat.com/security/cve/CVE-2021-3580 https://access.redhat.com/security/cve/CVE-2021-3634 https://access.redhat.com/security/cve/CVE-2021-3672 https://access.redhat.com/security/cve/CVE-2021-3695 https://access.redhat.com/security/cve/CVE-2021-3696 https://access.redhat.com/security/cve/CVE-2021-3697 https://access.redhat.com/security/cve/CVE-2021-3737 https://access.redhat.com/security/cve/CVE-2021-4115 https://access.redhat.com/security/cve/CVE-2021-4156 https://access.redhat.com/security/cve/CVE-2021-4189 https://access.redhat.com/security/cve/CVE-2021-20095 https://access.redhat.com/security/cve/CVE-2021-20231 
https://access.redhat.com/security/cve/CVE-2021-20232 https://access.redhat.com/security/cve/CVE-2021-23177 https://access.redhat.com/security/cve/CVE-2021-23566 https://access.redhat.com/security/cve/CVE-2021-23648 https://access.redhat.com/security/cve/CVE-2021-25219 https://access.redhat.com/security/cve/CVE-2021-31535 https://access.redhat.com/security/cve/CVE-2021-31566 https://access.redhat.com/security/cve/CVE-2021-36084 https://access.redhat.com/security/cve/CVE-2021-36085 https://access.redhat.com/security/cve/CVE-2021-36086 https://access.redhat.com/security/cve/CVE-2021-36087 https://access.redhat.com/security/cve/CVE-2021-38185 https://access.redhat.com/security/cve/CVE-2021-38593 https://access.redhat.com/security/cve/CVE-2021-40528 https://access.redhat.com/security/cve/CVE-2021-41190 https://access.redhat.com/security/cve/CVE-2021-41617 https://access.redhat.com/security/cve/CVE-2021-42771 https://access.redhat.com/security/cve/CVE-2021-43527 https://access.redhat.com/security/cve/CVE-2021-43818 https://access.redhat.com/security/cve/CVE-2021-44225 https://access.redhat.com/security/cve/CVE-2021-44906 https://access.redhat.com/security/cve/CVE-2022-0235 https://access.redhat.com/security/cve/CVE-2022-0778 https://access.redhat.com/security/cve/CVE-2022-1012 https://access.redhat.com/security/cve/CVE-2022-1215 https://access.redhat.com/security/cve/CVE-2022-1271 https://access.redhat.com/security/cve/CVE-2022-1292 https://access.redhat.com/security/cve/CVE-2022-1586 https://access.redhat.com/security/cve/CVE-2022-1621 https://access.redhat.com/security/cve/CVE-2022-1629 https://access.redhat.com/security/cve/CVE-2022-1706 https://access.redhat.com/security/cve/CVE-2022-1729 https://access.redhat.com/security/cve/CVE-2022-2068 https://access.redhat.com/security/cve/CVE-2022-2097 https://access.redhat.com/security/cve/CVE-2022-21698 https://access.redhat.com/security/cve/CVE-2022-22576 https://access.redhat.com/security/cve/CVE-2022-23772 
https://access.redhat.com/security/cve/CVE-2022-23773 https://access.redhat.com/security/cve/CVE-2022-23806 https://access.redhat.com/security/cve/CVE-2022-24407 https://access.redhat.com/security/cve/CVE-2022-24675 https://access.redhat.com/security/cve/CVE-2022-24903 https://access.redhat.com/security/cve/CVE-2022-24921 https://access.redhat.com/security/cve/CVE-2022-25313 https://access.redhat.com/security/cve/CVE-2022-25314 https://access.redhat.com/security/cve/CVE-2022-26691 https://access.redhat.com/security/cve/CVE-2022-26945 https://access.redhat.com/security/cve/CVE-2022-27191 https://access.redhat.com/security/cve/CVE-2022-27774 https://access.redhat.com/security/cve/CVE-2022-27776 https://access.redhat.com/security/cve/CVE-2022-27782 https://access.redhat.com/security/cve/CVE-2022-28327 https://access.redhat.com/security/cve/CVE-2022-28733 https://access.redhat.com/security/cve/CVE-2022-28734 https://access.redhat.com/security/cve/CVE-2022-28735 https://access.redhat.com/security/cve/CVE-2022-28736 https://access.redhat.com/security/cve/CVE-2022-28737 https://access.redhat.com/security/cve/CVE-2022-29162 https://access.redhat.com/security/cve/CVE-2022-29810 https://access.redhat.com/security/cve/CVE-2022-29824 https://access.redhat.com/security/cve/CVE-2022-30321 https://access.redhat.com/security/cve/CVE-2022-30322 https://access.redhat.com/security/cve/CVE-2022-30323 https://access.redhat.com/security/cve/CVE-2022-32250 https://access.redhat.com/security/updates/classification/#important
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2022 Red Hat, Inc.
{ "@context": { "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#", "affected_products": { "@id": "https://www.variotdbs.pl/ref/affected_products" }, "configurations": { "@id": "https://www.variotdbs.pl/ref/configurations" }, "credits": { "@id": "https://www.variotdbs.pl/ref/credits" }, "cvss": { "@id": "https://www.variotdbs.pl/ref/cvss/" }, "description": { "@id": "https://www.variotdbs.pl/ref/description/" }, "exploit_availability": { "@id": "https://www.variotdbs.pl/ref/exploit_availability/" }, "external_ids": { "@id": "https://www.variotdbs.pl/ref/external_ids/" }, "iot": { "@id": "https://www.variotdbs.pl/ref/iot/" }, "iot_taxonomy": { "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/" }, "patch": { "@id": "https://www.variotdbs.pl/ref/patch/" }, "problemtype_data": { "@id": "https://www.variotdbs.pl/ref/problemtype_data/" }, "references": { "@id": "https://www.variotdbs.pl/ref/references/" }, "sources": { "@id": "https://www.variotdbs.pl/ref/sources/" }, "sources_release_date": { "@id": "https://www.variotdbs.pl/ref/sources_release_date/" }, "sources_update_date": { "@id": "https://www.variotdbs.pl/ref/sources_update_date/" }, "threat_type": { "@id": "https://www.variotdbs.pl/ref/threat_type/" }, "title": { "@id": "https://www.variotdbs.pl/ref/title/" }, "type": { "@id": "https://www.variotdbs.pl/ref/type/" } }, "@id": "https://www.variotdbs.pl/vuln/VAR-202105-1457", "affected_products": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/affected_products#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "model": "enterprise linux", "scope": "eq", "trust": 1.0, "vendor": "redhat", "version": "8.0" }, { "model": "ipados", "scope": "lt", "trust": 1.0, "vendor": "apple", "version": "14.7" }, { "model": "iphone os", "scope": "lt", "trust": 1.0, "vendor": "apple", "version": "14.7" }, { "model": "ontap 
select deploy administration utility", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "linux", "scope": "eq", "trust": 1.0, "vendor": "debian", "version": "10.0" }, { "model": "linux", "scope": "eq", "trust": 1.0, "vendor": "debian", "version": "9.0" }, { "model": "libwebp", "scope": "lt", "trust": 1.0, "vendor": "webmproject", "version": "1.0.1" }, { "model": "libwebp", "scope": null, "trust": 0.8, "vendor": "the webm", "version": null }, { "model": "gnu/linux", "scope": null, "trust": 0.8, "vendor": "debian", "version": null }, { "model": "ontap select deploy administration utility", "scope": null, "trust": 0.8, "vendor": "netapp", "version": null }, { "model": "ipados", "scope": null, "trust": 0.8, "vendor": "\u30a2\u30c3\u30d7\u30eb", "version": null }, { "model": "ios", "scope": null, "trust": 0.8, "vendor": "\u30a2\u30c3\u30d7\u30eb", "version": null } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2018-016580" }, { "db": "NVD", "id": "CVE-2020-36330" } ] }, "configurations": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/configurations#", "children": { "@container": "@list" }, "cpe_match": { "@container": "@list" }, "data": { "@container": "@list" }, "nodes": { "@container": "@list" } }, "data": [ { "CVE_data_version": "4.0", "nodes": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:webmproject:libwebp:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "1.0.1", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:debian:debian_linux:9.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:debian:debian_linux:10.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux:8.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:netapp:ontap_select_deploy_administration_utility:-:*:*:*:*:*:*:*", "cpe_name": 
[], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:apple:ipados:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "14.7", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:apple:iphone_os:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "14.7", "vulnerable": true } ], "operator": "OR" } ] } ], "sources": [ { "db": "NVD", "id": "CVE-2020-36330" } ] }, "credits": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/credits#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Red Hat", "sources": [ { "db": "PACKETSTORM", "id": "165287" }, { "db": "PACKETSTORM", "id": "165296" }, { "db": "PACKETSTORM", "id": "164842" }, { "db": "PACKETSTORM", "id": "165631" }, { "db": "PACKETSTORM", "id": "164967" }, { "db": "PACKETSTORM", "id": "168042" } ], "trust": 0.6 }, "cve": "CVE-2020-36330", "cvss": { "@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2" }, "cvssV3": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, "severity": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": "https://www.variotdbs.pl/ref/cvss/severity" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "cvssV2": [ { "acInsufInfo": false, "accessComplexity": "LOW", "accessVector": "NETWORK", "authentication": "NONE", "author": "NVD", "availabilityImpact": "PARTIAL", "baseScore": 6.4, "confidentialityImpact": "PARTIAL", "exploitabilityScore": 10.0, "impactScore": 4.9, "integrityImpact": "NONE", "obtainAllPrivilege": false, "obtainOtherPrivilege": false, "obtainUserPrivilege": false, "severity": 
"MEDIUM", "trust": 1.0, "userInteractionRequired": false, "vectorString": "AV:N/AC:L/Au:N/C:P/I:N/A:P", "version": "2.0" }, { "acInsufInfo": null, "accessComplexity": "Low", "accessVector": "Network", "authentication": "None", "author": "NVD", "availabilityImpact": "Partial", "baseScore": 6.4, "confidentialityImpact": "Partial", "exploitabilityScore": null, "id": "CVE-2020-36330", "impactScore": null, "integrityImpact": "None", "obtainAllPrivilege": null, "obtainOtherPrivilege": null, "obtainUserPrivilege": null, "severity": "Medium", "trust": 0.9, "userInteractionRequired": null, "vectorString": "AV:N/AC:L/Au:N/C:P/I:N/A:P", "version": "2.0" }, { "accessComplexity": "LOW", "accessVector": "NETWORK", "authentication": "NONE", "author": "VULHUB", "availabilityImpact": "PARTIAL", "baseScore": 6.4, "confidentialityImpact": "PARTIAL", "exploitabilityScore": 10.0, "id": "VHN-391909", "impactScore": 4.9, "integrityImpact": "NONE", "severity": "MEDIUM", "trust": 0.1, "vectorString": "AV:N/AC:L/AU:N/C:P/I:N/A:P", "version": "2.0" } ], "cvssV3": [ { "attackComplexity": "LOW", "attackVector": "NETWORK", "author": "NVD", "availabilityImpact": "HIGH", "baseScore": 9.1, "baseSeverity": "CRITICAL", "confidentialityImpact": "HIGH", "exploitabilityScore": 3.9, "impactScore": 5.2, "integrityImpact": "NONE", "privilegesRequired": "NONE", "scope": "UNCHANGED", "trust": 1.0, "userInteraction": "NONE", "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:H", "version": "3.1" }, { "attackComplexity": "Low", "attackVector": "Network", "author": "NVD", "availabilityImpact": "High", "baseScore": 9.1, "baseSeverity": "Critical", "confidentialityImpact": "High", "exploitabilityScore": null, "id": "CVE-2020-36330", "impactScore": null, "integrityImpact": "None", "privilegesRequired": "None", "scope": "Unchanged", "trust": 0.8, "userInteraction": "None", "vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:H", "version": "3.0" } ], "severity": [ { "author": "NVD", "id": 
"CVE-2020-36330", "trust": 1.8, "value": "CRITICAL" }, { "author": "CNNVD", "id": "CNNVD-202105-1386", "trust": 0.6, "value": "CRITICAL" }, { "author": "VULHUB", "id": "VHN-391909", "trust": 0.1, "value": "MEDIUM" }, { "author": "VULMON", "id": "CVE-2020-36330", "trust": 0.1, "value": "MEDIUM" } ] } ], "sources": [ { "db": "VULHUB", "id": "VHN-391909" }, { "db": "VULMON", "id": "CVE-2020-36330" }, { "db": "JVNDB", "id": "JVNDB-2018-016580" }, { "db": "CNNVD", "id": "CNNVD-202105-1386" }, { "db": "NVD", "id": "CVE-2020-36330" } ] }, "description": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "A flaw was found in libwebp in versions before 1.0.1. An out-of-bounds read was found in function ChunkVerifyAndAssign. The highest threat from this vulnerability is to data confidentiality and to the service availability. libwebp Is vulnerable to an out-of-bounds read.Information is obtained and denial of service (DoS) It may be put into a state. Bugs fixed (https://bugzilla.redhat.com/):\n\n1944888 - CVE-2021-21409 netty: Request smuggling via content-length header\n2004133 - CVE-2021-37136 netty-codec: Bzip2Decoder doesn\u0027t allow setting size restrictions for decompressed data\n2004135 - CVE-2021-37137 netty-codec: SnappyFrameDecoder doesn\u0027t restrict chunk length and may buffer skippable chunks in an unnecessary way\n2030932 - CVE-2021-44228 log4j-core: Remote code execution in Log4j 2.x when logs contain an attacker-controlled string value\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nLOG-1775 - [release-5.2] Syslog output is serializing json incorrectly\nLOG-1824 - [release-5.2] Rejected by Elasticsearch and unexpected json-parsing\nLOG-1963 - [release-5.2] CLO panic: runtime error: slice bounds out of range [:-1]\nLOG-1970 - Applying cluster state is causing elasticsearch to hit an issue and become unusable\n\n6. 
Relevant releases/architectures:\n\nRed Hat Enterprise Linux AppStream (v. 8) - aarch64, ppc64le, s390x, x86_64\n\n3. Description:\n\nThe libwebp packages provide a library and tools for the WebP graphics\nformat. WebP is an image format with a lossy compression of digital\nphotographic images. WebP consists of a codec based on the VP8 format, and\na container based on the Resource Interchange File Format (RIFF). \nWebmasters, web developers and browser developers can use WebP to compress,\narchive, and distribute digital images more efficiently. \n\nAdditional Changes:\n\nFor detailed information on changes in this release, see the Red Hat\nEnterprise Linux 8.5 Release Notes linked from the References section. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n5. Package List:\n\nRed Hat Enterprise Linux AppStream (v. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. Summary:\n\nThe Migration Toolkit for Containers (MTC) 1.6.3 is now available. Description:\n\nThe Migration Toolkit for Containers (MTC) enables you to migrate\nKubernetes resources, persistent volume data, and internal container images\nbetween OpenShift Container Platform clusters, using the MTC web console or\nthe Kubernetes API. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n2019088 - \"MigrationController\" CR displays syntax error when unquiescing applications\n2021666 - Route name longer than 63 characters causes direct volume migration to fail\n2021668 - \"MigrationController\" CR ignores the \"cluster_subdomain\" value for direct volume migration routes\n2022017 - CVE-2021-3948 mig-controller: incorrect namespaces handling may lead to not authorized usage of Migration Toolkit for Containers (MTC)\n2024966 - Manifests not used by Operator Lifecycle Manager must be removed from the MTC 1.6 Operator image\n2027196 - \"migration-controller\" pod goes into \"CrashLoopBackoff\" state if an invalid registry route is entered on the \"Clusters\" page of the web console\n2027382 - \"Copy oc describe/oc logs\" window does not close automatically after timeout\n2028841 - \"rsync-client\" container fails during direct volume migration with \"Address family not supported by protocol\" error\n2031793 - \"migration-controller\" pod goes into \"CrashLoopBackOff\" state if \"MigPlan\" CR contains an invalid \"includedResources\" resource\n2039852 - \"migration-controller\" pod goes into \"CrashLoopBackOff\" state if \"MigPlan\" CR contains an invalid \"destMigClusterRef\" or \"srcMigClusterRef\"\n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n1963232 - CVE-2021-33194 golang: x/net/html: infinite loop in ParseFragment\n\n5. 
JIRA issues fixed (https://issues.jboss.org/):\n\nLOG-1168 - Disable hostname verification in syslog TLS settings\nLOG-1235 - Using HTTPS without a secret does not translate into the correct \u0027scheme\u0027 value in Fluentd\nLOG-1375 - ssl_ca_cert should be optional\nLOG-1378 - CLO should support sasl_plaintext(Password over http)\nLOG-1392 - In fluentd config, flush_interval can\u0027t be set with flush_mode=immediate\nLOG-1494 - Syslog output is serializing json incorrectly\nLOG-1555 - Fluentd logs emit transaction failed: error_class=NoMethodError while forwarding to external syslog server\nLOG-1575 - Rejected by Elasticsearch and unexpected json-parsing\nLOG-1735 - Regression introducing flush_at_shutdown \nLOG-1774 - The collector logs should be excluded in fluent.conf\nLOG-1776 - fluentd total_limit_size sets value beyond available space\nLOG-1822 - OpenShift Alerting Rules Style-Guide Compliance\nLOG-1859 - CLO Should not error and exit early on missing ca-bundle when cluster wide proxy is not enabled\nLOG-1862 - Unsupported kafka parameters when enabled Kafka SASL\nLOG-1903 - Fix the Display of ClusterLogging type in OLM\nLOG-1911 - CLF API changes to Opt-in to multiline error detection\nLOG-1918 - Alert `FluentdNodeDown` always firing \nLOG-1939 - Opt-in multiline detection breaks cloudwatch forwarding\n\n6. 
-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA512\n\n- -------------------------------------------------------------------------\nDebian Security Advisory DSA-4930-1 security@debian.org\nhttps://www.debian.org/security/ Moritz Muehlenhoff\nJune 10, 2021 https://www.debian.org/security/faq\n- -------------------------------------------------------------------------\n\nPackage : libwebp\nCVE ID : CVE-2018-25009 CVE-2018-25010 CVE-2018-25011 CVE-2018-25013 \n CVE-2018-25014 CVE-2020-36328 CVE-2020-36329 CVE-2020-36330 \n CVE-2020-36331 CVE-2020-36332\n\nMultiple vulnerabilities were discovered in libwebp, the implementation\nof the WebP image format, which could result in denial of service, memory\ndisclosure or potentially the execution of arbitrary code if malformed\nimages are processed. \n\nFor the stable distribution (buster), these problems have been fixed in\nversion 0.6.1-2+deb10u1. \n\nWe recommend that you upgrade your libwebp packages. \n\nFor the detailed security status of libwebp please refer to\nits security tracker page at:\nhttps://security-tracker.debian.org/tracker/libwebp\n\nFurther information about Debian Security Advisories, how to apply\nthese updates to your system and frequently asked questions can be\nfound at: https://www.debian.org/security/\n\nMailing list: debian-security-announce@lists.debian.org\n-----BEGIN PGP 
SIGNATURE-----\n\niQIzBAEBCgAdFiEEtuYvPRKsOElcDakFEMKTtsN8TjYFAmDCfg0ACgkQEMKTtsN8\nTjaaKBAAqMJfe5aH4Gh14SpB7h2S5JJUK+eo/aPo1tXn7BoLiF4O5g05+McyUOdE\nHI9ibolUfv+HoZlCDC93MBJvopWgd1/oqReHML5n2GXPBESYXpRstL04qwaRqu9g\nAvofhX88EwHefTXmljVTL4W1KgMJuhhPxVLdimxoqd0/hjagZtA7B7R05khigC5k\nnHMFoRogSPjI9H4vI2raYaOqC26zmrZNbk/CRVhuUbtDOG9qy9okjc+6KM9RcbXC\nha++EhrGXPjCg5SwrQAZ50nW3Jwif2WpSeULfTrqHr2E8nHGUCHDMMtdDwegFH/X\nFK0dVaNPgrayw1Dji+fhBQz3qR7pl/1DK+gsLtREafxY0+AxZ57kCi51CykT/dLs\neC4bOPaoho91KuLFrT+X/AyAASS/00VuroFJB4sWQUvEpBCnWPUW1m3NvjsyoYuj\n0wmQMVM5Bb/aYuWAM+/V9MeoklmtIn+OPAXqsVvLxdbB0GScwJV86/NvsN6Nde6c\ntwImfMCK1V75FPrIsxx37M52AYWvALgXbWoVi4aQPyPeDerQdgUPL1FzTGzem0NQ\nPnXhuE27H/pJz79DosW8md0RFr+tfPgZ8CeTirXSUUXFiqhcXR/w1lqN2vlmfm8V\ndmwgzvu9A7ZhG++JRqbbMx2D+NS4coGgRdA7XPuRrdNKniRIDhQ=\n=pN/j\n-----END PGP SIGNATURE-----\n. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n==================================================================== \nRed Hat Security Advisory\n\nSynopsis: Important: OpenShift Container Platform 4.11.0 bug fix and security update\nAdvisory ID: RHSA-2022:5069-01\nProduct: Red Hat OpenShift Enterprise\nAdvisory URL: https://access.redhat.com/errata/RHSA-2022:5069\nIssue date: 2022-08-10\nCVE Names: CVE-2018-25009 CVE-2018-25010 CVE-2018-25012\n CVE-2018-25013 CVE-2018-25014 CVE-2018-25032\n CVE-2019-5827 CVE-2019-13750 CVE-2019-13751\n CVE-2019-17594 CVE-2019-17595 CVE-2019-18218\n CVE-2019-19603 CVE-2019-20838 CVE-2020-13435\n CVE-2020-14155 CVE-2020-17541 CVE-2020-19131\n CVE-2020-24370 CVE-2020-28493 CVE-2020-35492\n CVE-2020-36330 CVE-2020-36331 CVE-2020-36332\n CVE-2021-3481 CVE-2021-3580 CVE-2021-3634\n CVE-2021-3672 CVE-2021-3695 CVE-2021-3696\n CVE-2021-3697 CVE-2021-3737 CVE-2021-4115\n CVE-2021-4156 CVE-2021-4189 CVE-2021-20095\n CVE-2021-20231 CVE-2021-20232 CVE-2021-23177\n CVE-2021-23566 CVE-2021-23648 CVE-2021-25219\n CVE-2021-31535 CVE-2021-31566 CVE-2021-36084\n CVE-2021-36085 CVE-2021-36086 CVE-2021-36087\n CVE-2021-38185 
CVE-2021-38593 CVE-2021-40528\n CVE-2021-41190 CVE-2021-41617 CVE-2021-42771\n CVE-2021-43527 CVE-2021-43818 CVE-2021-44225\n CVE-2021-44906 CVE-2022-0235 CVE-2022-0778\n CVE-2022-1012 CVE-2022-1215 CVE-2022-1271\n CVE-2022-1292 CVE-2022-1586 CVE-2022-1621\n CVE-2022-1629 CVE-2022-1706 CVE-2022-1729\n CVE-2022-2068 CVE-2022-2097 CVE-2022-21698\n CVE-2022-22576 CVE-2022-23772 CVE-2022-23773\n CVE-2022-23806 CVE-2022-24407 CVE-2022-24675\n CVE-2022-24903 CVE-2022-24921 CVE-2022-25313\n CVE-2022-25314 CVE-2022-26691 CVE-2022-26945\n CVE-2022-27191 CVE-2022-27774 CVE-2022-27776\n CVE-2022-27782 CVE-2022-28327 CVE-2022-28733\n CVE-2022-28734 CVE-2022-28735 CVE-2022-28736\n CVE-2022-28737 CVE-2022-29162 CVE-2022-29810\n CVE-2022-29824 CVE-2022-30321 CVE-2022-30322\n CVE-2022-30323 CVE-2022-32250\n====================================================================\n1. Summary:\n\nRed Hat OpenShift Container Platform release 4.11.0 is now available with\nupdates to packages and images that fix several bugs and add enhancements. \n\nThis release includes a security update for Red Hat OpenShift Container\nPlatform 4.11. \n\nRed Hat Product Security has rated this update as having a security impact\nof Important. A Common Vulnerability Scoring System (CVSS) base score,\nwhich gives a detailed severity rating, is available for each vulnerability\nfrom the CVE link(s) in the References section. \n\n2. Description:\n\nRed Hat OpenShift Container Platform is Red Hat\u0027s cloud computing\nKubernetes application platform solution designed for on-premise or private\ncloud deployments. \n\nThis advisory contains the container images for Red Hat OpenShift Container\nPlatform 4.11.0. See the following advisory for the RPM packages for this\nrelease:\n\nhttps://access.redhat.com/errata/RHSA-2022:5068\n\nSpace precludes documenting all of the container images in this advisory. 
\nSee the following Release Notes documentation, which will be updated\nshortly for this release, for details about these changes:\n\nhttps://docs.openshift.com/container-platform/4.11/release_notes/ocp-4-11-release-notes.html\n\nSecurity Fix(es):\n\n* go-getter: command injection vulnerability (CVE-2022-26945)\n* go-getter: unsafe download (issue 1 of 3) (CVE-2022-30321)\n* go-getter: unsafe download (issue 2 of 3) (CVE-2022-30322)\n* go-getter: unsafe download (issue 3 of 3) (CVE-2022-30323)\n* nanoid: Information disclosure via valueOf() function (CVE-2021-23566)\n* sanitize-url: XSS (CVE-2021-23648)\n* minimist: prototype pollution (CVE-2021-44906)\n* node-fetch: exposure of sensitive information to an unauthorized actor\n(CVE-2022-0235)\n* prometheus/client_golang: Denial of service using\nInstrumentHandlerCounter (CVE-2022-21698)\n* golang: crash in a golang.org/x/crypto/ssh server (CVE-2022-27191)\n* go-getter: writes SSH credentials into logfile, exposing sensitive\ncredentials to local uses (CVE-2022-29810)\n* opencontainers: OCI manifest and index parsing confusion (CVE-2021-41190)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. 

You may download the oc tool and use it to inspect release image metadata
as follows:

(For x86_64 architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.11.0-x86_64

The image digest is sha256:300bce8246cf880e792e106607925de0a404484637627edf5f517375517d54a4

(For aarch64 architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.11.0-aarch64

The image digest is sha256:29fa8419da2afdb64b5475d2b43dad8cc9205e566db3968c5738e7a91cf96dfe

(For s390x architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.11.0-s390x

The image digest is sha256:015d6180238b4024d11dfef6751143619a0458eccfb589f2058ceb1a6359dd46

(For ppc64le architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.11.0-ppc64le

The image digest is sha256:5052f8d5597c6656ca9b6bfd3de521504c79917aa80feb915d3c8546241f86ca

All OpenShift Container Platform 4.11 users are advised to upgrade to these
updated packages and images when they are available in the appropriate
release channel. To check for available updates, use the OpenShift Console
or the CLI oc command. Instructions for upgrading a cluster are available at
https://docs.openshift.com/container-platform/4.11/updating/updating-cluster-cli.html

3. Solution:

For OpenShift Container Platform 4.11 see the following documentation,
which will be updated shortly for this release, for important instructions
on how to upgrade your cluster and fully apply this asynchronous errata
update:

https://docs.openshift.com/container-platform/4.11/release_notes/ocp-4-11-release-notes.html

Details on how to access this content are available at
https://docs.openshift.com/container-platform/4.11/updating/updating-cluster-cli.html

4. Bugs fixed (https://bugzilla.redhat.com/):

1817075 - MCC & MCO don't free leader leases during shut down -> 10 minutes of leader election timeouts
1822752 - cluster-version operator stops applying manifests when blocked by a precondition check
1823143 - oc adm release extract --command, --tools doesn't pull from localregistry when given a localregistry/image
1858418 - [OCPonRHV] OpenShift installer fails when Blank template is missing in oVirt/RHV
1859153 - [AWS] An IAM error occurred occasionally during the installation phase: Invalid IAM Instance Profile name
1896181 - [ovirt] install fails: due to terraform error "Cannot run VM. VM is being updated" on vm resource
1898265 - [OCP 4.5][AWS] Installation failed: error updating LB Target Group
1902307 - [vSphere] cloud labels management via cloud provider makes nodes not ready
1905850 - `oc adm policy who-can` failed to check the `operatorcondition/status` resource
1916279 - [OCPonRHV] Sometimes terraform installation fails on -failed to fetch Cluster(another terraform bug)
1917898 - [ovirt] install fails: due to terraform error "Tag not matched: expect <fault> but got <html>" on vm resource
1918005 - [vsphere] If there are multiple port groups with the same name installation fails
1918417 - IPv6 errors after exiting crictl
1918690 - Should update the KCM resource-graph timely with the latest configure
1919980 - oVirt installer fails due to terraform error "Failed to wait for Templte(...) to become ok"
1921182 - InspectFailed: kubelet Failed to inspect image: rpc error: code = DeadlineExceeded desc = context deadline exceeded
1923536 - Image pullthrough does not pass 429 errors back to capable clients
1926975 - [aws-c2s] kube-apiserver crashloops due to missing cloud config
1928932 - deploy/route_crd.yaml in openshift/router uses deprecated v1beta1 CRD API
1932812 - Installer uses the terraform-provider in the Installer's directory if it exists
1934304 - MemoryPressure Top Pod Consumers seems to be 2x expected value
1943937 - CatalogSource incorrect parsing validation
1944264 - [ovn] CNO should gracefully terminate OVN databases
1944851 - List of ingress routes not cleaned up when routers no longer exist - take 2
1945329 - In k8s 1.21 bump conntrack 'should drop INVALID conntrack entries' tests are disabled
1948556 - Cannot read property 'apiGroup' of undefined error viewing operator CSV
1949827 - Kubelet bound to incorrect IPs, referring to incorrect NICs in 4.5.x
1957012 - Deleting the KubeDescheduler CR does not remove the corresponding deployment or configmap
1957668 - oc login does not show link to console
1958198 - authentication operator takes too long to pick up a configuration change
1958512 - No 1.25 shown in REMOVEDINRELEASE for apis audited with k8s.io/removed-release 1.25 and k8s.io/deprecated true
1961233 - Add CI test coverage for DNS availability during upgrades
1961844 - baremetal ClusterOperator installed by CVO does not have relatedObjects
1965468 - [OSP] Delete volume snapshots based on cluster ID in their metadata
1965934 - can not get new result with "Refresh off" if click "Run queries" again
1965969 - [aws] the public hosted zone id is not correct in the destroy log, while destroying a cluster which is using BYO private hosted zone.
1968253 - GCP CSI driver can provision volume with access mode ROX
1969794 - [OSP] Document how to use image registry PVC backend with custom availability zones
1975543 - [OLM] Remove stale cruft installed by CVO in earlier releases
1976111 - [tracker] multipathd.socket is missing start conditions
1976782 - Openshift registry starts to segfault after S3 storage configuration
1977100 - Pod failed to start with message "set CPU load balancing: readdirent /proc/sys/kernel/sched_domain/cpu66/domain0: no such file or directory"
1978303 - KAS pod logs show: [SHOULD NOT HAPPEN] ...failed to convert new object...CertificateSigningRequest) to smd typed: .status.conditions: duplicate entries for key [type=\"Approved\"]
1978798 - [Network Operator] Upgrade: The configuration to enable network policy ACL logging is missing on the cluster upgraded from 4.7->4.8
1979671 - Warning annotation for pods with cpu requests or limits on single-node OpenShift cluster without workload partitioning
1982737 - OLM does not warn on invalid CSV
1983056 - IP conflict while recreating Pod with fixed name
1984785 - LSO CSV does not contain disconnected annotation
1989610 - Unsupported data types should not be rendered on operand details page
1990125 - co/image-registry is degrade because ImagePrunerDegraded: Job has reached the specified backoff limit
1990384 - 502 error on "Observe -> Alerting" UI after disabled local alertmanager
1992553 - all the alert rules' annotations "summary" and "description" should comply with the OpenShift alerting guidelines
1994117 - Some hardcodes are detected at the code level in orphaned code
1994820 - machine controller doesn't send vCPU quota failed messages to cluster install logs
1995953 - Ingresscontroller change the replicas to scaleup first time will be rolling update for all the ingress pods
1996544 - AWS region ap-northeast-3 is missing in installer prompt
1996638 - Helm operator manager container restart when CR is creating&deleting
1997120 - test_recreate_pod_in_namespace fails - Timed out waiting for namespace
1997142 - OperatorHub: Filtering the OperatorHub catalog is extremely slow
1997704 - [osp][octavia lb] given loadBalancerIP is ignored when creating a LoadBalancer type svc
1999325 - FailedMount MountVolume.SetUp failed for volume "kube-api-access" : object "openshift-kube-scheduler"/"kube-root-ca.crt" not registered
1999529 - Must gather fails to gather logs for all the namespace if server doesn't have volumesnapshotclasses resource
1999891 - must-gather collects backup data even when Pods fails to be created
2000653 - Add hypershift namespace to exclude namespaces list in descheduler configmap
2002009 - IPI Baremetal, qemu-convert takes to long to save image into drive on slow/large disks
2002602 - Storageclass creation page goes blank when "Enable encryption" is clicked if there is a syntax error in the configmap
2002868 - Node exporter not able to scrape OVS metrics
2005321 - Web Terminal is not opened on Stage of DevSandbox when terminal instance is not created yet
2005694 - Removing proxy object takes up to 10 minutes for the changes to propagate to the MCO
2006067 - Objects are not valid as a React child
2006201 - ovirt-csi-driver-node pods are crashing intermittently
2007246 - Openshift Container Platform - Ingress Controller does not set allowPrivilegeEscalation in the router deployment
2007340 - Accessibility issues on topology - list view
2007611 - TLS issues with the internal registry and AWS S3 bucket
2007647 - oc adm release info --changes-from does not show changes in repos that squash-merge
2008486 - Double scroll bar shows up on dragging the task quick search to the bottom
2009345 - Overview page does not load from openshift console for some set of users after upgrading to 4.7.19
2009352 - Add image-registry usage metrics to telemeter
2009845 - Respect overrides changes during installation
2010361 - OpenShift Alerting Rules Style-Guide Compliance
2010364 - OpenShift Alerting Rules Style-Guide Compliance
2010393 - [sig-arch][Late] clients should not use APIs that are removed in upcoming releases [Suite:openshift/conformance/parallel]
2011525 - Rate-limit incoming BFD to prevent ovn-controller DoS
2011895 - Details about cloud errors are missing from PV/PVC errors
2012111 - LSO still try to find localvolumeset which is already deleted
2012969 - need to figure out why osupdatedstart to reboot is zero seconds
2013144 - Developer catalog category links could not be open in a new tab (sharing and open a deep link works fine)
2013461 - Import deployment from Git with s2i expose always port 8080 (Service and Pod template, not Route) if another Route port is selected by the user
2013734 - unable to label downloads route in openshift-console namespace
2013822 - ensure that the `container-tools` content comes from the RHAOS plashets
2014161 - PipelineRun logs are delayed and stuck on a high log volume
2014240 - Image registry uses ICSPs only when source exactly matches image
2014420 - Topology page is crashed
2014640 - Cannot change storage class of boot disk when cloning from template
2015023 - Operator objects are re-created even after deleting it
2015042 - Adding a template from the catalog creates a secret that is not owned by the TemplateInstance
2015356 - Different status shows on VM list page and details page
2015375 - PVC creation for ODF/IBM Flashsystem shows incorrect types
2015459 - [azure][openstack]When image registry configure an invalid proxy, registry pods are CrashLoopBackOff
2015800 - [IBM]Shouldn't change status.storage.bucket and status.storage.resourceKeyCRN when update sepc.stroage,ibmcos with invalid value
2016425 - Adoption controller generating invalid metadata.Labels for an already adopted Subscription resource
2016534 - externalIP does not work when egressIP is also present
2017001 - Topology context menu for Serverless components always open downwards
2018188 - VRRP ID conflict between keepalived-ipfailover and cluster VIPs
2018517 - [sig-arch] events should not repeat pathologically expand_less failures - s390x CI
2019532 - Logger object in LSO does not log source location accurately
2019564 - User settings resources (ConfigMap, Role, RB) should be deleted when a user is deleted
2020483 - Parameter $__auto_interval_period is in Period drop-down list
2020622 - e2e-aws-upi and e2e-azure-upi jobs are not working
2021041 - [vsphere] Not found TagCategory when destroying ipi cluster
2021446 - openshift-ingress-canary is not reporting DEGRADED state, even though the canary route is not available and accessible
2022253 - Web terminal view is broken
2022507 - Pods stuck in OutOfpods state after running cluster-density
2022611 - Remove BlockPools(no use case) and Object(redundat with Overview) tab on the storagesystem page for NooBaa only and remove BlockPools tab for External mode deployment
2022745 - Cluster reader is not able to list NodeNetwork* objects
2023295 - Must-gather tool gathering data from custom namespaces.
2023691 - ClusterIP internalTrafficPolicy does not work for ovn-kubernetes
2024427 - oc completion zsh doesn't auto complete
2024708 - The form for creating operational CRs is badly rendering filed names ("obsoleteCPUs" -> "Obsolete CP Us" )
2024821 - [Azure-File-CSI] need more clear info when requesting pvc with volumeMode Block
2024938 - CVE-2021-41190 opencontainers: OCI manifest and index parsing confusion
2025624 - Ingress router metrics endpoint serving old certificates after certificate rotation
2026356 - [IPI on Azure] The bootstrap machine type should be same as master
2026461 - Completed pods in Openshift cluster not releasing IP addresses and results in err: range is full unless manually deleted
2027603 - [UI] Dropdown doesn't close on it's own after arbiter zone selection on 'Capacity and nodes' page
2027613 - Users can't silence alerts from the dev console
2028493 - OVN-migration failed - ovnkube-node: error waiting for node readiness: timed out waiting for the condition
2028532 - noobaa-pg-db-0 pod stuck in Init:0/2
2028821 - Misspelled label in ODF management UI - MCG performance view
2029438 - Bootstrap node cannot resolve api-int because NetworkManager replaces resolv.conf
2029470 - Recover from suddenly appearing old operand revision WAS: kube-scheduler-operator test failure: Node's not achieving new revision
2029797 - Uncaught exception: ResizeObserver loop limit exceeded
2029835 - CSI migration for vSphere: Inline-volume tests failing
2030034 - prometheusrules.openshift.io: dial tcp: lookup prometheus-operator.openshift-monitoring.svc on 172.30.0.10:53: no such host
2030530 - VM created via customize wizard has single quotation marks surrounding its password
2030733 - wrong IP selected to connect to the nodes when ExternalCloudProvider enabled
2030776 - e2e-operator always uses quay master images during presubmit tests
2032559 - CNO allows migration to dual-stack in unsupported configurations
2032717 - Unable to download ignition after coreos-installer install --copy-network
2032924 - PVs are not being cleaned up after PVC deletion
2033482 - [vsphere] two variables in tf are undeclared and get warning message during installation
2033575 - monitoring targets are down after the cluster run for more than 1 day
2033711 - IBM VPC operator needs e2e csi tests for ibmcloud
2033862 - MachineSet is not scaling up due to an OpenStack error trying to create multiple ports with the same MAC address
2034147 - OpenShift VMware IPI Installation fails with Resource customization when corespersocket is unset and vCPU count is not a multiple of 4
2034296 - Kubelet and Crio fails to start during upgrde to 4.7.37
2034411 - [Egress Router] No NAT rules for ipv6 source and destination created in ip6tables-save
2034688 - Allow Prometheus/Thanos to return 401 or 403 when the request isn't authenticated
2034958 - [sig-network] Conntrack should be able to preserve UDP traffic when initial unready endpoints get ready
2035005 - MCD is not always removing in progress taint after a successful update
2035334 - [RFE] [OCPonRHV] Provision machines with preallocated disks
2035899 - Operator-sdk run bundle doesn't support arm64 env
2036202 - Bump podman to >= 3.3.0 so that setup of multiple credentials for a single registry which can be distinguished by their path will work
2036594 - [MAPO] Machine goes to failed state due to a momentary error of the cluster etcd
2036948 - SR-IOV Network Device Plugin should handle offloaded VF instead of supporting only PF
2037190 - dns operator status flaps between True/False/False and True/True/(False|True) after updating dnses.operator.openshift.io/default
2037447 - Ingress Operator is not closing TCP connections.
2037513 - I/O metrics from the Kubernetes/Compute Resources/Cluster Dashboard show as no datapoints found
2037542 - Pipeline Builder footer is not sticky and yaml tab doesn't use full height
2037610 - typo for the Terminated message from thanos-querier pod description info
2037620 - Upgrade playbook should quit directly when trying to upgrade RHEL-7 workers to 4.10
2037625 - AppliedClusterResourceQuotas can not be shown on project overview
2037626 - unable to fetch ignition file when scaleup rhel worker nodes on cluster enabled Tang disk encryption
2037628 - Add test id to kms flows for automation
2037721 - PodDisruptionBudgetAtLimit alert fired in SNO cluster
2037762 - Wrong ServiceMonitor definition is causing failure during Prometheus configuration reload and preventing changes from being applied
2037841 - [RFE] use /dev/ptp_hyperv on Azure/AzureStack
2038115 - Namespace and application bar is not sticky anymore
2038244 - Import from git ignore the given servername and could not validate On-Premises GitHub and BitBucket installations
2038405 - openshift-e2e-aws-workers-rhel-workflow in CI step registry broken
2038774 - IBM-Cloud OVN IPsec fails, IKE UDP ports and ESP protocol not in security group
2039135 - the error message is not clear when using "opm index prune" to prune a file-based index image
2039161 - Note about token for encrypted PVCs should be removed when only cluster wide encryption checkbox is selected
2039253 - ovnkube-node crashes on duplicate endpoints
2039256 - Domain validation fails when TLD contains a digit.
2039277 - Topology list view items are not highlighted on keyboard navigation
2039462 - Application tab in User Preferences dropdown menus are too wide.
2039477 - validation icon is missing from Import from git
2039589 - The toolbox command always ignores [command] the first time
2039647 - Some developer perspective links are not deep-linked causes developer to sometimes delete/modify resources in the wrong project
2040180 - Bug when adding a new table panel to a dashboard for OCP UI with only one value column
2040195 - Ignition fails to enable systemd units with backslash-escaped characters in their names
2040277 - ThanosRuleNoEvaluationFor10Intervals alert description is wrong
2040488 - OpenShift-Ansible BYOH Unit Tests are Broken
2040635 - CPU Utilisation is negative number for "Kubernetes / Compute Resources / Cluster" dashboard
2040654 - 'oc adm must-gather -- some_script' should exit with same non-zero code as the failed 'some_script' exits
2040779 - Nodeport svc not accessible when the backend pod is on a window node
2040933 - OCP 4.10 nightly build will fail to install if multiple NICs are defined on KVM nodes
2041133 - 'oc explain route.status.ingress.conditions' shows type 'Currently only Ready' but actually is 'Admitted'
2041454 - Garbage values accepted for `--reference-policy` in `oc import-image` without any error
2041616 - Ingress operator tries to manage DNS of additional ingresscontrollers that are not under clusters basedomain, which can't work
2041769 - Pipeline Metrics page not showing data for normal user
2041774 - Failing git detection should not recommend Devfiles as import strategy
2041814 - The KubeletConfigController wrongly process multiple confs for a pool
2041940 - Namespace pre-population not happening till a Pod is created
2042027 - Incorrect feedback for "oc label pods --all"
2042348 - Volume ID is missing in output message when expanding volume which is not mounted.
2042446 - CSIWithOldVSphereHWVersion alert recurring despite upgrade to vmx-15
2042501 - use lease for leader election
2042587 - ocm-operator: Improve reconciliation of CA ConfigMaps
2042652 - Unable to deploy hw-event-proxy operator
2042838 - The status of container is not consistent on Container details and pod details page
2042852 - Topology toolbars are unaligned to other toolbars
2042999 - A pod cannot reach kubernetes.default.svc.cluster.local cluster IP
2043035 - Wrong error code provided when request contains invalid argument
2043068 - <x> available of <y> text disappears in Utilization item if x is 0
2043080 - openshift-installer intermittent failure on AWS with Error: InvalidVpcID.NotFound: The vpc ID 'vpc-123456789' does not exist
2043094 - ovnkube-node not deleting stale conntrack entries when endpoints go away
2043118 - Host should transition through Preparing when HostFirmwareSettings changed
2043132 - Add a metric when vsphere csi storageclass creation fails
2043314 - `oc debug node` does not meet compliance requirement
2043336 - Creating multi SriovNetworkNodePolicy cause the worker always be draining
2043428 - Address Alibaba CSI driver operator review comments
2043533 - Update ironic, inspector, and ironic-python-agent to latest bugfix release
2043672 - [MAPO] root volumes not working
2044140 - When 'oc adm upgrade --to-image ...' rejects an update as not recommended, it should mention --allow-explicit-upgrade
2044207 - [KMS] The data in the text box does not get cleared on switching the authentication method
2044227 - Test Managed cluster should only include cluster daemonsets that have maxUnavailable update of 10 or 33 percent fails
2044412 - Topology list misses separator lines and hover effect let the list jump 1px
2044421 - Topology list does not allow selecting an application group anymore
2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor
2044803 - Unify button text style on VM tabs
2044824 - Failing test in periodics: [sig-network] Services should respect internalTrafficPolicy=Local Pod and Node, to Pod (hostNetwork: true) [Feature:ServiceInternalTrafficPolicy] [Skipped:Network/OVNKubernetes] [Suite:openshift/conformance/parallel] [Suite:k8s]
2045065 - Scheduled pod has nodeName changed
2045073 - Bump golang and build images for local-storage-operator
2045087 - Failed to apply sriov policy on intel nics
2045551 - Remove enabled FeatureGates from TechPreviewNoUpgrade
2045559 - API_VIP moved when kube-api container on another master node was stopped
2045577 - [ocp 4.9 | ovn-kubernetes] ovsdb_idl|WARN|transaction error: {"details":"cannot delete Datapath_Binding row 29e48972-xxxx because of 2 remaining reference(s)","error":"referential integrity violation
2045872 - SNO: cluster-policy-controller failed to start due to missing serving-cert/tls.crt
2045880 - CVE-2022-21698 prometheus/client_golang: Denial of service using InstrumentHandlerCounter
2046133 - [MAPO]IPI proxy installation failed
2046156 - Network policy: preview of affected pods for non-admin shows empty popup
2046157 - Still uses pod-security.admission.config.k8s.io/v1alpha1 in admission plugin config
2046191 - Opeartor pod is missing correct qosClass and priorityClass
2046277 - openshift-installer intermittent failure on AWS with "Error: Provider produced inconsistent result after apply" when creating the module.vpc.aws_subnet.private_subnet[0] resource
2046319 - oc debug cronjob command failed with error "unable to extract pod template from type *v1.CronJob".
2046435 - Better Devfile Import Strategy support in the 'Import from Git' flow
2046496 - Awkward wrapping of project toolbar on mobile
2046497 - Re-enable TestMetricsEndpoint test case in console operator e2e tests
2046498 - "All Projects" and "all applications" use different casing on topology page
2046591 - Auto-update boot source is not available while create new template from it
2046594 - "Requested template could not be found" while creating VM from user-created template
2046598 - Auto-update boot source size unit is byte on customize wizard
2046601 - Cannot create VM from template
2046618 - Start last run action should contain current user name in the started-by annotation of the PLR
2046662 - Should upgrade the go version to be 1.17 for example go operator memcached-operator
2047197 - Sould upgrade the operator_sdk.util version to "0.4.0" for the "osdk_metric" module
2047257 - [CP MIGRATION] Node drain failure during control plane node migration
2047277 - Storage status is missing from status card of virtualization overview
2047308 - Remove metrics and events for master port offsets
2047310 - Running VMs per template card needs empty state when no VMs exist
2047320 - New route annotation to show another URL or hide topology URL decorator doesn't work for Knative Services
2047335 - 'oc get project' caused 'Observed a panic: cannot deep copy core.NamespacePhase' when AllRequestBodies is used
2047362 - Removing prometheus UI access breaks origin test
2047445 - ovs-configure mis-detecting the ipv6 status on IPv4 only cluster causing Deployment failure
2047670 - Installer should pre-check that the hosted zone is not associated with the VPC and throw the error message.
2047702 - Issue described on bug #2013528 reproduced: mapi_current_pending_csr is always set to 1 on OpenShift Container Platform 4.8
2047710 - [OVN] ovn-dbchecker CrashLoopBackOff and sbdb jsonrpc unix socket receive error
2047732 - [IBM]Volume is not deleted after destroy cluster
2047741 - openshift-installer intermittent failure on AWS with "Error: Provider produced inconsistent result after apply" when creating the module.masters.aws_network_interface.master[1] resource
2047790 - [sig-network][Feature:Router] The HAProxy router should override the route host for overridden domains with a custom value [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2047799 - release-openshift-ocp-installer-e2e-aws-upi-4.9
2047870 - Prevent redundant queries of BIOS settings in HostFirmwareController
2047895 - Fix architecture naming in oc adm release mirror for aarch64
2047911 - e2e: Mock CSI tests fail on IBM ROKS clusters
2047913 - [sig-network][Feature:Router] The HAProxy router should override the route host for overridden domains with a custom value [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2047925 - [FJ OCP4.10 Bug]: IRONIC_KERNEL_PARAMS does not contain coreos_kernel_params during iPXE boot
2047935 - [4.11] Bootimage bump tracker
2047998 - [alicloud] CCM deploys alibaba-cloud-controller-manager from quay.io/openshift/origin-*
2048059 - Service Level Agreement (SLA) always show 'Unknown'
2048067 - [IPI on Alibabacloud] "Platform Provisioning Check" tells '"ap-southeast-6": enhanced NAT gateway is not supported', which seems false
2048186 - Image registry operator panics when finalizes config deletion
2048214 - Can not push images to image-registry when enabling KMS encryption in AlibabaCloud
2048219 - MetalLB: User should not be allowed add same bgp advertisement twice in BGP address pool
2048221 - Capitalization of titles in the VM details page is inconsistent.
2048222 - [AWS GovCloud] Cluster can not be installed on AWS GovCloud regions via terminal interactive UI.
2048276 - Cypress E2E tests fail due to a typo in test-cypress.sh
2048333 - prometheus-adapter becomes inaccessible during rollout
2048352 - [OVN] node does not recover after NetworkManager restart, NotReady and unreachable
2048442 - [KMS] UI does not have option to specify kube auth path and namespace for cluster wide encryption
2048451 - Custom serviceEndpoints in install-config are reported to be unreachable when environment uses a proxy
2048538 - Network policies are not implemented or updated by OVN-Kubernetes
2048541 - incorrect rbac check for install operator quick starts
2048563 - Leader election conventions for cluster topology
2048575 - IP reconciler cron job failing on single node
2048686 - Check MAC address provided on the install-config.yaml file
2048687 - All bare metal jobs are failing now due to End of Life of centos 8
2048793 - Many Conformance tests are failing in OCP 4.10 with Kuryr
2048803 - CRI-O seccomp profile out of date
2048824 - [IBMCloud] ibm-vpc-block-csi-node does not specify an update strategy, only resource requests, or priority class
2048841 - [ovn] Missing lr-policy-list and snat rules for egressip when new pods are added
2048955 - Alibaba Disk CSI Driver does not have CI
2049073 - AWS EFS CSI driver should use the trusted CA bundle when cluster proxy is configured
2049078 - Bond CNI: Failed to attach Bond NAD to pod
2049108 - openshift-installer intermittent failure on AWS with 'Error: Error waiting for NAT Gateway (nat-xxxxx) to become available'
2049117 - e2e-metal-ipi-serial-ovn-ipv6 is failing frequently
2049133 - oc adm catalog mirror throws 'missing signature key' error when using file://local/index
2049142 - Missing "app" label
2049169 - oVirt CSI driver should use the trusted CA bundle when cluster proxy is configured
2049234 - ImagePull fails with error "unable to pull manifest from example.com/busy.box:v5 invalid reference format"
2049410 - external-dns-operator creates provider section, even when not requested
2049483 - Sidepanel for Connectors/workloads in topology shows invalid tabs
2049613 - MTU migration on SDN IPv4 causes API alerts
2049671 - system:serviceaccount:openshift-cluster-csi-drivers:aws-ebs-csi-driver-operator trying to GET and DELETE /api/v1/namespaces/openshift-cluster-csi-drivers/configmaps/kube-cloud-config which does not exist
2049687 - superfluous apirequestcount entries in audit log
2049775 - cloud-provider-config change not applied when ExternalCloudProvider enabled
2049787 - (dummy bug) ovn-kubernetes ExternalTrafficPolicy still SNATs
2049832 - ContainerCreateError when trying to launch large (>500) numbers of pods across nodes
2049872 - cluster storage operator AWS credentialsrequest lacks KMS privileges
2049889 - oc new-app --search nodejs warns about access to sample content on quay.io
2050005 - Plugin module IDs can clash with console module IDs causing runtime errors
2050011 - Observe > Metrics page: Timespan text input and dropdown do not align
2050120 - Missing metrics in kube-state-metrics
2050146 - Installation on PSI fails with: 'openstack platform does not have the required standard-attr-tag network extension'
2050173 - [aws-ebs-csi-driver] Merge upstream changes since v1.2.0
2050180 - [aws-efs-csi-driver] Merge upstream changes since v1.3.2
2050300 - panic in cluster-storage-operator while updating status
2050332 - Malformed ClusterClaim lifetimes cause the clusterclaims-controller to silently fail to reconcile all clusterclaims
2050335 - azure-disk failed to mount with error special device does not exist
2050345 - alert data for burn budget needs to be updated to prevent regression
2050407 - revert "force cert rotation every couple days for development" in 4.11
2050409 - ip-reconcile job is failing consistently
2050452 - Update osType and hardware version used by RHCOS OVA to indicate it is a RHEL 8 guest
2050466 - machine config update with invalid container runtime config should be more robust
2050637 - Blog Link not re-directing to the intented website in the last modal in the Dev Console Onboarding Tour
2050698 - After upgrading the cluster the console still show 0 of N, 0% progress for worker nodes
2050707 - up test for prometheus pod look to far in the past
2050767 - Vsphere upi tries to access vsphere during manifests generation phase
2050853 - CVE-2021-23566 nanoid: Information disclosure via valueOf() function
2050882 - Crio appears to be coredumping in some scenarios
2050902 - not all resources created during import have common labels
2050946 - Cluster-version operator fails to notice TechPreviewNoUpgrade featureSet change after initialization-lookup error
2051320 - Need to build ose-aws-efs-csi-driver-operator-bundle-container image for 4.11
2051333 - [aws] records in public hosted zone and BYO private hosted zone were not deleted.
2051377 - Unable to switch vfio-pci to netdevice in policy
2051378 - Template wizard is crashed when there are no templates existing
2051423 - migrate loadbalancers from amphora to ovn not working
2051457 - [RFE] PDB for cloud-controller-manager to avoid going too many replicas down
2051470 - prometheus: Add validations for relabel configs
2051558 - RoleBinding in project without subject is causing "Project access" page to fail
2051578 - Sort is broken for the Status and Version columns on the Cluster Settings > ClusterOperators page
2051583 - sriov must-gather image doesn't work
2051593 - Summary Interval Hardcoded in PTP Operator if Set in the Global Body Instead of Command Line
2051611 - Remove Check which enforces summary_interval must match logSyncInterval
2051642 - Remove "Tech-Preview" Label for the Web Terminal GA release
2051657 - Remove 'Tech preview' from minnimal deployment Storage System creation
2051718 - MetaLLB: Validation Webhook: BGPPeer hold time is allowed to be set to less than 3s
2051722 - MetalLB: BGPPeer object does not have ability to set ebgpMultiHop
2051881 - [vSphere CSI driver Operator] RWX volumes counts metrics `vsphere_rwx_volumes_total` not valid
2051954 - Allow changing of policyAuditConfig ratelimit post-deployment
2051969 - Need to build local-storage-operator-metadata-container image for 4.11
2051985 - An APIRequestCount without dots in the name can cause a panic
2052016 - MetalLB: Webhook Validation: Two BGPPeers instances can have different router ID set.
2052034 - Can't start correct debug pod using pod definition yaml in OCP 4.8
2052055 - Whereabouts should implement client-go 1.22+
2052056 - Static pod installer should throttle creating new revisions
2052071 - local storage operator metrics target down after upgrade
2052095 - Infinite OAuth redirect loop post-upgrade to 4.10.0-rc.1
2052270 - FSyncControllerDegraded has "treshold" -> "threshold" typos
2052309 - [IBM Cloud] ibm-vpc-block-csi-controller does not specify an update strategy, priority class, or only resource requests
2052332 - Probe failures and pod restarts during 4.7 to 4.8 upgrade
2052393 - Failed to scaleup RHEL machine against OVN cluster due to jq tool is required by configure-ovs.sh
2052398 - 4.9 to 4.10 upgrade fails for ovnkube-masters
2052415 - Pod density test causing problems when using kube-burner
2052513 - Failing webhooks will block an upgrade to 4.10 mid-way through the upgrade.
2052578 - Create new app from a private git repository using 'oc new app' with basic auth does not work.
2052595 - Remove dev preview badge from IBM FlashSystem deployment windows
2052618 - Node reboot causes duplicate persistent volumes
2052671 - Add Sprint 214 translations
2052674 - Remove extra spaces
2052700 - kube-controller-manger should use configmap lease
2052701 - kube-scheduler should use configmap lease
2052814 - go fmt fails in OSM after migration to go 1.17
2052840 - IMAGE_BUILDER=docker make test-e2e-operator-ocp runs with podman instead of docker
2052953 - Observe dashboard always opens for last viewed workload instead of the selected one
2052956 - Installing virtualization operator duplicates the first action on workloads in topology
2052975 - High cpu load on Juniper Qfx5120 Network switches after upgrade to Openshift 4.8.26
2052986 - Console crashes when Mid cycle hook in Recreate strategy(edit deployment/deploymentConfig) selects Lifecycle strategy as "Tags the current image as an image stream tag if the deployment succeeds"
2053006 - [ibm]Operator storage PROGRESSING and DEGRADED is true during fresh install for ocp4.11
2053104 - [vSphere CSI driver Operator] hw_version_total metric update wrong value after upgrade nodes hardware version from `vmx-13` to `vmx-15`
2053112 - nncp status is unknown when nnce is Progressing
2053118 - nncp Available condition reason should be exposed in `oc get`
2053168 - Ensure the core dynamic plugin SDK package has correct types and code
2053205 - ci-openshift-cluster-network-operator-master-e2e-agnostic-upgrade is failing most of the time
2053304 - Debug terminal no longer works in admin console
2053312 - requestheader IDP test doesn't wait for cleanup, causing high failure rates
2053334 - rhel worker scaleup playbook failed because missing some dependency of podman
2053343 - Cluster Autoscaler not scaling down nodes which seem to qualify for scale-down
2053491 - nmstate interprets interface names as float64 and subsequently crashes on state update
2053501 - Git import detection does not happen for private repositories
2053582 - inability to detect static lifecycle failure
2053596 - [IBM Cloud] Storage IOPS limitations and lack of IPI ETCD deployment options trigger leader election during cluster initialization
2053609 - LoadBalancer SCTP service leaves stale conntrack entry that causes issues if service is recreated
2053622 - PDB warning alert when CR replica count is set to zero
2053685 - Topology performance: Immutable .toJSON consumes a lot of CPU time when rendering a large topology graph (~100 nodes)
2053721 - When using RootDeviceHint rotational setting the host can fail to provision
2053922 - [OCP 4.8][OVN] pod interface: error while waiting on OVS.Interface.external-ids
2054095 - [release-4.11] Gather images.conifg.openshift.io cluster resource definiition
2054197 - The ProjectHelmChartRepositrory schema has merged but has not been initialized in the cluster yet
2054200 - Custom created services in openshift-ingress removed even though the services are not of type LoadBalancer
2054238 - console-master-e2e-gcp-console is broken
2054254 - vSphere test failure: [Serial] [sig-auth][Feature:OAuthServer] [RequestHeaders] [IdP] test RequestHeaders IdP [Suite:openshift/conformance/serial]
2054285 - Services other than knative service also shows as KSVC in add subscription/trigger modal
2054319 - must-gather | gather_metallb_logs can't detect metallb pod
2054351 - Rrestart of ptp4l/phc2sys on change of PTPConfig generates more than one times, socket error in event frame work
2054385 - redhat-operatori ndex image build failed with AMQ brew build - amq-interconnect-operator-metadata-container-1.10.13
2054564 - DPU network operator 4.10 branch need to sync with master
2054630 - cancel create silence from kebab menu of alerts page will navigated to the previous page
2054693 - Error deploying HorizontalPodAutoscaler with oc new-app command in OpenShift 4
2054701 - [MAPO] Events are not created for MAPO machines
2054705 - [tracker] nf_reinject calls nf_queue_entry_free on an already freed entry->state
2054735 - Bad link in CNV console
2054770 - IPI baremetal deployment metal3 pod crashes when using capital letters in hosts bootMACAddress
2054787 - SRO controller goes to CrashLoopBackOff status when the pull-secret does not have the correct permissions
2054950 - A large number is showing on disk size field
2055305 - Thanos Querier high CPU and memory usage till OOM
2055386 - MetalLB changes the shared external IP of a service upon updating the externalTrafficPolicy definition
2055433 - Unable to create br-ex as gateway is not found
2055470 - Ingresscontroller LB scope change behaviour differs for different values of aws-load-balancer-internal annotation
2055492 - The default YAML on vm wizard is not latest
2055601 - installer did not destroy *.app dns recored in a IPI on ASH install
2055702 - Enable Serverless tests in CI
2055723 - CCM operator doesn't deploy resources after enabling TechPreviewNoUpgrade feature set.
2055729 - NodePerfCheck fires and stays active on momentary high latency
2055814 - Custom dynamic exntension point causes runtime and compile time error
2055861 - cronjob collect-profiles failed leads node reach to OutOfpods status
2055980 - [dynamic SDK][internal] console plugin SDK does not support table actions
2056454 - Implement preallocated disks for oVirt in the cluster API provider
2056460 - Implement preallocated disks for oVirt in the OCP installer
2056496 - If image does not exists for builder image then upload jar form crashes
2056519 - unable to install IPI PRIVATE OpenShift cluster in Azure due to organization policies
2056607 - Running kubernetes-nmstate handler e2e tests stuck on OVN clusters
2056752 - Better to named the oc-mirror version info with more information like the `oc version --client`
2056802 - "enforcedLabelLimit|enforcedLabelNameLengthLimit|enforcedLabelValueLengthLimit" do not take effect
2056841 - [UI] [DR] Web console update is available pop-up is seen multiple times on Hub cluster where ODF operator is not installed and unnecessarily it pop-up on the Managed cluster as well where ODF operator is installed
2056893 - incorrect warning for --to-image in oc adm upgrade help
2056967 - MetalLB: speaker metrics is not updated when deleting a service
2057025 - Resource requests for the init-config-reloader container of prometheus-k8s-* pods are too high
2057054 - SDK: k8s methods resolves into Response instead of the Resource
2057079 - [cluster-csi-snapshot-controller-operator] CI failure: events should not repeat pathologically
2057101 - oc commands working with images print an incorrect and inappropriate warning
2057160 - configure-ovs selects wrong interface on reboot
2057183 - OperatorHub: Missing "valid subscriptions" filter
2057251 - response code for Pod count graph changed from 422 to 200 periodically for about 30 minutes if pod is rescheduled
2057358 - [Secondary Scheduler] - cannot build bundle index image using the secondary scheduler operator bundle
2057387 - [Secondary Scheduler] - olm.skiprange, com.redhat.openshift.versions is incorrect and no minkubeversion
2057403 - CMO logs show forbidden: User "system:serviceaccount:openshift-monitoring:cluster-monitoring-operator" cannot get resource "replicasets" in API group "apps" in the namespace "openshift-monitoring"
2057495 - Alibaba Disk CSI driver does not provision small PVCs
2057558 - Marketplace operator polls too frequently for cluster operator status changes
2057633 - oc rsync reports misleading error when container is not found
2057642 - ClusterOperator status.conditions[].reason "etcd disk metrics exceeded..." should be a CamelCase slug
2057644 - FSyncControllerDegraded latches True, even after fsync latency recovers on all members
2057696 - Removing console still blocks OCP install from completing
2057762 - ingress operator should report Upgradeable False to remind user before upgrade to 4.10 when Non-SAN certs are used
2057832 - expr for record rule: "cluster:telemetry_selected_series:count" is improper
2057967 - KubeJobCompletion does not account for possible job states
2057990 - Add extra debug information to image signature workflow test
2057994 - SRIOV-CNI failed to load netconf: LoadConf(): failed to get VF information
2058030 - On OCP 4.10+ using OVNK8s on BM IPI, nodes register as localhost.localdomain
2058217 - [vsphere-problem-detector-operator] 'vsphere_rwx_volumes_total' metric name make confused
2058225 - openshift_csi_share_* metrics are not found from telemeter server
2058282 - Websockets stop updating during cluster upgrades
2058291 - CI builds should have correct version of Kube without needing to push tags everytime
2058368 - Openshift OVN-K got restarted mutilple times with the error " ovsdb-server/memory-trim-on-compaction on'' failed: exit status 1 and " ovndbchecker.go:118] unable to turn on memory trimming for SB DB, stderr " , cluster unavailable
2058370 - e2e-aws-driver-toolkit CI job is failing
2058421 - 4.9.23-s390x-machine-os-content manifest invalid when mirroring content for disconnected install
2058424 - ConsolePlugin proxy always passes Authorization header even if `authorize` property is omitted or false
2058623 - Bootstrap server dropdown menu in Create Event Source- KafkaSource form is empty even if it's created
2058626 - Multiple Azure upstream kube fsgroupchangepolicy tests are permafailing expecting gid "1000" but geting "root"
2058671 - whereabouts IPAM CNI ip-reconciler cronjob specification requires hostnetwork, api-int lb usage & proper backoff
2058692 - [Secondary Scheduler] Creating secondaryscheduler instance fails with error "key failed with : secondaryschedulers.operator.openshift.io "secondary-scheduler" not found"
2059187 - [Secondary Scheduler] - key failed with : serviceaccounts "secondary-scheduler" is forbidden
2059212 - [tracker] Backport https://github.com/util-linux/util-linux/commit/eab90ef8d4f66394285e0cff1dfc0a27242c05aa
2059213 - ART cannot build installer images due to missing terraform binaries for some architectures
2059338 - A fully upgraded 4.10 cluster defaults to HW-13 hardware version even if HW-15 is default (and supported)
2059490 - The operator image in CSV file of the ART DPU network operator bundle is incorrect
2059567 - vMedia based IPI installation of OpenShift fails on Nokia servers due to issues with virtual media attachment and boot source override
2059586 - (release-4.11) Insights operator doesn't reconcile clusteroperator status condition messages
2059654 - Dynamic demo plugin proxy example out of date
2059674 - Demo plugin fails to build
2059716 - cloud-controller-manager flaps operator version during 4.9 -> 4.10 update
2059791 - [vSphere CSI driver Operator] didn't update 'vsphere_csi_driver_error' metric value when fixed the error manually
2059840 - [LSO]Could not gather logs for pod diskmaker-discovery and diskmaker-manager
2059943 - MetalLB: Move CI config files to metallb repo from dev-scripts repo
2060037 - Configure logging level of FRR containers
2060083 - CMO doesn't react to changes in clusteroperator console
2060091 - CMO produces invalid alertmanager statefulset if console cluster .status.consoleURL is unset
2060133 - [OVN RHEL upgrade] could not find IP addresses: failed to lookup link br-ex: Link not found
2060147 - RHEL8 Workers Need to Ensure libseccomp is up to date at install time
2060159 - LGW: External->Service of type ETP=Cluster doesn't go to the node
2060329 - Detect unsupported amount of workloads before rendering a lazy or crashing topology
2060334 - Azure VNET lookup fails when the NIC subnet is in a different resource group
2060361 - Unable to enumerate NICs due to missing the 'primary' field due to security restrictions
2060406 - Test 'operators should not create watch channels very often' fails
2060492 - Update PtpConfigSlave source-crs to use network_transport L2 instead of UDPv4
2060509 - Incorrect installation of ibmcloud vpc csi driver in IBM Cloud ROKS 4.10
2060532 - LSO e2e tests are run against default image and namespace
2060534 - openshift-apiserver pod in crashloop due to unable to reach kubernetes svc ip
2060549 - ErrorAddingLogicalPort: duplicate IP found in ECMP Pod route cache!
2060553 - service domain can't be resolved when networkpolicy is used in OCP 4.10-rc
2060583 - Remove Console internal-kubevirt plugin SDK package
2060605 - Broken access to public images: Unable to connect to the server: no basic auth credentials
2060617 - IBMCloud destroy DNS regex not strict enough
2060687 - Azure Ci: SubscriptionDoesNotSupportZone - does not support availability zones at location 'westus'
2060697 - [AWS] partitionNumber cannot work for specifying Partition number
2060714 - [DOCS] Change source_labels to sourceLabels in "Configuring remote write storage" section
2060837 - [oc-mirror] Catalog merging error when two or more bundles does not have a set Replace field
2060894 - Preceding/Trailing Whitespaces In Form Elements on the add page
2060924 - Console white-screens while using debug terminal
2060968 - Installation failing due to ironic-agent.service not starting properly
2060970 - Bump recommended FCOS to 35.20220213.3.0
2061002 - Conntrack entry is not removed for LoadBalancer IP
2061301 - Traffic Splitting Dialog is Confusing With Only One Revision
2061303 - Cachito request failure with vendor directory is out of sync with go.mod/go.sum
2061304 - workload info gatherer - don't serialize empty images map
2061333 - White screen for Pipeline builder page
2061447 - [GSS] local pv's are in terminating state
2061496 - etcd RecentBackup=Unknown ControllerStarted contains no message string
2061527 - [IBMCloud] infrastructure asset missing CloudProviderType
2061544 - AzureStack is hard-coded to use Standard_LRS for the disk type
2061549 - AzureStack install with internal publishing does not create api DNS record
2061611 - [upstream] The marker of KubeBuilder doesn't work if it is close to the code
2061732 - Cinder CSI crashes when API is not available
2061755 - Missing breadcrumb on the resource creation page
2061833 - A single worker can be assigned to multiple baremetal hosts
2061891 - [IPI on IBMCLOUD] missing 'br-sao' region in openshift installer
2061916 - mixed ingress and egress policies can result in half-isolated pods
2061918 - Topology Sidepanel style is broken
2061919 - Egress Ip entry stays on node's primary NIC post deletion from hostsubnet
2062007 - MCC bootstrap command lacks template flag
2062126 - IPfailover pod is crashing during creation showing keepalived_script doesn't exist
2062151 - Add RBAC for 'infrastructures' to operator bundle
2062355 - kubernetes-nmstate resources and logs not included in must-gathers
2062459 - Ingress pods scheduled on the same node
2062524 - [Kamelet Sink] Topology crashes on click of Event sink node if the resource is created source to Uri over ref
2062558 - Egress IP with openshift sdn in not functional on worker node.
2062568 - CVO does not trigger new upgrade again after fail to update to unavailable payload
2062645 - configure-ovs: don't restart networking if not necessary
2062713 - Special Resource Operator(SRO) - No sro_used_nodes metric
2062849 - hw event proxy is not binding on ipv6 local address
2062920 - Project selector is too tall with only a few projects
2062998 - AWS GovCloud regions are recognized as the unknown regions
2063047 - Configuring a full-path query log file in CMO breaks Prometheus with the latest version of the operator
2063115 - ose-aws-efs-csi-driver has invalid dependency in go.mod
2063164 - metal-ipi-ovn-ipv6 Job Permafailing and Blocking OpenShift 4.11 Payloads: insights operator is not available
2063183 - DefragDialTimeout is set to low for large scale OpenShift Container Platform - Cluster
2063194 - cluster-autoscaler-default will fail when automated etcd defrag is running on large scale OpenShift Container Platform 4 - Cluster
2063321 - [OVN]After reboot egress node, lr-policy-list was not correct, some duplicate records or missed internal IPs
2063324 - MCO template output directories created with wrong mode causing render failure in unprivileged container environments
2063375 - ptp operator upgrade from 4.9 to 4.10 stuck at pending due to service account requirements not met
2063414 - on OKD 4.10, when image-registry is enabled, the /etc/hosts entry is missing on some nodes
2063699 - Builds - Builds - Logs: i18n misses.
2063708 - Builds - Builds - Logs: translation correction needed.
2063720 - Metallb EBGP neighbor stuck in active until adding ebgp-multihop (directly connected neighbors)
2063732 - Workloads - StatefulSets : I18n misses
2063747 - When building a bundle, the push command fails because is passes a redundant "IMG=" on the the CLI
2063753 - User Preferences - Language - Language selection : Page refresh rquired to change the UI into selected Language.
2063756 - User Preferences - Applications - Insecure traffic : i18n misses
2063795 - Remove go-ovirt-client go.mod replace directive
2063829 - During an IPI install with the 4.10.4 installer on vSphere, getting "Check": platform.vsphere.network: Invalid value: "VLAN_3912": unable to find network provided"
2063831 - etcd quorum pods landing on same node
2063897 - Community tasks not shown in pipeline builder page
2063905 - PrometheusOperatorWatchErrors alert may fire shortly in case of transient errors from the API server
2063938 - sing the hard coded rest-mapper in library-go
2063955 - cannot download operator catalogs due to missing images
2063957 - User Management - Users : While Impersonating user, UI is not switching into user's set language
2064024 - SNO OCP upgrade with DU workload stuck at waiting for kube-apiserver static pod
2064170 - [Azure] Missing punctuation in the installconfig.controlPlane.platform.azure.osDisk explain
2064239 - Virtualization Overview page turns into blank page
2064256 - The Knative traffic distribution doesn't update percentage in sidebar
2064553 - UI should prefer to use the virtio-win configmap than v2v-vmware configmap for windows creation
2064596 - Fix the hubUrl docs link in pipeline quicksearch modal
2064607 - Pipeline builder makes too many (100+) API calls upfront
2064613 - [OCPonRHV]- after few days that cluster is alive we got error in storage operator
2064693 - [IPI][OSP] Openshift-install fails to find the shiftstack cloud defined in clouds.yaml in the current directory
2064702 - CVE-2022-27191 golang: crash in a golang.org/x/crypto/ssh server
2064705 - the alertmanagerconfig validation catches the wrong value for invalid field
2064744 - Errors trying to use the Debug Container feature
2064984 - Update error message for label limits
2065076 - Access monitoring Routes based on monitoring-shared-config creates wrong URL
2065160 - Possible leak of load balancer targets on AWS Machine API Provider
2065224 - Configuration for cloudFront in image-registry operator configuration is ignored & duration is corrupted
2065290 - CVE-2021-23648 sanitize-url: XSS
2065338 - VolumeSnapshot creation date sorting is broken
2065507 - `oc adm upgrade` should return ReleaseAccepted condition to show upgrade status.
2065510 - [AWS] failed to create cluster on ap-southeast-3
2065513 - Dev Perspective -> Project Dashboard shows Resource Quotas which are a bit misleading, and too many decimal places
2065547 - (release-4.11) Gather kube-controller-manager pod logs with garbage collector errors
2065552 - [AWS] Failed to install cluster on AWS ap-southeast-3 region due to image-registry panic error
2065577 - user with user-workload-monitoring-config-edit role can not create user-workload-monitoring-config configmap
2065597 - Cinder CSI is not configurable
2065682 - Remote write relabel config adds label __tmp_openshift_cluster_id__ to all metrics
2065689 - Internal Image registry with GCS backend does not redirect client
2065749 - Kubelet slowly leaking memory and pods eventually unable to start
2065785 - ip-reconciler job does not complete, halts node drain
2065804 - Console backend check for Web Terminal Operator incorrectly returns HTTP 204
2065806 - stop considering Mint mode as supported on Azure
2065840 - the cronjob object is created with a wrong api version batch/v1beta1 when created via the openshift console
2065893 - [4.11] Bootimage bump tracker
2066009 - CVE-2021-44906 minimist: prototype pollution
2066232 - e2e-aws-workers-rhel8 is failing on ansible check
2066418 - [4.11] Update channels information link is taking to a 404 error page
2066444 - The "ingress" clusteroperator's relatedObjects field has kind names instead of resource names
2066457 - Prometheus CI failure: 503 Service Unavailable
2066463 - [IBMCloud] failed to list DNS zones: Exactly one of ApiKey or RefreshToken must be specified
2066605 - coredns template block matches cluster API to loose
2066615 - Downstream OSDK still use upstream image for Hybird type operator
2066619 - The GitCommit of the `oc-mirror version` is not correct
2066665 - [ibm-vpc-block] Unable to change default storage class
2066700 - [node-tuning-operator] - Minimize wildcard/privilege Usage in Cluster and Local Roles
2066754 - Cypress reports for core tests are not captured
2066782 - Attached disk keeps in loading status when add disk to a power off VM by non-privileged user
2066865 - Flaky test: In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
2066886 - openshift-apiserver pods never going NotReady
2066887 - Dependabot alert: Path traversal in github.com/valyala/fasthttp
2066889 - Dependabot alert: Path traversal in github.com/valyala/fasthttp
2066923 - No rule to make target 'docker-push' when building the SRO bundle
2066945 - SRO appends "arm64" instead of "aarch64" to the kernel name and it doesn't match the DTK
2067004 - CMO contains grafana image though grafana is removed
2067005 - Prometheus rule contains grafana though grafana is removed
2067062 - should update prometheus-operator resources version
2067064 - RoleBinding in Developer Console is dropping all subjects when editing
2067155 - Incorrect operator display name shown in pipelines quickstart in devconsole
2067180 - Missing i18n translations
2067298 - Console 4.10 operand form refresh
2067312 - PPT event source is lost when received by the consumer
2067384 - OCP 4.10 should be firing APIRemovedInNextEUSReleaseInUse for APIs removed in 1.25
2067456 - OCP 4.11 should be firing APIRemovedInNextEUSReleaseInUse and APIRemovedInNextReleaseInUse for APIs removed in 1.25
2067995 - Internal registries with a big number of images delay pod creation due to recursive SELinux file context relabeling
2068115 - resource tab extension fails to show up
2068148 - [4.11] /etc/redhat-release symlink is broken
2068180 - OCP UPI on AWS with STS enabled is breaking the Ingress operator
2068181 - Event source powered with kamelet type source doesn't show associated deployment in resources tab
2068490 - OLM descriptors integration test failing
2068538 - Crashloop back-off popover visual spacing defects
2068601 - Potential etcd inconsistent revision and data occurs
2068613 - ClusterRoleUpdated/ClusterRoleBindingUpdated Spamming Event Logs
2068908 - Manual blog link change needed
2069068 - reconciling Prometheus Operator Deployment failed while upgrading from 4.7.46 to 4.8.35
2069075 - [Alibaba 4.11.0-0.nightly] cluster storage component in Progressing state
2069181 - Disabling community tasks is not working
2069198 - Flaky CI test in e2e/pipeline-ci
2069307 - oc mirror hangs when processing the Red Hat 4.10 catalog
2069312 - extend rest mappings with 'job' definition
2069457 - Ingress operator has superfluous finalizer deletion logic for LoadBalancer-type services
2069577 - ConsolePlugin example proxy authorize is wrong
2069612 - Special Resource Operator (SRO) - Crash when nodeSelector does not match any nodes
2069632 - Not able to download previous container logs from console
2069643 - ConfigMaps leftovers while uninstalling SpecialResource with configmap
2069654 - Creating VMs with YAML on Openshift Virtualization UI is missing labels `flavor`, `os` and `workload`
2069685 - UI crashes on load if a pinned resource model does not exist
2069705 - prometheus target "serviceMonitor/openshift-metallb-system/monitor-metallb-controller/0" has a failure with "server returned HTTP status 502 Bad Gateway"
2069740 - On-prem loadbalancer ports conflict with kube node port range
2069760 - In developer perspective divider does not show up in navigation
2069904 - Sync upstream 1.18.1 downstream
2069914 - Application Launcher groupings are not case-sensitive
2069997 - [4.11] should add user containers in /etc/subuid and /etc/subgid to support run pods in user namespaces
2070000 - Add warning alerts for installing standalone k8s-nmstate
2070020 - InContext doesn't work for Event Sources
2070047 - Kuryr: Prometheus when installed on the cluster shouldn't report any alerts in firing state apart from Watchdog and AlertmanagerReceiversNotConfigured
2070160 - Copy-to-clipboard and <pre> elements cause display issues for ACM dynamic plugins
2070172 - SRO uses the chart's name as Helm release, not the SpecialResource's
2070181 - [MAPO] serverGroupName ignored
2070457 - Image vulnerability Popover overflows from the visible area
2070674 - [GCP] Routes get timed out and nonresponsive after creating 2K service routes
2070703 - some ipv6 network policy tests consistently failing
2070720 - [UI] Filter reset doesn't work on Pods/Secrets/etc pages and complete list disappears
2070731 - details switch label is not clickable on add page
2070791 - [GCP]Image registry are crash on cluster with GCP workload identity enabled
2070792 - service "openshift-marketplace/marketplace-operator-metrics" is not annotated with capability
2070805 - ClusterVersion: could not download the update
2070854 - cv.status.capabilities.enabledCapabilities doesn't show the day-2 enabled caps when there are errors on resources update
2070887 - Cv condition ImplicitlyEnabledCapabilities doesn't complain about the disabled capabilities which is previously enabled
2070888 - Cannot bind driver vfio-pci when apply sriovnodenetworkpolicy with type vfio-pci
2070929 - OVN-Kubernetes: EgressIP breaks access from a pod with EgressIP to other host networked pods on different nodes
2071019 - rebase vsphere csi driver 2.5
2071021 - vsphere driver has snapshot support missing
2071033 - conditionally relabel volumes given annotation not working - SELinux context match is wrong
2071139 - Ingress pods scheduled on the same node
2071364 - All image building tests are broken with " error: build error: attempting to convert BUILD_LOGLEVEL env var value "" to integer: strconv.Atoi: parsing "": invalid syntax
2071578 - Monitoring navigation should not be shown if monitoring is not available (CRC)
2071599 - RoleBidings are not getting updated for ClusterRole in OpenShift Web Console
2071614 - Updating EgressNetworkPolicy rejecting with error UnsupportedMediaType
2071617 - remove Kubevirt extensions in favour of dynamic plugin
2071650 - ovn-k ovn_db_cluster metrics are not exposed for SNO
2071691 - OCP Console global PatternFly overrides adds padding to breadcrumbs
2071700 - v1 events show "Generated from" message without the source/reporting component
2071715 - Shows 404 on Environment nav in Developer console
2071719 - OCP Console global PatternFly overrides link button whitespace
2071747 - Link to documentation from the overview page goes to a missing link
2071761 - Translation Keys Are Not Namespaced
2071799 - Multus CNI should exit cleanly on CNI DEL when the API server is unavailable
2071859 - ovn-kube pods spec.dnsPolicy should be Default
2071914 - cloud-network-config-controller 4.10.5: Error building cloud provider client, err: %vfailed to initialize Azure environment: autorest/azure: There is no cloud environment matching the name ""
2071998 - Cluster-version operator should share details of signature verification when it fails in 'Force: true' updates
2072106 - cluster-ingress-operator tests do not build on go 1.18
2072134 - Routes are not accessible within cluster from hostnet pods
2072139 - vsphere driver has permissions to create/update PV objects
2072154 - Secondary Scheduler operator panics
2072171 - Test "[sig-network][Feature:EgressFirewall] EgressFirewall should have no impact outside its namespace [Suite:openshift/conformance/parallel]" fails
2072195 - machine api doesn't issue client cert when AWS DNS suffix missing
2072215 - Whereabouts ip-reconciler should be opt-in and not required
2072389 - CVO exits upgrade immediately rather than waiting for etcd backup
2072439 - openshift-cloud-network-config-controller reports wrong range of IP addresses for Azure worker nodes
2072455 - make bundle overwrites supported-nic-ids_v1_configmap.yaml
2072570 - The namespace titles for operator-install-single-namespace test keep changing
2072710 - Perfscale - pods time out waiting for OVS port binding (ovn-installed)
2072766 - Cluster Network Operator stuck in CrashLoopBackOff when scheduled to same master
2072780 - OVN kube-master does not clear NetworkUnavailableCondition on GCP BYOH Windows node
2072793 - Drop "Used Filesystem" from "Virtualization -> Overview"
2072805 - Observe > Dashboards: $__range variables cause PromQL query errors
2072807 - Observe > Dashboards: Missing `panel.styles` attribute for table panels causes JS error
2072842 - (release-4.11) Gather namespace names with overlapping UID ranges
2072883 - sometimes monitoring dashboards charts can not be loaded successfully
2072891 - Update gcp-pd-csi-driver to 1.5.1;
2072911 - panic observed in kubedescheduler operator
2072924 - periodic-ci-openshift-release-master-ci-4.11-e2e-azure-techpreview-serial
2072957 - ContainerCreateError loop leads to several thousand empty logfiles in the file system
2072998 - update aws-efs-csi-driver to the latest version
2072999 - Navigate from logs of selected Tekton task instead of last one
2073021 - [vsphere] Failed to update OS on master nodes
2073112 - Prometheus (uwm) externalLabels not showing always in alerts.
2073113 - Warning is logged to the console: W0407 Defaulting of registry auth file to "${HOME}/.docker/config.json" is deprecated.
2073176 - removing data in form does not remove data from yaml editor
2073197 - Error in Spoke/SNO agent: Source image rejected: A signature was required, but no signature exists
2073329 - Pipelines-plugin- Having different title for Pipeline Runs tab, on Pipeline Details page it's "PipelineRuns" and on Repository Details page it's "Pipeline Runs".
2073373 - Update azure-disk-csi-driver to 1.16.0
2073378 - failed egressIP assignment - cloud-network-config-controller does not delete failed cloudprivateipconfig
2073398 - machine-api-provider-openstack does not clean up OSP ports after failed server provisioning
2073436 - Update azure-file-csi-driver to v1.14.0
2073437 - Topology performance: Firehose/useK8sWatchResources cache can return unexpected data format if isList differs on multiple calls
2073452 - [sig-network] pods should successfully create sandboxes by other - failed (add)
2073473 - [OVN SCALE][ovn-northd] Unnecessary SB record no-op changes added to SB transaction.
2073522 - Update ibm-vpc-block-csi-driver to v4.2.0
2073525 - Update vpc-node-label-updater to v4.1.2
2073901 - Installation failed due to etcd operator Err:DefragControllerDegraded: failed to dial endpoint https://10.0.0.7:2379 with maintenance client: context canceled
2073937 - Invalid retention time and invalid retention size should be validated at one place and have error log in one place for UMW
2073938 - APIRemovedInNextEUSReleaseInUse alert for runtimeclasses
2073945 - APIRemovedInNextEUSReleaseInUse alert for podsecuritypolicies
2073972 - Invalid retention time and invalid retention size should be validated at one place and have error log in one place for platform monitoring
2074009 - [OVN] ovn-northd doesn't clean Chassis_Private record after scale down to 0 a machineSet
2074031 - Admins should be able to tune garbage collector aggressiveness (GOGC) for kube-apiserver if necessary
2074062 - Node Tuning Operator(NTO) - Cloud provider profile rollback doesn't work well
2074084 - CMO metrics not visible in the OCP webconsole UI
2074100 - CRD filtering according to name broken
2074210 - asia-south2, australia-southeast2, and southamerica-west1Missing from GCP regions
2074237 - oc new-app --image-stream flag behavior is unclear
2074243 - DefaultPlacement API allow empty enum value and remove default
2074447 - cluster-dashboard: CPU Utilisation iowait and steal
2074465 - PipelineRun fails in import from Git flow if "main" branch is default
2074471 - Cannot delete namespace with a LB type svc and Kuryr when ExternalCloudProvider is enabled
2074475 - [e2e][automation] kubevirt plugin cypress tests fail
2074483 - coreos-installer doesnt work on Dell machines
2074544 - e2e-metal-ipi-ovn-ipv6 failing due to recent CEO changes
2074585 - MCG standalone deployment page goes blank when the KMS option is enabled
2074606 - occm does not have permissions to annotate SVC objects
2074612 - Operator fails to install due to service name lookup failure
2074613 - nodeip-configuration container incorrectly attempts to relabel /etc/systemd/system
2074635 - Unable to start Web Terminal after deleting existing instance
2074659 - AWS installconfig ValidateForProvisioning always provides blank values to validate zone records
2074706 - Custom EC2 endpoint is not considered by AWS EBS CSI driver
2074710 - Transition to go-ovirt-client
2074756 - Namespace column provide wrong data in ClusterRole Details -> Rolebindings tab
2074767 - Metrics page show incorrect values due to metrics level config
2074807 - NodeFilesystemSpaceFillingUp alert fires even before kubelet GC kicks in
2074902 - `oc debug node/nodename -- chroot /host somecommand` should exit with non-zero when the sub-command failed
2075015 - etcd-guard connection refused event repeating pathologically (payload blocking)
2075024 - Metal upgrades permafailing on metal3 containers crash looping
2075050 - oc-mirror fails to calculate between two channels with different prefixes for the same version of OCP
2075091 - Symptom Detection.Undiagnosed panic detected in pod
2075117 - Developer catalog: Order dropdown (A-Z, Z-A) is miss-aligned (in a separate row)
2075149 - Trigger Translations When Extensions Are Updated
2075189 - Imports from dynamic-plugin-sdk lead to failed module resolution errors
2075459 - Set up cluster on aws with rootvolumn io2 failed due to no iops despite it being configured
2075475 - OVN-Kubernetes: egress router pod (redirect mode), access from pod on different worker-node (redirect) doesn't work
2075478 - Bump documentationBaseURL to 4.11
2075491 - nmstate operator cannot be upgraded on SNO
2075575 - Local Dev Env - Prometheus 404 Call errors spam the console
2075584 - improve clarity of build failure messages when using csi shared resources but tech preview is not enabled
2075592 - Regression - Top of the web terminal drawer is missing a stroke/dropshadow
2075621 - Cluster upgrade.[sig-mco] Machine config pools complete upgrade
2075647 - 'oc adm upgrade ...' POSTs ClusterVersion, clobbering any unrecognized spec properties
2075671 - Cluster Ingress Operator K8S API cache contains duplicate objects
2075778 - Fix failing TestGetRegistrySamples test
2075873 - Bump recommended FCOS to 35.20220327.3.0
2076193 - oc patch command for the liveness probe and readiness probe parameters of an OpenShift router deployment doesn't take effect
2076270 - [OCPonRHV] MachineSet scale down operation fails to delete the worker VMs
2076277 - [RFE] [OCPonRHV] Add storage domain ID valueto Compute/ControlPlain section in the machine object
2076290 - PTP operator readme
missing documentation on BC setup via PTP config\n2076297 - Router process ignores shutdown signal while starting up\n2076323 - OLM blocks all operator installs if an openshift-marketplace catalogsource is unavailable\n2076355 - The KubeletConfigController wrongly process multiple confs for a pool after having kubeletconfig in bootstrap\n2076393 - [VSphere] survey fails to list datacenters\n2076521 - Nodes in the same zone are not updated in the right order\n2076527 - Pipeline Builder: Make unnecessary tekton hub API calls when the user types \u0027too fast\u0027\n2076544 - Whitespace (padding) is missing after an PatternFly update, already in 4.10\n2076553 - Project access view replace group ref with user ref when updating their Role\n2076614 - Missing Events component from the SDK API\n2076637 - Configure metrics for vsphere driver to be reported\n2076646 - openshift-install destroy unable to delete PVC disks in GCP if cluster identifier is longer than 22 characters\n2076793 - CVO exits upgrade immediately rather than waiting for etcd backup\n2076831 - [ocp4.11]Mem/cpu high utilization by apiserver/etcd for cluster stayed 10 hours\n2076877 - network operator tracker to switch to use flowcontrol.apiserver.k8s.io/v1beta2 instead v1beta1 to be deprecated in k8s 1.26\n2076880 - OKD: add cluster domain to the uploaded vm configs so that 30-local-dns-prepender can use it\n2076975 - Metric unset during static route conversion in configure-ovs.sh\n2076984 - TestConfigurableRouteNoConsumingUserNoRBAC fails in CI\n2077050 - OCP should default to pd-ssd disk type on GCP\n2077150 - Breadcrumbs on a few screens don\u0027t have correct top margin spacing\n2077160 - Update owners for openshift/cluster-etcd-operator\n2077357 - [release-4.11] 200ms packet delay with OVN controller turn on\n2077373 - Accessibility warning on developer perspective\n2077386 - Import page shows untranslated values for the route advanced routing\u003esecurity options (devconsole~Edge)\n2077457 - 
failure in test case \"[sig-network][Feature:Router] The HAProxy router should serve the correct routes when running with the haproxy config manager\"\n2077497 - Rebase etcd to 3.5.3 or later\n2077597 - machine-api-controller is not taking the proxy configuration when it needs to reach the RHV API\n2077599 - OCP should alert users if they are on vsphere version \u003c7.0.2\n2077662 - AWS Platform Provisioning Check incorrectly identifies record as part of domain of cluster\n2077797 - LSO pods don\u0027t have any resource requests\n2077851 - \"make vendor\" target is not working\n2077943 - If there is a service with multiple ports, and the route uses 8080, when editing the 8080 port isn\u0027t replaced, but a random port gets replaced and 8080 still stays\n2077994 - Publish RHEL CoreOS AMIs in AWS ap-southeast-3 region\n2078013 - drop multipathd.socket workaround\n2078375 - When using the wizard with template using data source the resulting vm use pvc source\n2078396 - [OVN AWS] EgressIP was not balanced to another egress node after original node was removed egress label\n2078431 - [OCPonRHV] - ERROR failed to instantiate provider \"openshift/local/ovirt\" to obtain schema: ERROR fork/exec\n2078526 - Multicast breaks after master node reboot/sync\n2078573 - SDN CNI -Fail to create nncp when vxlan is up\n2078634 - CRI-O not killing Calico CNI stalled (zombie) processes. \n2078698 - search box may not completely remove content\n2078769 - Different not translated filter group names (incl. Secret, Pipeline, PIpelineRun)\n2078778 - [4.11] oc get ValidatingWebhookConfiguration,MutatingWebhookConfiguration fails and caused ?apiserver panic\u0027d...http2: panic serving xxx.xx.xxx.21:49748: cannot deep copy int? when AllRequestBodies audit-profile is used. 
\n2078781 - PreflightValidation does not handle multiarch images\n2078866 - [BM][IPI] Installation with bonds fail - DaemonSet \"openshift-ovn-kubernetes/ovnkube-node\" rollout is not making progress\n2078875 - OpenShift Installer fail to remove Neutron ports\n2078895 - [OCPonRHV]-\"cow\" unsupported value in format field in install-config.yaml\n2078910 - CNO spitting out \".spec.groups[0].rules[4].runbook_url: field not declared in schema\"\n2078945 - Ensure only one apiserver-watcher process is active on a node. \n2078954 - network-metrics-daemon makes costly global pod list calls scaling per node\n2078969 - Avoid update races between old and new NTO operands during cluster upgrades\n2079012 - egressIP not migrated to correct workers after deleting machineset it was assigned\n2079062 - Test for console demo plugin toast notification needs to be increased for ci testing\n2079197 - [RFE] alert when more than one default storage class is detected\n2079216 - Partial cluster update reference doc link returns 404\n2079292 - containers prometheus-operator/kube-rbac-proxy violate PodSecurity\n2079315 - (release-4.11) Gather ODF config data with Insights\n2079422 - Deprecated 1.25 API call\n2079439 - OVN Pods Assigned Same IP Simultaneously\n2079468 - Enhance the waitForIngressControllerCondition for better CI results\n2079500 - okd-baremetal-install uses fcos for bootstrap but rhcos for cluster\n2079610 - Opeatorhub status shows errors\n2079663 - change default image features in RBD storageclass\n2079673 - Add flags to disable migrated code\n2079685 - Storageclass creation page with \"Enable encryption\" is not displaying saved KMS connection details when vaulttenantsa details are available in csi-kms-details config\n2079724 - cluster-etcd-operator - disable defrag-controller as there is unpredictable impact on large OpenShift Container Platform 4 - Cluster\n2079788 - Operator restarts while applying the acm-ice example\n2079789 - cluster drops 
ImplicitlyEnabledCapabilities during upgrade\n2079803 - Upgrade-triggered etcd backup will be skip during serial upgrade\n2079805 - Secondary scheduler operator should comply to restricted pod security level\n2079818 - Developer catalog installation overlay (modal?) shows a duplicated padding\n2079837 - [RFE] Hub/Spoke example with daemonset\n2079844 - EFS cluster csi driver status stuck in AWSEFSDriverCredentialsRequestControllerProgressing with sts installation\n2079845 - The Event Sinks catalog page now has a blank space on the left\n2079869 - Builds for multiple kernel versions should be ran in parallel when possible\n2079913 - [4.10] APIRemovedInNextEUSReleaseInUse alert for OVN endpointslices\n2079961 - The search results accordion has no spacing between it and the side navigation bar. \n2079965 - [rebase v1.24] [sig-node] PodOSRejection [NodeConformance] Kubelet should reject pod when the node OS doesn\u0027t match pod\u0027s OS [Suite:openshift/conformance/parallel] [Suite:k8s]\n2080054 - TAGS arg for installer-artifacts images is not propagated to build images\n2080153 - aws-load-balancer-operator-controller-manager pod stuck in ContainerCreating status\n2080197 - etcd leader changes produce test churn during early stage of test\n2080255 - EgressIP broken on AWS with OpenShiftSDN / latest nightly build\n2080267 - [Fresh Installation] Openshift-machine-config-operator namespace is flooded with events related to clusterrole, clusterrolebinding\n2080279 - CVE-2022-29810 go-getter: writes SSH credentials into logfile, exposing sensitive credentials to local uses\n2080379 - Group all e2e tests as parallel or serial\n2080387 - Visual connector not appear between the node if a node get created using \"move connector\" to a different application\n2080416 - oc bash-completion problem\n2080429 - CVO must ensure non-upgrade related changes are saved when desired payload fails to load\n2080446 - Sync ironic images with latest bug fixes packages\n2080679 - [rebase 
v1.24] [sig-cli] test failure\n2080681 - [rebase v1.24] [sig-cluster-lifecycle] CSRs from machines that are not recognized by the cloud provider are not approved [Suite:openshift/conformance/parallel]\n2080687 - [rebase v1.24] [sig-network][Feature:Router] tests are failing\n2080873 - Topology graph crashes after update to 4.11 when Layout 2 (ColaForce) was selected previously\n2080964 - Cluster operator special-resource-operator is always in Failing state with reason: \"Reconciling simple-kmod\"\n2080976 - Avoid hooks config maps when hooks are empty\n2081012 - [rebase v1.24] [sig-devex][Feature:OpenShiftControllerManager] TestAutomaticCreationOfPullSecrets [Suite:openshift/conformance/parallel]\n2081018 - [rebase v1.24] [sig-imageregistry][Feature:Image] oc tag should work when only imagestreams api is available\n2081021 - [rebase v1.24] [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources\n2081062 - Unrevert RHCOS back to 8.6\n2081067 - admin dev-console /settings/cluster should point out history may be excerpted\n2081069 - [sig-network] pods should successfully create sandboxes by adding pod to network\n2081081 - PreflightValidation \"odd number of arguments passed as key-value pairs for logging\" error\n2081084 - [rebase v1.24] [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed\n2081087 - [rebase v1.24] [sig-auth] ServiceAccounts should allow opting out of API token automount\n2081119 - `oc explain` output of default overlaySize is outdated\n2081172 - MetallLB: YAML view in webconsole does not show all the available key value pairs of all the objects\n2081201 - cloud-init User check for Windows VM refuses to accept capitalized usernames\n2081447 - Ingress operator performs spurious updates in response to API\u0027s defaulting of router deployment\u0027s router container\u0027s ports\u0027 protocol field\n2081562 - lifecycle.posStart hook does not have 
network connectivity. \n2081685 - Typo in NNCE Conditions\n2081743 - [e2e] tests failing\n2081788 - MetalLB: the crds are not validated until metallb is deployed\n2081821 - SpecialResourceModule CRD is not installed after deploying SRO operator using brew bundle image via OLM\n2081895 - Use the managed resource (and not the manifest) for resource health checks\n2081997 - disconnected insights operator remains degraded after editing pull secret\n2082075 - Removing huge amount of ports takes a lot of time. \n2082235 - CNO exposes a generic apiserver that apparently does nothing\n2082283 - Transition to new oVirt Terraform provider\n2082360 - OCP 4.10.4, CNI: SDN; Whereabouts IPAM: Duplicate IP address with bond-cni\n2082380 - [4.10.z] customize wizard is crashed\n2082403 - [LSO] No new build local-storage-operator-metadata-container created\n2082428 - oc patch healthCheckInterval with invalid \"5 s\" to the ingress-controller successfully\n2082441 - [UPI] aws-load-balancer-operator-controller-manager failed to get VPC ID in UPI on AWS\n2082492 - [IPI IBM]Can\u0027t create image-registry-private-configuration secret with error \"specified resource key credentials does not contain HMAC keys\"\n2082535 - [OCPonRHV]-workers are cloned when \"clone: false\" is specified in install-config.yaml\n2082538 - apirequests limits of Cluster CAPI Operator are too low for GCP platform\n2082566 - OCP dashboard fails to load when the query to Prometheus takes more than 30s to return\n2082604 - [IBMCloud][x86_64] IBM VPC does not properly support RHCOS Custom Image tagging\n2082667 - No new machines provisioned while machineset controller drained old nodes for change to machineset\n2082687 - [IBM Cloud][x86_64][CCCMO] IBM x86_64 CCM using unsupported --port argument\n2082763 - Cluster install stuck on the applying for operatorhub \"cluster\"\n2083149 - \"Update blocked\" label incorrectly displays on new minor versions in the \"Other available paths\" modal\n2083153 - Unable to use 
application credentials for Manila PVC creation on OpenStack\n2083154 - Dynamic plugin sdk tsdoc generation does not render docs for parameters\n2083219 - DPU network operator doesn\u0027t deal with c1... inteface names\n2083237 - [vsphere-ipi] Machineset scale up process delay\n2083299 - SRO does not fetch mirrored DTK images in disconnected clusters\n2083445 - [FJ OCP4.11 Bug]: RAID setting during IPI cluster deployment fails if iRMC port number is specified\n2083451 - Update external serivces URLs to console.redhat.com\n2083459 - Make numvfs \u003e totalvfs error message more verbose\n2083466 - Failed to create clusters on AWS C2S/SC2S due to image-registry MissingEndpoint error\n2083514 - Operator ignores managementState Removed\n2083641 - OpenShift Console Knative Eventing ContainerSource generates wrong api version when pointed to k8s Service\n2083756 - Linkify not upgradeable message on ClusterSettings page\n2083770 - Release image signature manifest filename extension is yaml\n2083919 - openshift4/ose-operator-registry:4.10.0 having security vulnerabilities\n2083942 - Learner promotion can temporarily fail with rpc not supported for learner errors\n2083964 - Sink resources dropdown is not persisted in form yaml switcher in event source creation form\n2083999 - \"--prune-over-size-limit\" is not working as expected\n2084079 - prometheus route is not updated to \"path: /api\" after upgrade from 4.10 to 4.11\n2084081 - nmstate-operator installed cluster on POWER shows issues while adding new dhcp interface\n2084124 - The Update cluster modal includes a broken link\n2084215 - Resource configmap \"openshift-machine-api/kube-rbac-proxy\" is defined by 2 manifests\n2084249 - panic in ovn pod from an e2e-aws-single-node-serial nightly run\n2084280 - GCP API Checks Fail if non-required APIs are not enabled\n2084288 - \"alert/Watchdog must have no gaps or changes\" failing after bump\n2084292 - Access to dashboard resources is needed in dynamic plugin SDK\n2084331 - 
Resource with multiple capabilities included unless all capabilities are disabled\n2084433 - Podsecurity violation error getting logged for ingresscontroller during deployment. \n2084438 - Change Ping source spec.jsonData (deprecated) field to spec.data\n2084441 - [IPI-Azure]fail to check the vm capabilities in install cluster\n2084459 - Topology list view crashes when switching from chart view after moving sink from knative service to uri\n2084463 - 5 control plane replica tests fail on ephemeral volumes\n2084539 - update azure arm templates to support customer provided vnet\n2084545 - [rebase v1.24] cluster-api-operator causes all techpreview tests to fail\n2084580 - [4.10] No cluster name sanity validation - cluster name with a dot (\".\") character\n2084615 - Add to navigation option on search page is not properly aligned\n2084635 - PipelineRun creation from the GUI for a Pipeline with 2 workspaces hardcode the PVC storageclass\n2084732 - A special resource that was created in OCP 4.9 can\u0027t be deleted after an upgrade to 4.10\n2085187 - installer-artifacts fails to build with go 1.18\n2085326 - kube-state-metrics is tripping APIRemovedInNextEUSReleaseInUse\n2085336 - [IPI-Azure] Fail to create the worker node which HyperVGenerations is V2 or V1 and vmNetworkingType is Accelerated\n2085380 - [IPI-Azure] Incorrect error prompt validate VM image and instance HyperV gen match when install cluster\n2085407 - There is no Edit link/icon for labels on Node details page\n2085721 - customization controller image name is wrong\n2086056 - Missing doc for OVS HW offload\n2086086 - Update Cluster Sample Operator dependencies and libraries for OCP 4.11\n2086092 - update kube to v.24\n2086143 - CNO uses too much memory\n2086198 - Cluster CAPI Operator creates unnecessary defaulting webhooks\n2086301 - kubernetes nmstate pods are not running after creating instance\n2086408 - Podsecurity violation error getting logged for externalDNS operand pods during deployment\n2086417 
- Pipeline created from add flow has GIT Revision as required field\n2086437 - EgressQoS CRD not available\n2086450 - aws-load-balancer-controller-cluster pod logged Podsecurity violation error during deployment\n2086459 - oc adm inspect fails when one of resources not exist\n2086461 - CNO probes MTU unnecessarily in Hypershift, making cluster startup take too long\n2086465 - External identity providers should log login attempts in the audit trail\n2086469 - No data about title \u0027API Request Duration by Verb - 99th Percentile\u0027 display on the dashboard \u0027API Performance\u0027\n2086483 - baremetal-runtimecfg k8s dependencies should be on a par with 1.24 rebase\n2086505 - Update oauth-server images to be consistent with ART\n2086519 - workloads must comply to restricted security policy\n2086521 - Icons of Knative actions are not clearly visible on the context menu in the dark mode\n2086542 - Cannot create service binding through drag and drop\n2086544 - ovn-k master daemonset on hypershift shouldn\u0027t log token\n2086546 - Service binding connector is not visible in the dark mode\n2086718 - PowerVS destroy code does not work\n2086728 - [hypershift] Move drain to controller\n2086731 - Vertical pod autoscaler operator needs a 4.11 bump\n2086734 - Update csi driver images to be consistent with ART\n2086737 - cloud-provider-openstack rebase to kubernetes v1.24\n2086754 - Cluster resource override operator needs a 4.11 bump\n2086759 - [IPI] OCP-4.11 baremetal - boot partition is not mounted on temporary directory\n2086791 - Azure: Validate UltraSSD instances in multi-zone regions\n2086851 - pods with multiple external gateways may only be have ECMP routes for one gateway\n2086936 - vsphere ipi should use cores by default instead of sockets\n2086958 - flaky e2e in kube-controller-manager-operator TestPodDisruptionBudgetAtLimitAlert\n2086959 - flaky e2e in kube-controller-manager-operator TestLogLevel\n2086962 - oc-mirror publishes metadata with --dry-run when 
publishing to mirror\n2086964 - oc-mirror fails on differential run when mirroring a package with multiple channels specified\n2086972 - oc-mirror does not error invalid metadata is passed to the describe command\n2086974 - oc-mirror does not work with headsonly for operator 4.8\n2087024 - The oc-mirror result mapping.txt is not correct , can?t be used by `oc image mirror` command\n2087026 - DTK\u0027s imagestream is missing from OCP 4.11 payload\n2087037 - Cluster Autoscaler should use K8s 1.24 dependencies\n2087039 - Machine API components should use K8s 1.24 dependencies\n2087042 - Cloud providers components should use K8s 1.24 dependencies\n2087084 - remove unintentional nic support\n2087103 - \"Updating to release image\" from \u0027oc\u0027 should point out that the cluster-version operator hasn\u0027t accepted the update\n2087114 - Add simple-procfs-kmod in modprobe example in README.md\n2087213 - Spoke BMH stuck \"inspecting\" when deployed via ZTP in 4.11 OCP hub\n2087271 - oc-mirror does not check for existing workspace when performing mirror2mirror synchronization\n2087556 - Failed to render DPU ovnk manifests\n2087579 - ` --keep-manifest-list=true` does not work for `oc adm release new` , only pick up the linux/amd64 manifest from the manifest list\n2087680 - [Descheduler] Sync with sigs.k8s.io/descheduler\n2087684 - KCMO should not be able to apply LowUpdateSlowReaction from Default WorkerLatencyProfile\n2087685 - KASO should not be able to apply LowUpdateSlowReaction from Default WorkerLatencyProfile\n2087687 - MCO does not generate event when user applies Default -\u003e LowUpdateSlowReaction WorkerLatencyProfile\n2087764 - Rewrite the registry backend will hit error\n2087771 - [tracker] NetworkManager 1.36.0 loses DHCP lease and doesn\u0027t try again\n2087772 - Bindable badge causes some layout issues with the side panel of bindable operator backed services\n2087942 - CNO references images that are divergent from ART\n2087944 - KafkaSink Node 
visualized incorrectly\n2087983 - remove etcd_perf before restore\n2087993 - PreflightValidation many \"msg\":\"TODO: preflight checks\" in the operator log\n2088130 - oc-mirror init does not allow for automated testing\n2088161 - Match dockerfile image name with the name used in the release repo\n2088248 - Create HANA VM does not use values from customized HANA templates\n2088304 - ose-console: enable source containers for open source requirements\n2088428 - clusteroperator/baremetal stays in progressing: Applying metal3 resources state on a fresh install\n2088431 - AvoidBuggyIPs field of addresspool should be removed\n2088483 - oc adm catalog mirror returns 0 even if there are errors\n2088489 - Topology list does not allow selecting an application group anymore (again)\n2088533 - CRDs for openshift.io should have subresource.status failes on sharedconfigmaps.sharedresource and sharedsecrets.sharedresource\n2088535 - MetalLB: Enable debug log level for downstream CI\n2088541 - Default CatalogSources in openshift-marketplace namespace keeps throwing pod security admission warnings `would violate PodSecurity \"restricted:v1.24\"`\n2088561 - BMH unable to start inspection: File name too long\n2088634 - oc-mirror does not fail when catalog is invalid\n2088660 - Nutanix IPI installation inside container failed\n2088663 - Better to change the default value of --max-per-registry to 6\n2089163 - NMState CRD out of sync with code\n2089191 - should remove grafana from cluster-monitoring-config configmap in hypershift cluster\n2089224 - openshift-monitoring/cluster-monitoring-config configmap always revert to default setting\n2089254 - CAPI operator: Rotate token secret if its older than 30 minutes\n2089276 - origin tests for egressIP and azure fail\n2089295 - [Nutanix]machine stuck in Deleting phase when delete a machineset whose replicas\u003e=2 and machine is Provisioning phase on Nutanix\n2089309 - [OCP 4.11] Ironic inspector image fails to clean disks that are part of a 
multipath setup if they are passive paths\n2089334 - All cloud providers should use service account credentials\n2089344 - Failed to deploy simple-kmod\n2089350 - Rebase sdn to 1.24\n2089387 - LSO not taking mpath. ignoring device\n2089392 - 120 node baremetal upgrade from 4.9.29 --\u003e 4.10.13 crashloops on machine-approver\n2089396 - oc-mirror does not show pruned image plan\n2089405 - New topology package shows gray build icons instead of green/red icons for builds and pipelines\n2089419 - do not block 4.10 to 4.11 upgrades if an existing CSI driver is found. Instead, warn about presence of third party CSI driver\n2089488 - Special resources are missing the managementState field\n2089563 - Update Power VS MAPI to use api\u0027s from openshift/api repo\n2089574 - UWM prometheus-operator pod can\u0027t start up due to no master node in hypershift cluster\n2089675 - Could not move Serverless Service without Revision (or while starting?)\n2089681 - [Hypershift] EgressIP doesn\u0027t work in hypershift guest cluster\n2089682 - Installer expects all nutanix subnets to have a cluster reference which is not the case for e.g. 
overlay networks\n2089687 - alert message of MCDDrainError needs to be updated for new drain controller\n2089696 - CR reconciliation is stuck in daemonset lifecycle\n2089716 - [4.11][reliability]one worker node became NotReady on which ovnkube-node pod\u0027s memory increased sharply\n2089719 - acm-simple-kmod fails to build\n2089720 - [Hypershift] ICSP doesn\u0027t work for the guest cluster\n2089743 - acm-ice fails to deploy: helm chart does not appear to be a gzipped archive\n2089773 - Pipeline status filter and status colors doesn\u0027t work correctly with non-english languages\n2089775 - keepalived can keep ingress VIP on wrong node under certain circumstances\n2089805 - Config duration metrics aren\u0027t exposed\n2089827 - MetalLB CI - backward compatible tests are failing due to the order of delete\n2089909 - PTP e2e testing not working on SNO cluster\n2089918 - oc-mirror skip-missing still returns 404 errors when images do not exist\n2089930 - Bump OVN to 22.06\n2089933 - Pods do not post readiness status on termination\n2089968 - Multus CNI daemonset should use hostPath mounts with type: directory\n2089973 - bump libs to k8s 1.24 for OCP 4.11\n2089996 - Unnecessary yarn install runs in e2e tests\n2090017 - Enable source containers to meet open source requirements\n2090049 - destroying GCP cluster which has a compute node without infra id in name would fail to delete 2 k8s firewall-rules and VPC network\n2090092 - Will hit error if specify the channel not the latest\n2090151 - [RHEL scale up] increase the wait time so that the node has enough time to get ready\n2090178 - VM SSH command generated by UI points at api VIP\n2090182 - [Nutanix]Create a machineset with invalid image, machine stuck in \"Provisioning\" phase\n2090236 - Only reconcile annotations and status for clusters\n2090266 - oc adm release extract is failing on mutli arch image\n2090268 - [AWS EFS] Operator not getting installed successfully on Hypershift Guest cluster\n2090336 - Multus 
logging should be disabled prior to release\n2090343 - Multus debug logging should be enabled temporarily for debugging podsandbox creation failures. \n2090358 - Initiating drain log message is displayed before the drain actually starts\n2090359 - Nutanix mapi-controller: misleading error message when the failure is caused by wrong credentials\n2090405 - [tracker] weird port mapping with asymmetric traffic [rhel-8.6.0.z]\n2090430 - gofmt code\n2090436 - It takes 30min-60min to update the machine count in custom MachineConfigPools (MCPs) when a node is removed from the pool\n2090437 - Bump CNO to k8s 1.24\n2090465 - golang version mismatch\n2090487 - Change default SNO Networking Type and disallow OpenShiftSDN a supported networking Type\n2090537 - failure in ovndb migration when db is not ready in HA mode\n2090549 - dpu-network-operator shall be able to run on amd64 arch platform\n2090621 - Metal3 plugin does not work properly with updated NodeMaintenance CRD\n2090627 - Git commit and branch are empty in MetalLB log\n2090692 - Bump to latest 1.24 k8s release\n2090730 - must-gather should include multus logs. 
\n2090731 - nmstate deploys two instances of webhook on a single-node cluster\n2090751 - oc image mirror skip-missing flag does not skip images\n2090755 - MetalLB: BGPAdvertisement validation allows duplicate entries for ip pool selector, ip address pools, node selector and bgp peers\n2090774 - Add Readme to plugin directory\n2090794 - MachineConfigPool cannot apply a configuration after fixing the pods that caused a drain alert\n2090809 - gm.ClockClass invalid syntax parse error in linux ptp daemon logs\n2090816 - OCP 4.8 Baremetal IPI installation failure: \"Bootstrap failed to complete: timed out waiting for the condition\"\n2090819 - oc-mirror does not catch invalid registry input when a namespace is specified\n2090827 - Rebase CoreDNS to 1.9.2 and k8s 1.24\n2090829 - Bump OpenShift router to k8s 1.24\n2090838 - Flaky test: ignore flapping host interface \u0027tunbr\u0027\n2090843 - addLogicalPort() performance/scale optimizations\n2090895 - Dynamic plugin nav extension \"startsWith\" property does not work\n2090929 - [etcd] cluster-backup.sh script has a conflict to use the \u0027/etc/kubernetes/static-pod-certs\u0027 folder if a custom API certificate is defined\n2090993 - [AI Day2] Worker node overview page crashes in Openshift console with TypeError\n2091029 - Cancel rollout action only appears when rollout is completed\n2091030 - Some BM may fail booting with default bootMode strategy\n2091033 - [Descheduler]: provide ability to override included/excluded namespaces\n2091087 - ODC Helm backend Owners file needs updates\n2091106 - Dependabot alert: Unhandled exception in gopkg.in/yaml.v3\n2091142 - Dependabot alert: Unhandled exception in gopkg.in/yaml.v3\n2091167 - IPsec runtime enabling not work in hypershift\n2091218 - Update Dev Console Helm backend to use helm 3.9.0\n2091433 - Update AWS instance types\n2091542 - Error Loading/404 not found page shown after clicking \"Current namespace only\"\n2091547 - Internet connection test with proxy permanently 
fails\n2091567 - oVirt CSI driver should use latest go-ovirt-client\n2091595 - Alertmanager configuration can\u0027t use OpsGenie\u0027s entity field when AlertmanagerConfig is enabled\n2091599 - PTP Dual Nic | Extend Events 4.11 - Up/Down master interface affects all the other interface in the same NIC accoording the events and metric\n2091603 - WebSocket connection restarts when switching tabs in WebTerminal\n2091613 - simple-kmod fails to build due to missing KVC\n2091634 - OVS 2.15 stops handling traffic once ovs-dpctl(2.17.2) is used against it\n2091730 - MCO e2e tests are failing with \"No token found in openshift-monitoring secrets\"\n2091746 - \"Oh no! Something went wrong\" shown after user creates MCP without \u0027spec\u0027\n2091770 - CVO gets stuck downloading an upgrade, with the version pod complaining about invalid options\n2091854 - clusteroperator status filter doesn\u0027t match all values in Status column\n2091901 - Log stream paused right after updating log lines in Web Console in OCP4.10\n2091902 - unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server has received too many requests and has asked us to try again later\n2091990 - wrong external-ids for ovn-controller lflow-cache-limit-kb\n2092003 - PR 3162 | BZ 2084450 - invalid URL schema for AWS causes tests to perma fail and break the cloud-network-config-controller\n2092041 - Bump cluster-dns-operator to k8s 1.24\n2092042 - Bump cluster-ingress-operator to k8s 1.24\n2092047 - Kube 1.24 rebase for cloud-network-config-controller\n2092137 - Search doesn\u0027t show all entries when name filter is cleared\n2092296 - Change Default MachineCIDR of Power VS Platform from 10.x to 192.168.0.0/16\n2092390 - [RDR] [UI] Multiple instances of Object Bucket, Object Bucket Claims and \u0027Overview\u0027 tab is present under Storage section on the Hub cluster when navigated back from the Managed cluster using the Hybrid console dropdown\n2092395 - 
etcdHighNumberOfFailedGRPCRequests alerts with wrong results\n2092408 - Wrong icon is used in the virtualization overview permissions card\n2092414 - In virtualization overview \"running vm per templates\" template list can be improved\n2092442 - Minimum time between drain retries is not the expected one\n2092464 - marketplace catalog defaults to v4.10\n2092473 - libovsdb performance backports\n2092495 - ovn: use up to 4 northd threads in non-SNO clusters\n2092502 - [azure-file-csi-driver] Stop shipping a NFS StorageClass\n2092509 - Invalid memory address error if non existing caBundle is configured in DNS-over-TLS using ForwardPlugins\n2092572 - acm-simple-kmod chart should create the namespace on the spoke cluster\n2092579 - Don\u0027t retry pod deletion if objects are not existing\n2092650 - [BM IPI with Provisioning Network] Worker nodes are not provisioned: ironic-agent is stuck before writing into disks\n2092703 - Incorrect mount propagation information in container status\n2092815 - can\u0027t delete the unwanted image from registry by oc-mirror\n2092851 - [Descheduler]: allow to customize the LowNodeUtilization strategy thresholds\n2092867 - make repository name unique in acm-ice/acm-simple-kmod examples\n2092880 - etcdHighNumberOfLeaderChanges returns incorrect number of leadership changes\n2092887 - oc-mirror list releases command uses filter-options flag instead of filter-by-os\n2092889 - Incorrect updating of EgressACLs using direction \"from-lport\"\n2092918 - CVE-2022-30321 go-getter: unsafe download (issue 1 of 3)\n2092923 - CVE-2022-30322 go-getter: unsafe download (issue 2 of 3)\n2092925 - CVE-2022-30323 go-getter: unsafe download (issue 3 of 3)\n2092928 - CVE-2022-26945 go-getter: command injection vulnerability\n2092937 - WebScale: OVN-k8s forwarding to external-gw over the secondary interfaces failing\n2092966 - [OCP 4.11] [azure] /etc/udev/rules.d/66-azure-storage.rules missing from initramfs\n2093044 - Azure machine-api-provider-azure 
Availability Set Name Length Limit\n2093047 - Dynamic Plugins: Generated API markdown duplicates `checkAccess` and `useAccessReview` doc\n2093126 - [4.11] Bootimage bump tracker\n2093236 - DNS operator stopped reconciling after 4.10 to 4.11 upgrade | 4.11 nightly to 4.11 nightly upgrade\n2093288 - Default catalogs fails liveness/readiness probes\n2093357 - Upgrading sno spoke with acm-ice, causes the sno to get unreachable\n2093368 - Installer orphans FIPs created for LoadBalancer Services on `cluster destroy`\n2093396 - Remove node-tainting for too-small MTU\n2093445 - ManagementState reconciliation breaks SR\n2093454 - Router proxy protocol doesn\u0027t work with dual-stack (IPv4 and IPv6) clusters\n2093462 - Ingress Operator isn\u0027t reconciling the ingress cluster operator object\n2093586 - Topology: Ctrl+space opens the quick search modal, but doesn\u0027t close it again\n2093593 - Import from Devfile shows configuration options that shoudn\u0027t be there\n2093597 - Import: Advanced option sentence is splited into two parts and headlines has no padding\n2093600 - Project access tab should apply new permissions before it delete old ones\n2093601 - Project access page doesn\u0027t allow the user to update the settings twice (without manually reload the content)\n2093783 - Should bump cluster-kube-descheduler-operator to kubernetes version V1.24\n2093797 - \u0027oc registry login\u0027 with serviceaccount function need update\n2093819 - An etcd member for a new machine was never added to the cluster\n2093930 - Gather console helm install totals metric\n2093957 - Oc-mirror write dup metadata to registry backend\n2093986 - Podsecurity violation error getting logged for pod-identity-webhook\n2093992 - Cluster version operator acknowledges upgrade failing on periodic-ci-openshift-release-master-nightly-4.11-e2e-metal-ipi-upgrade-ovn-ipv6\n2094023 - Add Git Flow - Template Labels for Deployment show as DeploymentConfig\n2094024 - bump oauth-apiserver deps to 
include 1.23.1 k8s that fixes etcd blips\n2094039 - egressIP panics with nil pointer dereference\n2094055 - Bump coreos-installer for s390x Secure Execution\n2094071 - No runbook created for SouthboundStale alert\n2094088 - Columns in NBDB may never be updated by OVNK\n2094104 - Demo dynamic plugin image tests should be skipped when testing console-operator\n2094152 - Alerts in the virtualization overview status card aren\u0027t filtered\n2094196 - Add default and validating webhooks for Power VS MAPI\n2094227 - Topology: Create Service Binding should not be the last option (even under delete)\n2094239 - custom pool Nodes with 0 nodes are always populated in progress bar\n2094303 - If og is configured with sa, operator installation will be failed. \n2094335 - [Nutanix] - debug logs are enabled by default in machine-controller\n2094342 - apirequests limits of Cluster CAPI Operator are too low for Azure platform\n2094438 - Make AWS URL parsing more lenient for GetNodeEgressIPConfiguration\n2094525 - Allow automatic upgrades for efs operator\n2094532 - ovn-windows CI jobs are broken\n2094675 - PTP Dual Nic | Extend Events 4.11 - when kill the phc2sys We have notification for the ptp4l physical master moved to free run\n2094694 - [Nutanix] No cluster name sanity validation - cluster name with a dot (\".\") character\n2094704 - Verbose log activated on kube-rbac-proxy in deployment prometheus-k8s\n2094801 - Kuryr controller keep restarting when handling IPs with leading zeros\n2094806 - Machine API oVrit component should use K8s 1.24 dependencies\n2094816 - Kuryr controller restarts when over quota\n2094833 - Repository overview page does not show default PipelineRun template for developer user\n2094857 - CloudShellTerminal loops indefinitely if DevWorkspace CR goes into failed state\n2094864 - Rebase CAPG to latest changes\n2094866 - oc-mirror does not always delete all manifests associated with an image during pruning\n2094896 - Run \u0027openshift-install agent 
create image\u0027 has segfault exception if cluster-manifests directory missing\n2094902 - Fix installer cross-compiling\n2094932 - MGMT-10403 Ingress should enable single-node cluster expansion on upgraded clusters\n2095049 - managed-csi StorageClass does not create PVs\n2095071 - Backend tests fails after devfile registry update\n2095083 - Observe \u003e Dashboards: Graphs may change a lot on automatic refresh\n2095110 - [ovn] northd container termination script must use bash\n2095113 - [ovnkube] bump to openvswitch2.17-2.17.0-22.el8fdp\n2095226 - Added changes to verify cloud connection and dhcpservices quota of a powervs instance\n2095229 - ingress-operator pod in CrashLoopBackOff in 4.11 after upgrade starting in 4.6 due to go panic\n2095231 - Kafka Sink sidebar in topology is empty\n2095247 - Event sink form doesn\u0027t show channel as sink until app is refreshed\n2095248 - [vSphere-CSI-Driver] does not report volume count limits correctly caused pod with multi volumes maybe schedule to not satisfied volume count node\n2095256 - Samples Owner needs to be Updated\n2095264 - ovs-configuration.service fails with Error: Failed to modify connection \u0027ovs-if-br-ex\u0027: failed to update connection: error writing to file \u0027/etc/NetworkManager/systemConnectionsMerged/ovs-if-br-ex.nmconnection\u0027\n2095362 - oVirt CSI driver operator should use latest go-ovirt-client\n2095574 - e2e-agnostic CI job fails\n2095687 - Debug Container shown for build logs and on click ui breaks\n2095703 - machinedeletionhooks doesn\u0027t work in vsphere cluster and BM cluster\n2095716 - New PSA component for Pod Security Standards enforcement is refusing openshift-operators ns\n2095756 - CNO panics with concurrent map read/write\n2095772 - Memory requests for ovnkube-master containers are over-sized\n2095917 - Nutanix set osDisk with diskSizeGB rather than diskSizeMiB\n2095941 - DNS Traffic not kept local to zone or node when Calico SDN utilized\n2096053 - Builder Image icons 
in Git Import flow are hard to see in Dark mode\n2096226 - crio fails to bind to tentative IP, causing service failure since RHOCS was rebased on RHEL 8.6\n2096315 - NodeClockNotSynchronising alert\u0027s severity should be critical\n2096350 - Web console doesn\u0027t display webhook errors for upgrades\n2096352 - Collect whole journal in gather\n2096380 - acm-simple-kmod references deprecated KVC example\n2096392 - Topology node icons are not properly visible in Dark mode\n2096394 - Add page Card items background color does not match with column background color in Dark mode\n2096413 - br-ex not created due to default bond interface having a different mac address than expected\n2096496 - FIPS issue on OCP SNO with RT Kernel via performance profile\n2096605 - [vsphere] no validation checking for diskType\n2096691 - [Alibaba 4.11] Specifying ResourceGroup id in install-config.yaml, New pv are still getting created to default ResourceGroups\n2096855 - `oc adm release new` failed with error when use an existing multi-arch release image as input\n2096905 - Openshift installer should not use the prism client embedded in nutanix terraform provider\n2096908 - Dark theme issue in pipeline builder, Helm rollback form, and Git import\n2097000 - KafkaConnections disappear from Topology after creating KafkaSink in Topology\n2097043 - No clean way to specify operand issues to KEDA OLM operator\n2097047 - MetalLB: matchExpressions used in CR like L2Advertisement, BGPAdvertisement, BGPPeers allow duplicate entries\n2097067 - ClusterVersion history pruner does not always retain initial completed update entry\n2097153 - poor performance on API call to vCenter ListTags with thousands of tags\n2097186 - PSa autolabeling in 4.11 env upgraded from 4.10 does not work due to missing RBAC objects\n2097239 - Change Lower CPU limits for Power VS cloud\n2097246 - Kuryr: verify and unit jobs failing due to upstream OpenStack dropping py36 support\n2097260 - openshift-install create manifests 
failed for Power VS platform\n2097276 - MetalLB CI deploys the operator via manifests and not using the csv\n2097282 - chore: update external-provisioner to the latest upstream release\n2097283 - chore: update external-snapshotter to the latest upstream release\n2097284 - chore: update external-attacher to the latest upstream release\n2097286 - chore: update node-driver-registrar to the latest upstream release\n2097334 - oc plugin help shows \u0027kubectl\u0027\n2097346 - Monitoring must-gather doesn\u0027t seem to be working anymore in 4.11\n2097400 - Shared Resource CSI Driver needs additional permissions for validation webhook\n2097454 - Placeholder bug for OCP 4.11.0 metadata release\n2097503 - chore: rebase against latest external-resizer\n2097555 - IngressControllersNotUpgradeable: load balancer service has been modified; changes must be reverted before upgrading\n2097607 - Add Power VS support to Webhooks tests in actuator e2e test\n2097685 - Ironic-agent can\u0027t restart because of existing container\n2097716 - settings under httpConfig is dropped with AlertmanagerConfig v1beta1\n2097810 - Required Network tools missing for Testing e2e PTP\n2097832 - clean up unused IPv6DualStackNoUpgrade feature gate\n2097940 - openshift-install destroy cluster traps if vpcRegion not specified\n2097954 - 4.11 installation failed at monitoring and network clusteroperators with error \"conmon: option parsing failed: Unknown option --log-global-size-max\" making all jobs failing\n2098172 - oc-mirror does not validatethe registry in the storage config\n2098175 - invalid license in python-dataclasses-0.8-2.el8 spec\n2098177 - python-pint-0.10.1-2.el8 has unused Patch0 in spec file\n2098242 - typo in SRO specialresourcemodule\n2098243 - Add error check to Platform create for Power VS\n2098392 - [OCP 4.11] Ironic cannot match \"wwn\" rootDeviceHint for a multipath device\n2098508 - Control-plane-machine-set-operator report panic\n2098610 - No need to check the push permission 
with ?manifests-only option\n2099293 - oVirt cluster API provider should use latest go-ovirt-client\n2099330 - Edit application grouping is shown to user with view only access in a cluster\n2099340 - CAPI e2e tests for AWS are missing\n2099357 - ovn-kubernetes needs explicit RBAC coordination leases for 1.24 bump\n2099358 - Dark mode+Topology update: Unexpected selected+hover border and background colors for app groups\n2099528 - Layout issue: No spacing in delete modals\n2099561 - Prometheus returns HTTP 500 error on /favicon.ico\n2099582 - Format and update Repository overview content\n2099611 - Failures on etcd-operator watch channels\n2099637 - Should print error when use --keep-manifest-list\\xfalse for manifestlist image\n2099654 - Topology performance: Endless rerender loop when showing a Http EventSink (KameletBinding)\n2099668 - KubeControllerManager should degrade when GC stops working\n2099695 - Update CAPG after rebase\n2099751 - specialresourcemodule stacktrace while looping over build status\n2099755 - EgressIP node\u0027s mgmtIP reachability configuration option\n2099763 - Update icons for event sources and sinks in topology, Add page, and context menu\n2099811 - UDP Packet loss in OpenShift using IPv6 [upcall]\n2099821 - exporting a pointer for the loop variable\n2099875 - The speaker won\u0027t start if there\u0027s another component on the host listening on 8080\n2099899 - oc-mirror looks for layers in the wrong repository when searching for release images during publishing\n2099928 - [FJ OCP4.11 Bug]: Add unit tests to image_customization_test file\n2099968 - [Azure-File-CSI] failed to provisioning volume in ARO cluster\n2100001 - Sync upstream v1.22.0 downstream\n2100007 - Run bundle-upgrade failed from the traditional File-Based Catalog installed operator\n2100033 - OCP 4.11 IPI - Some csr remain \"Pending\" post deployment\n2100038 - failure to update special-resource-lifecycle table during update Event\n2100079 - SDN needs explicit RBAC 
coordination leases for 1.24 bump\n2100138 - release info --bugs has no differentiator between Jira and Bugzilla\n2100155 - kube-apiserver-operator should raise an alert when there is a Pod Security admission violation\n2100159 - Dark theme: Build icon for pending status is not inverted in topology sidebar\n2100323 - Sqlit-based catsrc cannot be ready due to \"Error: open ./db-xxxx: permission denied\"\n2100347 - KASO retains old config values when switching from Medium/Default to empty worker latency profile\n2100356 - Remove Condition tab and create option from console as it is deprecated in OSP-1.8\n2100439 - [gce-pd] GCE PD in-tree storage plugin tests not running\n2100496 - [OCPonRHV]-oVirt API returns affinity groups without a description field\n2100507 - Remove redundant log lines from obj_retry.go\n2100536 - Update API to allow EgressIP node reachability check\n2100601 - Update CNO to allow EgressIP node reachability check\n2100643 - [Migration] [GCP]OVN can not rollback to SDN\n2100644 - openshift-ansible FTBFS on RHEL8\n2100669 - Telemetry should not log the full path if it contains a username\n2100749 - [OCP 4.11] multipath support needs multipath modules\n2100825 - Update machine-api-powervs go modules to latest version\n2100841 - tiny openshift-install usability fix for setting KUBECONFIG\n2101460 - An etcd member for a new machine was never added to the cluster\n2101498 - Revert Bug 2082599: add upper bound to number of failed attempts\n2102086 - The base image is still 4.10 for operator-sdk 1.22\n2102302 - Dummy bug for 4.10 backports\n2102362 - Valid regions should be allowed in GCP install config\n2102500 - Kubernetes NMState pods can not evict due to PDB on an SNO cluster\n2102639 - Drain happens before other image-registry pod is ready to service requests, causing disruption\n2102782 - topolvm-controller get into CrashLoopBackOff few minutes after install\n2102834 - [cloud-credential-operator]container has runAsNonRoot and image will run as 
root\n2102947 - [VPA] recommender is logging errors for pods with init containers\n2103053 - [4.11] Backport Prow CI improvements from master\n2103075 - Listing secrets in all namespaces with a specific labelSelector does not work properly\n2103080 - br-ex not created due to default bond interface having a different mac address than expected\n2103177 - disabling ipv6 router advertisements using \"all\" does not disable it on secondary interfaces\n2103728 - Carry HAProxy patch \u0027BUG/MEDIUM: h2: match absolute-path not path-absolute for :path\u0027\n2103749 - MachineConfigPool is not getting updated\n2104282 - heterogeneous arch: oc adm extract encodes arch specific release payload pullspec rather than the manifestlisted pullspec\n2104432 - [dpu-network-operator] Updating images to be consistent with ART\n2104552 - kube-controller-manager operator 4.11.0-rc.0 degraded on disabled monitoring stack\n2104561 - 4.10 to 4.11 update: Degraded node: unexpected on-disk state: mode mismatch for file: \"/etc/crio/crio.conf.d/01-ctrcfg-pidsLimit\"; expected: -rw-r--r--/420/0644; received: ----------/0/0\n2104589 - must-gather namespace should have ?privileged? 
warn and audit pod security labels besides enforce\n2104701 - In CI 4.10 HAProxy must-gather takes longer than 10 minutes\n2104717 - NetworkPolicies: ovnkube-master pods crashing due to panic: \"invalid memory address or nil pointer dereference\"\n2104727 - Bootstrap node should honor http proxy\n2104906 - Uninstall fails with Observed a panic: runtime.boundsError\n2104951 - Web console doesn\u0027t display webhook errors for upgrades\n2104991 - Completed pods may not be correctly cleaned up\n2105101 - NodeIP is used instead of EgressIP if egressPod is recreated within 60 seconds\n2105106 - co/node-tuning: Waiting for 15/72 Profiles to be applied\n2105146 - Degraded=True noise with: UpgradeBackupControllerDegraded: unable to retrieve cluster version, no completed update was found in cluster version status history\n2105167 - BuildConfig throws error when using a label with a / in it\n2105334 - vmware-vsphere-csi-driver-controller can\u0027t use host port error on e2e-vsphere-serial\n2105382 - Add a validation webhook for Nutanix machine provider spec in Machine API Operator\n2105468 - The ccoctl does not seem to know how to leverage the VMs service account to talk to GCP APIs. \n2105937 - telemeter golangci-lint outdated blocking ART PRs that update to Go1.18\n2106051 - Unable to deploy acm-ice using latest SRO 4.11 build\n2106058 - vSphere defaults to SecureBoot on; breaks installation of out-of-tree drivers [4.11.0]\n2106062 - [4.11] Bootimage bump tracker\n2106116 - IngressController spec.tuningOptions.healthCheckInterval validation allows invalid values such as \"0abc\"\n2106163 - Samples ImageStreams vs. registry.redhat.io: unsupported: V2 schema 1 manifest digests are no longer supported for image pulls\n2106313 - bond-cni: backport bond-cni GA items to 4.11\n2106543 - Typo in must-gather release-4.10\n2106594 - crud/other-routes.spec.ts Cypress test failing at a high rate in CI\n2106723 - [4.11] Upgrade from 4.11.0-rc0 -\u003e 4.11.0-rc.1 failed. 
rpm-ostree status shows No space left on device\n2106855 - [4.11.z] externalTrafficPolicy=Local is not working in local gateway mode if ovnkube-node is restarted\n2107493 - ReplicaSet prometheus-operator-admission-webhook has timed out progressing\n2107501 - metallb greenwave tests failure\n2107690 - Driver Container builds fail with \"error determining starting point for build: no FROM statement found\"\n2108175 - etcd backup seems to not be triggered in 4.10.18--\u003e4.10.20 upgrade\n2108617 - [oc adm release] extraction of the installer against a manifestlisted payload referenced by tag leads to a bad release image reference\n2108686 - rpm-ostreed: start limit hit easily\n2110505 - [Upgrade]deployment openshift-machine-api/machine-api-operator has a replica failure FailedCreate\n2110715 - openshift-controller-manager(-operator) namespace should clear run-level annotations\n2111055 - dummy bug for 4.10.z bz2110938\n\n5. References:\n\nhttps://access.redhat.com/security/cve/CVE-2018-25009\nhttps://access.redhat.com/security/cve/CVE-2018-25010\nhttps://access.redhat.com/security/cve/CVE-2018-25012\nhttps://access.redhat.com/security/cve/CVE-2018-25013\nhttps://access.redhat.com/security/cve/CVE-2018-25014\nhttps://access.redhat.com/security/cve/CVE-2018-25032\nhttps://access.redhat.com/security/cve/CVE-2019-5827\nhttps://access.redhat.com/security/cve/CVE-2019-13750\nhttps://access.redhat.com/security/cve/CVE-2019-13751\nhttps://access.redhat.com/security/cve/CVE-2019-17594\nhttps://access.redhat.com/security/cve/CVE-2019-17595\nhttps://access.redhat.com/security/cve/CVE-2019-18218\nhttps://access.redhat.com/security/cve/CVE-2019-19603\nhttps://access.redhat.com/security/cve/CVE-2019-20838\nhttps://access.redhat.com/security/cve/CVE-2020-13435\nhttps://access.redhat.com/security/cve/CVE-2020-14155\nhttps://access.redhat.com/security/cve/CVE-2020-17541\nhttps://access.redhat.com/security/cve/CVE-2020-19131\nhttps://access.redhat.com/security/cve/CVE-2020-24370\nhttp
s://access.redhat.com/security/cve/CVE-2020-28493\nhttps://access.redhat.com/security/cve/CVE-2020-35492\nhttps://access.redhat.com/security/cve/CVE-2020-36330\nhttps://access.redhat.com/security/cve/CVE-2020-36331\nhttps://access.redhat.com/security/cve/CVE-2020-36332\nhttps://access.redhat.com/security/cve/CVE-2021-3481\nhttps://access.redhat.com/security/cve/CVE-2021-3580\nhttps://access.redhat.com/security/cve/CVE-2021-3634\nhttps://access.redhat.com/security/cve/CVE-2021-3672\nhttps://access.redhat.com/security/cve/CVE-2021-3695\nhttps://access.redhat.com/security/cve/CVE-2021-3696\nhttps://access.redhat.com/security/cve/CVE-2021-3697\nhttps://access.redhat.com/security/cve/CVE-2021-3737\nhttps://access.redhat.com/security/cve/CVE-2021-4115\nhttps://access.redhat.com/security/cve/CVE-2021-4156\nhttps://access.redhat.com/security/cve/CVE-2021-4189\nhttps://access.redhat.com/security/cve/CVE-2021-20095\nhttps://access.redhat.com/security/cve/CVE-2021-20231\nhttps://access.redhat.com/security/cve/CVE-2021-20232\nhttps://access.redhat.com/security/cve/CVE-2021-23177\nhttps://access.redhat.com/security/cve/CVE-2021-23566\nhttps://access.redhat.com/security/cve/CVE-2021-23648\nhttps://access.redhat.com/security/cve/CVE-2021-25219\nhttps://access.redhat.com/security/cve/CVE-2021-31535\nhttps://access.redhat.com/security/cve/CVE-2021-31566\nhttps://access.redhat.com/security/cve/CVE-2021-36084\nhttps://access.redhat.com/security/cve/CVE-2021-36085\nhttps://access.redhat.com/security/cve/CVE-2021-36086\nhttps://access.redhat.com/security/cve/CVE-2021-36087\nhttps://access.redhat.com/security/cve/CVE-2021-38185\nhttps://access.redhat.com/security/cve/CVE-2021-38593\nhttps://access.redhat.com/security/cve/CVE-2021-40528\nhttps://access.redhat.com/security/cve/CVE-2021-41190\nhttps://access.redhat.com/security/cve/CVE-2021-41617\nhttps://access.redhat.com/security/cve/CVE-2021-42771\nhttps://access.redhat.com/security/cve/CVE-2021-43527\nhttps://access.redhat.com/security/
cve/CVE-2021-43818\nhttps://access.redhat.com/security/cve/CVE-2021-44225\nhttps://access.redhat.com/security/cve/CVE-2021-44906\nhttps://access.redhat.com/security/cve/CVE-2022-0235\nhttps://access.redhat.com/security/cve/CVE-2022-0778\nhttps://access.redhat.com/security/cve/CVE-2022-1012\nhttps://access.redhat.com/security/cve/CVE-2022-1215\nhttps://access.redhat.com/security/cve/CVE-2022-1271\nhttps://access.redhat.com/security/cve/CVE-2022-1292\nhttps://access.redhat.com/security/cve/CVE-2022-1586\nhttps://access.redhat.com/security/cve/CVE-2022-1621\nhttps://access.redhat.com/security/cve/CVE-2022-1629\nhttps://access.redhat.com/security/cve/CVE-2022-1706\nhttps://access.redhat.com/security/cve/CVE-2022-1729\nhttps://access.redhat.com/security/cve/CVE-2022-2068\nhttps://access.redhat.com/security/cve/CVE-2022-2097\nhttps://access.redhat.com/security/cve/CVE-2022-21698\nhttps://access.redhat.com/security/cve/CVE-2022-22576\nhttps://access.redhat.com/security/cve/CVE-2022-23772\nhttps://access.redhat.com/security/cve/CVE-2022-23773\nhttps://access.redhat.com/security/cve/CVE-2022-23806\nhttps://access.redhat.com/security/cve/CVE-2022-24407\nhttps://access.redhat.com/security/cve/CVE-2022-24675\nhttps://access.redhat.com/security/cve/CVE-2022-24903\nhttps://access.redhat.com/security/cve/CVE-2022-24921\nhttps://access.redhat.com/security/cve/CVE-2022-25313\nhttps://access.redhat.com/security/cve/CVE-2022-25314\nhttps://access.redhat.com/security/cve/CVE-2022-26691\nhttps://access.redhat.com/security/cve/CVE-2022-26945\nhttps://access.redhat.com/security/cve/CVE-2022-27191\nhttps://access.redhat.com/security/cve/CVE-2022-27774\nhttps://access.redhat.com/security/cve/CVE-2022-27776\nhttps://access.redhat.com/security/cve/CVE-2022-27782\nhttps://access.redhat.com/security/cve/CVE-2022-28327\nhttps://access.redhat.com/security/cve/CVE-2022-28733\nhttps://access.redhat.com/security/cve/CVE-2022-28734\nhttps://access.redhat.com/security/cve/CVE-2022-28735\nhttps://acces
s.redhat.com/security/cve/CVE-2022-28736\nhttps://access.redhat.com/security/cve/CVE-2022-28737\nhttps://access.redhat.com/security/cve/CVE-2022-29162\nhttps://access.redhat.com/security/cve/CVE-2022-29810\nhttps://access.redhat.com/security/cve/CVE-2022-29824\nhttps://access.redhat.com/security/cve/CVE-2022-30321\nhttps://access.redhat.com/security/cve/CVE-2022-30322\nhttps://access.redhat.com/security/cve/CVE-2022-30323\nhttps://access.redhat.com/security/cve/CVE-2022-32250\nhttps://access.redhat.com/security/updates/classification/#important\n\n6. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYvOfk9zjgjWX9erEAQhJ/w//UlbBGKBBFBAyfEmQf9Zu0yyv6MfZW0Zl\niO1qXVIl9UQUFjTY5ejerx7cP8EBWLhKaiiqRRjbjtj+w+ENGB4LLj6TEUrSM5oA\nYEmhnX3M+GUKF7Px61J7rZfltIOGhYBvJ+qNZL2jvqz1NciVgI4/71cZWnvDbGpa\n02w3Dn0JzhTSR9znNs9LKcV/anttJ3NtOYhqMXnN8EpKdtzQkKRazc7xkOTxfxyl\njRiER2Z0TzKDE6dMoVijS2Sv5j/JF0LRwetkZl6+oh8ehKh5GRV3lPg3eVkhzDEo\n/gp0P9GdLMHi6cS6uqcREbod//waSAa7cssgULoycFwjzbDK3L2c+wMuWQIgXJca\nRYuP6wvrdGwiI1mgUi/226EzcZYeTeoKxnHkp7AsN9l96pJYafj0fnK1p9NM/8g3\njBE/W4K8jdDNVd5l1Z5O0Nyxk6g4P8MKMe10/w/HDXFPSgufiCYIGX4TKqb+ESIR\nSuYlSMjoGsB4mv1KMDEUJX6d8T05lpEwJT0RYNdZOouuObYMtcHLpRQHH9mkj86W\npHdma5aGG/mTMvSMW6l6L05uT41Azm6fVimTv+E5WvViBni2480CVH+9RexKKSyL\nXcJX1gaLdo+72I/gZrtT+XE5tcJ3Sf5fmfsenQeY4KFum/cwzbM6y7RGn47xlEWB\nxBWKPzRxz0Q=9r0B\n-----END PGP SIGNATURE-----\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n", "sources": [ { "db": "NVD", "id": "CVE-2020-36330" }, { "db": "JVNDB", "id": "JVNDB-2018-016580" }, { "db": "VULHUB", "id": "VHN-391909" }, { "db": "VULMON", "id": "CVE-2020-36330" }, { "db": "PACKETSTORM", "id": "165287" }, { "db": "PACKETSTORM", "id": "165296" }, { "db": "PACKETSTORM", "id": "164842" }, { 
"db": "PACKETSTORM", "id": "165631" }, { "db": "PACKETSTORM", "id": "164967" }, { "db": "PACKETSTORM", "id": "169076" }, { "db": "PACKETSTORM", "id": "168042" } ], "trust": 2.43 }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "NVD", "id": "CVE-2020-36330", "trust": 4.1 }, { "db": "PACKETSTORM", "id": "164842", "trust": 0.8 }, { "db": "PACKETSTORM", "id": "165287", "trust": 0.8 }, { "db": "JVNDB", "id": "JVNDB-2018-016580", "trust": 0.8 }, { "db": "PACKETSTORM", "id": "162900", "trust": 0.7 }, { "db": "PACKETSTORM", "id": "163076", "trust": 0.7 }, { "db": "PACKETSTORM", "id": "165286", "trust": 0.7 }, { "db": "CNNVD", "id": "CNNVD-202105-1386", "trust": 0.7 }, { "db": "AUSCERT", "id": "ESB-2022.3977", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.2102", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.1965", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.4254", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.2485.2", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.1880", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.3905", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.1914", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.3789", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.0245", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.1959", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.4229", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2021072216", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2021061301", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2021060725", "trust": 0.6 }, { "db": "PACKETSTORM", "id": "163645", "trust": 0.6 }, { "db": "PACKETSTORM", "id": "165288", "trust": 0.1 }, { "db": "VULHUB", "id": "VHN-391909", "trust": 0.1 }, { "db": "VULMON", "id": "CVE-2020-36330", "trust": 0.1 }, { "db": "PACKETSTORM", "id": 
"165296", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "165631", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "164967", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "169076", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "168042", "trust": 0.1 } ], "sources": [ { "db": "VULHUB", "id": "VHN-391909" }, { "db": "VULMON", "id": "CVE-2020-36330" }, { "db": "JVNDB", "id": "JVNDB-2018-016580" }, { "db": "PACKETSTORM", "id": "165287" }, { "db": "PACKETSTORM", "id": "165296" }, { "db": "PACKETSTORM", "id": "164842" }, { "db": "PACKETSTORM", "id": "165631" }, { "db": "PACKETSTORM", "id": "164967" }, { "db": "PACKETSTORM", "id": "169076" }, { "db": "PACKETSTORM", "id": "168042" }, { "db": "CNNVD", "id": "CNNVD-202105-1386" }, { "db": "NVD", "id": "CVE-2020-36330" } ] }, "id": "VAR-202105-1457", "iot": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": true, "sources": [ { "db": "VULHUB", "id": "VHN-391909" } ], "trust": 0.01 }, "last_update_date": "2024-07-23T21:10:31.569000Z", "patch": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/patch#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "title": "HT212601 Apple\u00a0 Security update", "trust": 0.8, "url": "https://lists.debian.org/debian-lts-announce/2021/06/msg00005.html" }, { "title": "libwebp Buffer error vulnerability fix", "trust": 0.6, "url": "http://123.124.177.30/web/xxk/bdxqbyid.tag?id=151883" }, { "title": "Debian Security Advisories: DSA-4930-1 libwebp -- security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=debian_security_advisories\u0026qid=6dad0021173658916444dfc89f8d2495" }, { "title": "Red Hat: Important: OpenShift Container Platform 4.11.0 bug fix and security update", "trust": 0.1, "url": 
"https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20225069 - security advisory" } ], "sources": [ { "db": "VULMON", "id": "CVE-2020-36330" }, { "db": "JVNDB", "id": "JVNDB-2018-016580" }, { "db": "CNNVD", "id": "CNNVD-202105-1386" } ] }, "problemtype_data": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "problemtype": "CWE-125", "trust": 1.1 }, { "problemtype": "Out-of-bounds read (CWE-125) [NVD Evaluation ]", "trust": 0.8 } ], "sources": [ { "db": "VULHUB", "id": "VHN-391909" }, { "db": "JVNDB", "id": "JVNDB-2018-016580" }, { "db": "NVD", "id": "CVE-2020-36330" } ] }, "references": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/references#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 2.6, "url": "https://bugzilla.redhat.com/show_bug.cgi?id=1956853" }, { "trust": 1.9, "url": "https://www.debian.org/security/2021/dsa-4930" }, { "trust": 1.8, "url": "https://security.netapp.com/advisory/ntap-20211104-0004/" }, { "trust": 1.8, "url": "https://support.apple.com/kb/ht212601" }, { "trust": 1.8, "url": "http://seclists.org/fulldisclosure/2021/jul/54" }, { "trust": 1.8, "url": "https://lists.debian.org/debian-lts-announce/2021/06/msg00005.html" }, { "trust": 1.8, "url": "https://lists.debian.org/debian-lts-announce/2021/06/msg00006.html" }, { "trust": 1.7, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36330" }, { "trust": 0.7, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25013" }, { "trust": 0.7, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25009" }, { "trust": 0.7, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25010" }, { "trust": 0.7, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25014" }, { "trust": 0.6, "url": 
"https://access.redhat.com/security/cve/cve-2018-25013" }, { "trust": 0.6, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25012" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2018-25014" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2018-25012" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2020-36331" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2020-36330" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2020-36332" }, { "trust": 0.6, "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce" }, { "trust": 0.6, "url": "https://bugzilla.redhat.com/):" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2018-25009" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2018-25010" }, { "trust": 0.6, "url": "https://access.redhat.com/security/team/contact/" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.0245" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.3977" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.1959" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/165287/red-hat-security-advisory-2021-5127-05.html" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2021060725" }, { "trust": 0.6, "url": "https://vigilance.fr/vulnerability/libwebp-five-vulnerabilities-35580" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.2485.2" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.1965" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/165286/red-hat-security-advisory-2021-5128-06.html" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2021072216" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.3789" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.3905" }, { "trust": 0.6, "url": 
"https://www.auscert.org.au/bulletins/esb-2021.1914" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.4229" }, { "trust": 0.6, "url": "https://support.apple.com/en-us/ht212601" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.1880" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2021061301" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/163645/apple-security-advisory-2021-07-21-1.html" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.4254" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.2102" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/163076/ubuntu-security-notice-usn-4971-2.html" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/162900/ubuntu-security-notice-usn-4971-1.html" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/164842/red-hat-security-advisory-2021-4231-04.html" }, { "trust": 0.5, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-5827" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2020-13435" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2019-5827" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2020-24370" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2019-13751" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2019-19603" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2019-17594" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2021-36086" }, { "trust": 0.5, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13750" }, { "trust": 0.5, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13751" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2021-36084" }, { "trust": 0.5, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17594" }, { "trust": 0.5, "url": 
"https://access.redhat.com/security/cve/cve-2020-17541" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2021-36087" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2021-31535" }, { "trust": 0.5, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19603" }, { "trust": 0.5, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-18218" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2021-20232" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2019-20838" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2021-20231" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2020-14155" }, { "trust": 0.5, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20838" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2021-36085" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2019-17595" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2021-3481" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2019-13750" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2019-18218" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2021-3580" }, { "trust": 0.5, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17595" }, { "trust": 0.5, "url": "https://access.redhat.com/security/updates/classification/#moderate" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-16135" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-3200" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2020-35522" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2020-35524" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-27645" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-33574" }, { "trust": 0.4, "url": 
"https://access.redhat.com/security/cve/cve-2021-43527" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2020-14145" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14145" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2020-35521" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-35942" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-3572" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2020-12762" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-22898" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-12762" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2020-16135" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-3800" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-3445" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13435" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-22925" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-20266" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-22876" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-33560" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-42574" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14155" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2020-35523" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-28153" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-3426" }, { "trust": 0.3, "url": "https://docs.openshift.com/container-platform/4.7/logging/cluster-logging-upgrading.html" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-24370" }, { "trust": 0.3, "url": 
"https://access.redhat.com/security/cve/cve-2021-3712" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-17541" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36331" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-10001" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3778" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2020-10001" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3796" }, { "trust": 0.2, "url": "https://access.redhat.com/security/vulnerabilities/rhsb-2021-009" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-44228" }, { "trust": 0.2, "url": "https://issues.jboss.org/):" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-24504" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-27777" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-20239" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-36158" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-35448" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3635" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-20284" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-36386" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-20673" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-0427" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-24586" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3348" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-26140" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3487" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-26146" }, { "trust": 0.2, "url": 
"https://access.redhat.com/security/cve/cve-2021-31440" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3732" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-0129" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-24502" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3564" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-0427" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-23133" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-26144" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3679" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-36312" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-29368" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-24588" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-29646" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-29155" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3489" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-29660" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-26139" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-28971" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-23841" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-14615" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-26143" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3600" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-26145" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2018-20673" }, { "trust": 0.2, "url": 
"https://access.redhat.com/security/cve/cve-2021-23840" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-33200" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-29650" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-33033" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-20194" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-26147" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-31916" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-24503" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-14615" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-24502" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-31829" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3573" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-20197" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-26141" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-28950" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-24587" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-24503" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3659" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36332" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-41617" }, { "trust": 0.1, "url": "https://cwe.mitre.org/data/definitions/125.html" }, { "trust": 0.1, "url": "https://nvd.nist.gov" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35524" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35522" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-37136" }, { "trust": 0.1, "url": 
"https://nvd.nist.gov/vuln/detail/cve-2020-35523" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-37137" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-20317" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21409" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-43267" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.8/release_notes/ocp-4-8-release-notes.html" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:5127" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35521" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:5137" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-release-notes.html" }, { "trust": 0.1, "url": "https://access.redhat.com/articles/11258" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:4231" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.5_release_notes/" }, { "trust": 0.1, "url": "https://access.redhat.com/security/team/key/" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-37750" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-27823" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3733" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-1870" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3575" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-30758" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-13558" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-15389" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33938" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2018-5727" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33929" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-5785" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-30665" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-12973" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-30689" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-20847" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-30682" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33928" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/latest/migration_toolkit_for_containers/installing-mtc.html" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-22946" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-18032" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-1801" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33930" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-1765" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2016-4658" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-20845" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-26927" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-20847" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-27918" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-30749" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-30795" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-5785" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-1788" }, { "trust": 0.1, 
"url": "https://nvd.nist.gov/vuln/detail/cve-2018-5727" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-30744" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21775" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21806" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-27814" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-36241" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-30797" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2016-4658" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13558" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-20321" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-27842" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-1799" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21779" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-29623" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-20271" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3948" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-22947" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-27828" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-12973" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-20845" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-1844" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-1871" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-29338" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-30734" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-26926" }, { 
"trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-30720" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-28650" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-27843" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-24870" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-27845" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-1789" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-30663" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-30799" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3272" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:0202" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15389" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-27824" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.9/release_notes/ocp-4-9-release-notes.html" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33194" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:4627" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36328" }, { "trust": 0.1, "url": "https://www.debian.org/security/faq" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36329" }, { "trust": 0.1, "url": "https://security-tracker.debian.org/tracker/libwebp" }, { "trust": 0.1, "url": "https://www.debian.org/security/" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25011" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-28327" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-44225" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0235" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2022-32250" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-27776" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1586" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-43818" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:5068" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-27774" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-26945" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-4189" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-38593" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-20095" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-24407" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1271" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1629" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-2097" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3634" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.11/release_notes/ocp-4-11-release-notes.html" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-19131" }, { "trust": 0.1, "url": "https://access.redhat.com/security/updates/classification/#important" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3696" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-24921" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-38185" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23648" }, { "trust": 0.1, "url": "https://github.com/util-linux/util-linux/commit/eab90ef8d4f66394285e0cff1dfc0a27242c05aa" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2022-2068" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-4156" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:5069" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-25313" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-28733" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-27191" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-29162" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25032" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-35492" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3672" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-29824" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-23772" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23177" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1621" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-27782" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3737" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-28736" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-30321" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-42771" }, { "trust": 0.1, "url": "https://10.0.0.7:2379" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-21698" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1292" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-22576" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3697" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1706" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2022-28734" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.11/updating/updating-cluster-cli.html" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-40528" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-28737" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-30322" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-25219" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-44906" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-31566" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3695" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-25314" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-28735" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1215" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-23806" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1729" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-41190" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-29810" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-26691" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-24903" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-4115" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1012" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23566" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-28493" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-25032" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-23773" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2022-24675" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-30323" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0778" } ], "sources": [ { "db": "VULHUB", "id": "VHN-391909" }, { "db": "VULMON", "id": "CVE-2020-36330" }, { "db": "JVNDB", "id": "JVNDB-2018-016580" }, { "db": "PACKETSTORM", "id": "165287" }, { "db": "PACKETSTORM", "id": "165296" }, { "db": "PACKETSTORM", "id": "164842" }, { "db": "PACKETSTORM", "id": "165631" }, { "db": "PACKETSTORM", "id": "164967" }, { "db": "PACKETSTORM", "id": "169076" }, { "db": "PACKETSTORM", "id": "168042" }, { "db": "CNNVD", "id": "CNNVD-202105-1386" }, { "db": "NVD", "id": "CVE-2020-36330" } ] }, "sources": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "VULHUB", "id": "VHN-391909" }, { "db": "VULMON", "id": "CVE-2020-36330" }, { "db": "JVNDB", "id": "JVNDB-2018-016580" }, { "db": "PACKETSTORM", "id": "165287" }, { "db": "PACKETSTORM", "id": "165296" }, { "db": "PACKETSTORM", "id": "164842" }, { "db": "PACKETSTORM", "id": "165631" }, { "db": "PACKETSTORM", "id": "164967" }, { "db": "PACKETSTORM", "id": "169076" }, { "db": "PACKETSTORM", "id": "168042" }, { "db": "CNNVD", "id": "CNNVD-202105-1386" }, { "db": "NVD", "id": "CVE-2020-36330" } ] }, "sources_release_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2021-05-21T00:00:00", "db": "VULHUB", "id": "VHN-391909" }, { "date": "2021-05-21T00:00:00", "db": "VULMON", "id": "CVE-2020-36330" }, { "date": "2022-01-27T00:00:00", "db": "JVNDB", "id": "JVNDB-2018-016580" }, { "date": "2021-12-15T15:20:43", "db": "PACKETSTORM", "id": "165287" }, { "date": "2021-12-15T15:27:05", "db": "PACKETSTORM", "id": "165296" }, { "date": "2021-11-10T17:05:32", "db": "PACKETSTORM", "id": "164842" }, { "date": 
"2022-01-20T17:48:29", "db": "PACKETSTORM", "id": "165631" }, { "date": "2021-11-15T17:25:56", "db": "PACKETSTORM", "id": "164967" }, { "date": "2021-06-28T19:12:00", "db": "PACKETSTORM", "id": "169076" }, { "date": "2022-08-10T15:56:22", "db": "PACKETSTORM", "id": "168042" }, { "date": "2021-05-21T00:00:00", "db": "CNNVD", "id": "CNNVD-202105-1386" }, { "date": "2021-05-21T17:15:08.353000", "db": "NVD", "id": "CVE-2020-36330" } ] }, "sources_update_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2021-11-30T00:00:00", "db": "VULHUB", "id": "VHN-391909" }, { "date": "2021-11-30T00:00:00", "db": "VULMON", "id": "CVE-2020-36330" }, { "date": "2022-01-27T08:54:00", "db": "JVNDB", "id": "JVNDB-2018-016580" }, { "date": "2022-12-09T00:00:00", "db": "CNNVD", "id": "CNNVD-202105-1386" }, { "date": "2021-11-30T19:43:36.433000", "db": "NVD", "id": "CVE-2020-36330" } ] }, "threat_type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/threat_type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "remote", "sources": [ { "db": "CNNVD", "id": "CNNVD-202105-1386" } ], "trust": 0.6 }, "title": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/title#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "libwebp\u00a0 Out-of-bounds read vulnerability", "sources": [ { "db": "JVNDB", "id": "JVNDB-2018-016580" } ], "trust": 0.8 }, "type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "buffer error", "sources": [ { "db": "CNNVD", "id": "CNNVD-202105-1386" } ], "trust": 0.6 } }
var-202105-1325
Vulnerability from variot
In ISC DHCP 4.1-ESV-R1 -> 4.1-ESV-R16 and ISC DHCP 4.4.0 -> 4.4.2 (other branches of ISC DHCP, i.e., releases in the 4.0.x series or lower and releases in the 4.3.x series, are beyond their End-of-Life (EOL) and no longer supported by ISC; from inspection it is clear that the defect is also present in releases from those series, but they have not been officially tested for the vulnerability), the outcome of encountering the defect while reading a lease that will trigger it varies according to:

- the component being affected (i.e., dhclient or dhcpd)
- whether the package was built as a 32-bit or 64-bit binary
- whether the compiler flag -fstack-protector-strong was used when compiling

In dhclient, ISC has not successfully reproduced the error on a 64-bit system. However, on a 32-bit system it is possible to cause dhclient to crash when reading an improper lease, which could cause network connectivity problems for an affected system due to the absence of a running DHCP client process.

In dhcpd, when run in DHCPv4 or DHCPv6 mode:

- If the dhcpd server binary was built for a 32-bit architecture AND the -fstack-protector-strong flag was specified to the compiler, dhcpd may exit while parsing a lease file containing an objectionable lease, resulting in lack of service to clients. Additionally, the offending lease and the lease immediately following it in the lease database may be improperly deleted.
- If the dhcpd server binary was built for a 64-bit architecture OR the -fstack-protector-strong compiler flag was NOT specified, the crash will not occur, but it is possible for the offending lease and the lease which immediately followed it to be improperly deleted.

There is a discrepancy between the code that handles encapsulated option information in leases transmitted "on the wire" and the code which reads and parses lease information after it has been written to disk storage.
The highest threat from this vulnerability is to data confidentiality and integrity as well as service availability. (CVE-2021-25217). -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
===================================================================== Red Hat Security Advisory
Synopsis: Important: dhcp security update Advisory ID: RHSA-2021:2469-01 Product: Red Hat Enterprise Linux Advisory URL: https://access.redhat.com/errata/RHSA-2021:2469 Issue date: 2021-06-17 CVE Names: CVE-2021-25217 =====================================================================
- Summary:
An update for dhcp is now available for Red Hat Enterprise Linux 7.6 Advanced Update Support, Red Hat Enterprise Linux 7.6 Telco Extended Update Support, and Red Hat Enterprise Linux 7.6 Update Services for SAP Solutions.
Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Relevant releases/architectures:
Red Hat Enterprise Linux Server AUS (v. 7.6) - x86_64 Red Hat Enterprise Linux Server E4S (v. 7.6) - ppc64le, x86_64 Red Hat Enterprise Linux Server Optional AUS (v. 7.6) - x86_64 Red Hat Enterprise Linux Server Optional E4S (v. 7.6) - ppc64le, x86_64 Red Hat Enterprise Linux Server Optional TUS (v. 7.6) - x86_64 Red Hat Enterprise Linux Server TUS (v. 7.6) - x86_64
- Description:
The Dynamic Host Configuration Protocol (DHCP) is a protocol that allows individual devices on an IP network to get their own network configuration information, including an IP address, a subnet mask, and a broadcast address. The dhcp packages provide a relay agent and ISC DHCP service required to enable and administer DHCP on a network.
Security Fix(es):
- dhcp: stack-based buffer overflow when parsing statements with colon-separated hex digits in config or lease files in dhcpd and dhclient (CVE-2021-25217)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
- Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/11258
- Bugs fixed (https://bugzilla.redhat.com/):
1963258 - CVE-2021-25217 dhcp: stack-based buffer overflow when parsing statements with colon-separated hex digits in config or lease files in dhcpd and dhclient
- Package List:
Red Hat Enterprise Linux Server AUS (v. 7.6):
Source: dhcp-4.2.5-69.el7_6.1.src.rpm
x86_64: dhclient-4.2.5-69.el7_6.1.x86_64.rpm dhcp-4.2.5-69.el7_6.1.x86_64.rpm dhcp-common-4.2.5-69.el7_6.1.x86_64.rpm dhcp-debuginfo-4.2.5-69.el7_6.1.i686.rpm dhcp-debuginfo-4.2.5-69.el7_6.1.x86_64.rpm dhcp-libs-4.2.5-69.el7_6.1.i686.rpm dhcp-libs-4.2.5-69.el7_6.1.x86_64.rpm
Red Hat Enterprise Linux Server E4S (v. 7.6):
Source: dhcp-4.2.5-69.el7_6.1.src.rpm
ppc64le: dhclient-4.2.5-69.el7_6.1.ppc64le.rpm dhcp-4.2.5-69.el7_6.1.ppc64le.rpm dhcp-common-4.2.5-69.el7_6.1.ppc64le.rpm dhcp-debuginfo-4.2.5-69.el7_6.1.ppc64le.rpm dhcp-libs-4.2.5-69.el7_6.1.ppc64le.rpm
x86_64: dhclient-4.2.5-69.el7_6.1.x86_64.rpm dhcp-4.2.5-69.el7_6.1.x86_64.rpm dhcp-common-4.2.5-69.el7_6.1.x86_64.rpm dhcp-debuginfo-4.2.5-69.el7_6.1.i686.rpm dhcp-debuginfo-4.2.5-69.el7_6.1.x86_64.rpm dhcp-libs-4.2.5-69.el7_6.1.i686.rpm dhcp-libs-4.2.5-69.el7_6.1.x86_64.rpm
Red Hat Enterprise Linux Server TUS (v. 7.6):
Source: dhcp-4.2.5-69.el7_6.1.src.rpm
x86_64: dhclient-4.2.5-69.el7_6.1.x86_64.rpm dhcp-4.2.5-69.el7_6.1.x86_64.rpm dhcp-common-4.2.5-69.el7_6.1.x86_64.rpm dhcp-debuginfo-4.2.5-69.el7_6.1.i686.rpm dhcp-debuginfo-4.2.5-69.el7_6.1.x86_64.rpm dhcp-libs-4.2.5-69.el7_6.1.i686.rpm dhcp-libs-4.2.5-69.el7_6.1.x86_64.rpm
Red Hat Enterprise Linux Server Optional AUS (v. 7.6):
x86_64: dhcp-debuginfo-4.2.5-69.el7_6.1.i686.rpm dhcp-debuginfo-4.2.5-69.el7_6.1.x86_64.rpm dhcp-devel-4.2.5-69.el7_6.1.i686.rpm dhcp-devel-4.2.5-69.el7_6.1.x86_64.rpm
Red Hat Enterprise Linux Server Optional E4S (v. 7.6):
ppc64le: dhcp-debuginfo-4.2.5-69.el7_6.1.ppc64le.rpm dhcp-devel-4.2.5-69.el7_6.1.ppc64le.rpm
x86_64: dhcp-debuginfo-4.2.5-69.el7_6.1.i686.rpm dhcp-debuginfo-4.2.5-69.el7_6.1.x86_64.rpm dhcp-devel-4.2.5-69.el7_6.1.i686.rpm dhcp-devel-4.2.5-69.el7_6.1.x86_64.rpm
Red Hat Enterprise Linux Server Optional TUS (v. 7.6):
x86_64: dhcp-debuginfo-4.2.5-69.el7_6.1.i686.rpm dhcp-debuginfo-4.2.5-69.el7_6.1.x86_64.rpm dhcp-devel-4.2.5-69.el7_6.1.i686.rpm dhcp-devel-4.2.5-69.el7_6.1.x86_64.rpm
These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
- References:
https://access.redhat.com/security/cve/CVE-2021-25217 https://access.redhat.com/security/updates/classification/#important
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2021 Red Hat, Inc. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1
iQIVAwUBYMs0KtzjgjWX9erEAQis7xAAhh3MBohMBq6bZd6sPasNG4rPX+Xh5AWf D+6WNTQLV1u1IU4ZzGKVMtBNSfCd8m727z/L0d4wBof06ngUXHkdR4AEzn5uuWSz lHzlgbpmvqxeBnXrHOG1WE43JNXHSsj0u8eARsLxEU4/rxnbLVOj5dMJkdWmXN61 DocHHFVw6GmdZSCr6/tLjvG57fWtVLQF4SpEdhXz55iNZ1l6y09FDtoom/FuXIcG VnsUpsu/iWMFaUaVQH3sFVLksl39IrHFQxvskXR+FHAPzb8vVuKyNihJ5b3BUhfh jTUKPxLO+X0/K9+cNFVSuSTPr7eHpRRHdUbFIHcUB0s1ACOnmvHr6G8FaVAi9BQZ 6hzWcOFOZS7fF4TnXF3q0yDAKApRwlyF1PP21u1XdCb17Z4+E2LZF0nqnbb3hCxV JfnsadNc2Re/gc3u1bOGQb56ylc7LC74BeMDoJSeldqdPeT5JUc8XRRCyWHjVcjD Bj1kD90FbD3Z3jRAvASgKg4KU1xqEZidHyL/qHo9YTS0h9lqc2iWb0n3/4RU0E8k OuNPpWxkzt1uGQl3iJbQH4TOsIQtqoDFOaCaPMbol44fnm69Q52zRBBr6AHVhEcY iOpTa2PUFK3FLfhkfUCHcCRVXqXeewefcODTWs2Jwx6/sl7nsZpWMNlV8+rdUmXR BuvubM0bUt8= =mdD7 -----END PGP SIGNATURE-----
-- RHSA-announce mailing list RHSA-announce@redhat.com https://listman.redhat.com/mailman/listinfo/rhsa-announce
- These packages include redhat-release-virtualization-host. RHVH features a Cockpit user interface for monitoring the host's resources and performing administrative tasks. Solution:
Before applying this update, make sure all previously released errata relevant to your system have been applied. Description:
Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.
All OpenShift Container Platform 4.7 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift Console or the CLI oc command. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.7/updating/updating-cluster-between-minor.html#understanding-upgrade-channels_updating-cluster-between-minor
- Solution:
For OpenShift Container Platform 4.7 see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this asynchronous errata update:
https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-release-notes.html
Details on how to access this content are available at https://docs.openshift.com/container-platform/4.7/updating/updating-cluster-cli.html
- ========================================================================= Ubuntu Security Notice USN-4969-2 May 27, 2021
isc-dhcp vulnerability
A security issue affects these releases of Ubuntu and its derivatives:
- Ubuntu 16.04 ESM
- Ubuntu 14.04 ESM
Summary:
DHCP could be made to crash if it received specially crafted network traffic.
Software Description: - isc-dhcp: DHCP server and client
Details:
USN-4969-1 fixed a vulnerability in DHCP. This update provides the corresponding update for Ubuntu 14.04 ESM and 16.04 ESM.
Original advisory details:
Jon Franklin and Pawel Wieczorkiewicz discovered that DHCP incorrectly handled lease file parsing. A remote attacker could possibly use this issue to cause DHCP to crash, resulting in a denial of service.
Update instructions:
The problem can be corrected by updating your system to the following package versions:
Ubuntu 16.04 ESM: isc-dhcp-client 4.3.3-5ubuntu12.10+esm1 isc-dhcp-server 4.3.3-5ubuntu12.10+esm1
Ubuntu 14.04 ESM: isc-dhcp-client 4.2.4-7ubuntu12.13+esm1 isc-dhcp-server 4.2.4-7ubuntu12.13+esm1
In general, a standard system update will make all the necessary changes. 
Description:
Red Hat Advanced Cluster Management for Kubernetes 2.3.0 images
Red Hat Advanced Cluster Management for Kubernetes provides the capabilities to address common challenges that administrators and site reliability engineers face as they work across a range of public and private cloud environments. Clusters and applications are all visible and managed from a single console—with security policy built in. See the following Release Notes documentation, which will be updated shortly for this release, for additional details about this release:
https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/
Security:
- fastify-reply-from: crafted URL allows prefix escape of the proxied backend service (CVE-2021-21321)
- fastify-http-proxy: crafted URL allows prefix escape of the proxied backend service (CVE-2021-21322)
- nodejs-netmask: improper input validation of octal input data (CVE-2021-28918)
- redis: Integer overflow via STRALGO LCS command (CVE-2021-29477)
- redis: Integer overflow via COPY command for large intsets (CVE-2021-29478)
- nodejs-glob-parent: Regular expression denial of service (CVE-2020-28469)
- nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions (CVE-2020-28500)
- golang.org/x/text: Panic in language.ParseAcceptLanguage while parsing -u- extension (CVE-2020-28851)
- golang.org/x/text: Panic in language.ParseAcceptLanguage while processing bcp47 tag (CVE-2020-28852)
- nodejs-ansi_up: XSS due to insufficient URL sanitization (CVE-2021-3377)
- oras: zip-slip vulnerability via oras-pull (CVE-2021-21272)
- redis: integer overflow when configurable limit for maximum supported bulk input size is too big on 32-bit platforms (CVE-2021-21309)
- nodejs-lodash: command injection via template (CVE-2021-23337)
- nodejs-hosted-git-info: Regular Expression denial of service via shortcutMatch in fromUrl() (CVE-2021-23362)
- browserslist: parsing of invalid queries could result in Regular Expression Denial of Service (ReDoS) (CVE-2021-23364)
- nodejs-postcss: Regular expression denial of service during source map parsing (CVE-2021-23368)
- nodejs-handlebars: Remote code execution when compiling untrusted compile templates with strict:true option (CVE-2021-23369)
- nodejs-postcss: ReDoS via getAnnotationURL() and loadAnnotation() in lib/previous-map.js (CVE-2021-23382)
- nodejs-handlebars: Remote code execution when compiling untrusted compile templates with compat:true option (CVE-2021-23383)
- openssl: integer overflow in CipherUpdate (CVE-2021-23840)
- openssl: NULL pointer dereference in X509_issuer_and_serial_hash() (CVE-2021-23841)
- nodejs-ua-parser-js: ReDoS via malicious User-Agent header (CVE-2021-27292)
- grafana: snapshot feature allows an unauthenticated remote attacker to trigger a DoS via a remote API call (CVE-2021-27358)
- nodejs-is-svg: ReDoS via malicious string (CVE-2021-28092)
- nodejs-netmask: incorrectly parses an IP address that has octal integer with invalid character (CVE-2021-29418)
- ulikunitz/xz: Infinite loop in readUvarint allows for denial of service (CVE-2021-29482)
- normalize-url: ReDoS for data URLs (CVE-2021-33502)
- nodejs-trim-newlines: ReDoS in .end() method (CVE-2021-33623)
- nodejs-path-parse: ReDoS via splitDeviceRe, splitTailRe and splitPathRe (CVE-2021-23343)
- html-parse-stringify: Regular Expression DoS (CVE-2021-23346)
- openssl: incorrect SSLv2 rollback protection (CVE-2021-23839)
For more details about the security issues, including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE pages listed in the References section.
Bugs:
- RFE Make the source code for the endpoint-metrics-operator public (BZ# 1913444)
- cluster became offline after apiserver health check (BZ# 1942589)
Bugs fixed (https://bugzilla.redhat.com/):
1913333 - CVE-2020-28851 golang.org/x/text: Panic in language.ParseAcceptLanguage while parsing -u- extension
1913338 - CVE-2020-28852 golang.org/x/text: Panic in language.ParseAcceptLanguage while processing bcp47 tag
1913444 - RFE Make the source code for the endpoint-metrics-operator public
1921286 - CVE-2021-21272 oras: zip-slip vulnerability via oras-pull
1927520 - RHACM 2.3.0 images
1928937 - CVE-2021-23337 nodejs-lodash: command injection via template
1928954 - CVE-2020-28500 nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions
1930294 - CVE-2021-23839 openssl: incorrect SSLv2 rollback protection
1930310 - CVE-2021-23841 openssl: NULL pointer dereference in X509_issuer_and_serial_hash()
1930324 - CVE-2021-23840 openssl: integer overflow in CipherUpdate
1932634 - CVE-2021-21309 redis: integer overflow when configurable limit for maximum supported bulk input size is too big on 32-bit platforms
1936427 - CVE-2021-3377 nodejs-ansi_up: XSS due to insufficient URL sanitization
1939103 - CVE-2021-28092 nodejs-is-svg: ReDoS via malicious string
1940196 - View Resource YAML option shows 404 error when reviewing a Subscription for an application
1940613 - CVE-2021-27292 nodejs-ua-parser-js: ReDoS via malicious User-Agent header
1941024 - CVE-2021-27358 grafana: snapshot feature allows an unauthenticated remote attacker to trigger a DoS via a remote API call
1941675 - CVE-2021-23346 html-parse-stringify: Regular Expression DoS
1942178 - CVE-2021-21321 fastify-reply-from: crafted URL allows prefix escape of the proxied backend service
1942182 - CVE-2021-21322 fastify-http-proxy: crafted URL allows prefix escape of the proxied backend service
1942589 - cluster became offline after apiserver health check
1943208 - CVE-2021-23362 nodejs-hosted-git-info: Regular Expression denial of service via shortcutMatch in fromUrl()
1944822 - CVE-2021-29418 nodejs-netmask: incorrectly parses an IP address that has octal integer with invalid character
1944827 - CVE-2021-28918 nodejs-netmask: improper input validation of octal input data
1945459 - CVE-2020-28469 nodejs-glob-parent: Regular expression denial of service
1948761 - CVE-2021-23369 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with strict:true option
1948763 - CVE-2021-23368 nodejs-postcss: Regular expression denial of service during source map parsing
1954150 - CVE-2021-23382 nodejs-postcss: ReDoS via getAnnotationURL() and loadAnnotation() in lib/previous-map.js
1954368 - CVE-2021-29482 ulikunitz/xz: Infinite loop in readUvarint allows for denial of service
1955619 - CVE-2021-23364 browserslist: parsing of invalid queries could result in Regular Expression Denial of Service (ReDoS)
1956688 - CVE-2021-23383 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with compat:true option
1956818 - CVE-2021-23343 nodejs-path-parse: ReDoS via splitDeviceRe, splitTailRe and splitPathRe
1957410 - CVE-2021-29477 redis: Integer overflow via STRALGO LCS command
1957414 - CVE-2021-29478 redis: Integer overflow via COPY command for large intsets
1964461 - CVE-2021-33502 normalize-url: ReDoS for data URLs
1966615 - CVE-2021-33623 nodejs-trim-newlines: ReDoS in .end() method
1968122 - clusterdeployment fails because hiveadmission sc does not have correct permissions
1972703 - Subctl fails to join cluster, since it cannot auto-generate a valid cluster id
1983131 - Defragmenting an etcd member doesn't reduce the DB size (7.5GB) on a setup with ~1000 spoke clusters
- Gentoo Linux Security Advisory GLSA 202305-22
https://security.gentoo.org/
Severity: Normal Title: ISC DHCP: Multiple Vulnerabilities Date: May 03, 2023 Bugs: #875521, #792324 ID: 202305-22
Synopsis
Multiple vulnerabilities have been discovered in ISC DHCP, the worst of which could result in denial of service.
Affected packages
-------------------------------------------------------------------
Package / Vulnerable / Unaffected
-------------------------------------------------------------------
1 net-misc/dhcp < 4.4.3_p1 >= 4.4.3_p1
Description
Multiple vulnerabilities have been discovered in ISC DHCP. Please review the CVE identifiers referenced below for details.
Impact
Please review the referenced CVE identifiers for details.
Workaround
There is no known workaround at this time.
Resolution
All ISC DHCP users should upgrade to the latest version:
# emerge --sync # emerge --ask --oneshot --verbose ">=net-misc/dhcp-4.4.3_p1"
References
[ 1 ] CVE-2021-25217 https://nvd.nist.gov/vuln/detail/CVE-2021-25217 [ 2 ] CVE-2022-2928 https://nvd.nist.gov/vuln/detail/CVE-2022-2928 [ 3 ] CVE-2022-2929 https://nvd.nist.gov/vuln/detail/CVE-2022-2929
Availability
This GLSA and any updates to it are available for viewing at the Gentoo Security Website:
https://security.gentoo.org/glsa/202305-22
Concerns?
Security is a primary focus of Gentoo Linux and ensuring the confidentiality and security of our users' machines is of utmost importance to us. Any security concerns should be addressed to security@gentoo.org or alternatively, you may file a bug at https://bugs.gentoo.org.
License
Copyright 2023 Gentoo Foundation, Inc; referenced text belongs to its owner(s).
The contents of this document are licensed under the Creative Commons - Attribution / Share Alike license.
https://creativecommons.org/licenses/by-sa/2.5
Show details on source website{ "@context": { "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#", "affected_products": { "@id": "https://www.variotdbs.pl/ref/affected_products" }, "configurations": { "@id": "https://www.variotdbs.pl/ref/configurations" }, "credits": { "@id": "https://www.variotdbs.pl/ref/credits" }, "cvss": { "@id": "https://www.variotdbs.pl/ref/cvss/" }, "description": { "@id": "https://www.variotdbs.pl/ref/description/" }, "exploit_availability": { "@id": "https://www.variotdbs.pl/ref/exploit_availability/" }, "external_ids": { "@id": "https://www.variotdbs.pl/ref/external_ids/" }, "iot": { "@id": "https://www.variotdbs.pl/ref/iot/" }, "iot_taxonomy": { "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/" }, "patch": { "@id": "https://www.variotdbs.pl/ref/patch/" }, "problemtype_data": { "@id": "https://www.variotdbs.pl/ref/problemtype_data/" }, "references": { "@id": "https://www.variotdbs.pl/ref/references/" }, "sources": { "@id": "https://www.variotdbs.pl/ref/sources/" }, "sources_release_date": { "@id": "https://www.variotdbs.pl/ref/sources_release_date/" }, "sources_update_date": { "@id": "https://www.variotdbs.pl/ref/sources_update_date/" }, "threat_type": { "@id": "https://www.variotdbs.pl/ref/threat_type/" }, "title": { "@id": "https://www.variotdbs.pl/ref/title/" }, "type": { "@id": "https://www.variotdbs.pl/ref/type/" } }, "@id": "https://www.variotdbs.pl/vuln/VAR-202105-1325", "affected_products": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/affected_products#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "model": "dhcp", "scope": "eq", "trust": 1.0, "vendor": "isc", "version": "4.1-esv" }, { "model": "ruggedcom rox rx1500", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "2.15.0" }, { "model": "ruggedcom rox rx1511", "scope": "gte", "trust": 1.0, 
"vendor": "siemens", "version": "2.3.0" }, { "model": "ruggedcom rox rx1400", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "2.15.0" }, { "model": "ruggedcom rox rx1536", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "2.15.0" }, { "model": "ruggedcom rox rx5000", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "2.15.0" }, { "model": "linux", "scope": "eq", "trust": 1.0, "vendor": "debian", "version": "9.0" }, { "model": "fedora", "scope": "eq", "trust": 1.0, "vendor": "fedoraproject", "version": "34" }, { "model": "ruggedcom rox rx1512", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "2.15.0" }, { "model": "ruggedcom rox rx5000", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "2.3.0" }, { "model": "dhcp", "scope": "lte", "trust": 1.0, "vendor": "isc", "version": "4.4.2" }, { "model": "ruggedcom rox rx1524", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "2.15.0" }, { "model": "ruggedcom rox rx1501", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "2.3.0" }, { "model": "ruggedcom rox rx1501", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "2.15.0" }, { "model": "ruggedcom rox rx1510", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "2.15.0" }, { "model": "ruggedcom rox mx5000", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "2.3.0" }, { "model": "ruggedcom rox rx1512", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "2.3.0" }, { "model": "ruggedcom rox mx5000", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "2.15.0" }, { "model": "ontap select deploy administration utility", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "solidfire \\\u0026 hci management node", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "ruggedcom rox rx1510", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "2.3.0" }, { "model": "sinec ins", "scope": 
"eq", "trust": 1.0, "vendor": "siemens", "version": "1.0" }, { "model": "ruggedcom rox rx1500", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "2.3.0" }, { "model": "fedora", "scope": "eq", "trust": 1.0, "vendor": "fedoraproject", "version": "33" }, { "model": "ruggedcom rox rx1511", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "2.15.0" }, { "model": "dhcp", "scope": "gte", "trust": 1.0, "vendor": "isc", "version": "4.4.0" }, { "model": "sinec ins", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "1.0" } ], "sources": [ { "db": "NVD", "id": "CVE-2021-25217" } ] }, "configurations": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/configurations#", "children": { "@container": "@list" }, "cpe_match": { "@container": "@list" }, "data": { "@container": "@list" }, "nodes": { "@container": "@list" } }, "data": [ { "CVE_data_version": "4.0", "nodes": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:isc:dhcp:4.1-esv:r12:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:isc:dhcp:4.1-esv:r11_rc1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:isc:dhcp:4.1-esv:r11_b1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:isc:dhcp:4.1-esv:r10_b1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:isc:dhcp:4.1-esv:r10:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:isc:dhcp:4.1-esv:r12_b1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:isc:dhcp:4.1-esv:r11_rc2:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:isc:dhcp:4.1-esv:r1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:isc:dhcp:4.1-esv:r10_rc1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:isc:dhcp:4.1-esv:r11:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": 
"cpe:2.3:a:isc:dhcp:4.1-esv:r12_p1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:isc:dhcp:4.1-esv:r13:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:isc:dhcp:4.1-esv:r13_b1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:isc:dhcp:4.1-esv:r14:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:isc:dhcp:4.1-esv:r14_b1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:isc:dhcp:4.1-esv:r15:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:isc:dhcp:4.1-esv:r10b1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:isc:dhcp:4.1-esv:r10rc1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:isc:dhcp:4.1-esv:r11b1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:isc:dhcp:4.1-esv:r11rc1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:isc:dhcp:4.1-esv:r11rc2:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:isc:dhcp:4.1-esv:r12-p1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:isc:dhcp:4.1-esv:r12b1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:isc:dhcp:4.1-esv:r13b1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:isc:dhcp:4.1-esv:r14b1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:isc:dhcp:4.1-esv:r16:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:isc:dhcp:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "4.4.2", "versionStartIncluding": "4.4.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:isc:dhcp:4.1-esv:r15-p1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:isc:dhcp:4.1-esv:r15_b1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { 
"cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:33:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:34:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:debian:debian_linux:9.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:ruggedcom_rox_rx1400_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "2.15.0", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:ruggedcom_rox_rx1400:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:ruggedcom_rox_rx1500_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "2.15.0", "versionStartIncluding": "2.3.0", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:ruggedcom_rox_rx1500:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:ruggedcom_rox_rx1501_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "2.15.0", "versionStartIncluding": "2.3.0", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:ruggedcom_rox_rx1501:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:ruggedcom_rox_rx1510_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "2.15.0", "versionStartIncluding": "2.3.0", "vulnerable": true } ], "operator": "OR" }, { 
"children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:ruggedcom_rox_rx1510:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:ruggedcom_rox_rx1511_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "2.15.0", "versionStartIncluding": "2.3.0", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:ruggedcom_rox_rx1511:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:ruggedcom_rox_rx1512_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "2.15.0", "versionStartIncluding": "2.3.0", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:ruggedcom_rox_rx1512:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:ruggedcom_rox_rx1524_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "2.15.0", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:ruggedcom_rox_rx1524:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:ruggedcom_rox_rx1536_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "2.15.0", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:ruggedcom_rox_rx1536:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { 
"children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:ruggedcom_rox_rx5000_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "2.15.0", "versionStartIncluding": "2.3.0", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:ruggedcom_rox_rx5000:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:ruggedcom_rox_mx5000_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "2.15.0", "versionStartIncluding": "2.3.0", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:ruggedcom_rox_mx5000:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:netapp:ontap_select_deploy_administration_utility:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:solidfire_\\\u0026_hci_management_node:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:sp1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "1.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:-:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" } ] } ], "sources": [ { "db": "NVD", "id": "CVE-2021-25217" } ] }, "credits": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/credits#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Red Hat", "sources": [ { "db": "PACKETSTORM", "id": "163196" }, { "db": "PACKETSTORM", "id": "163151" }, { "db": 
"PACKETSTORM", "id": "163240" }, { "db": "PACKETSTORM", "id": "163400" }, { "db": "PACKETSTORM", "id": "163129" }, { "db": "PACKETSTORM", "id": "163137" }, { "db": "PACKETSTORM", "id": "163140" }, { "db": "PACKETSTORM", "id": "163052" }, { "db": "PACKETSTORM", "id": "163747" } ], "trust": 0.9 }, "cve": "CVE-2021-25217", "cvss": { "@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2" }, "cvssV3": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, "severity": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": "https://www.variotdbs.pl/ref/cvss/severity" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "cvssV2": [ { "acInsufInfo": false, "accessComplexity": "LOW", "accessVector": "ADJACENT_NETWORK", "authentication": "NONE", "author": "NVD", "availabilityImpact": "PARTIAL", "baseScore": 3.3, "confidentialityImpact": "NONE", "exploitabilityScore": 6.5, "impactScore": 2.9, "integrityImpact": "NONE", "obtainAllPrivilege": false, "obtainOtherPrivilege": false, "obtainUserPrivilege": false, "severity": "LOW", "trust": 1.0, "userInteractionRequired": false, "vectorString": "AV:A/AC:L/Au:N/C:N/I:N/A:P", "version": "2.0" }, { "acInsufInfo": null, "accessComplexity": "LOW", "accessVector": "ADJACENT_NETWORK", "authentication": "NONE", "author": "VULMON", "availabilityImpact": "PARTIAL", "baseScore": 3.3, "confidentialityImpact": "NONE", "exploitabilityScore": 6.5, "id": "CVE-2021-25217", "impactScore": 2.9, "integrityImpact": "NONE", "obtainAllPrivilege": null, "obtainOtherPrivilege": null, "obtainUserPrivilege": null, "severity": "LOW", "trust": 0.1, "userInteractionRequired": null, "vectorString": 
"AV:A/AC:L/Au:N/C:N/I:N/A:P", "version": "2.0" } ], "cvssV3": [ { "attackComplexity": "LOW", "attackVector": "ADJACENT_NETWORK", "author": "NVD", "availabilityImpact": "HIGH", "baseScore": 7.4, "baseSeverity": "HIGH", "confidentialityImpact": "NONE", "exploitabilityScore": 2.8, "impactScore": 4.0, "integrityImpact": "NONE", "privilegesRequired": "NONE", "scope": "CHANGED", "trust": 2.0, "userInteraction": "NONE", "vectorString": "CVSS:3.1/AV:A/AC:L/PR:N/UI:N/S:C/C:N/I:N/A:H", "version": "3.1" } ], "severity": [ { "author": "NVD", "id": "CVE-2021-25217", "trust": 1.0, "value": "HIGH" }, { "author": "security-officer@isc.org", "id": "CVE-2021-25217", "trust": 1.0, "value": "HIGH" }, { "author": "VULMON", "id": "CVE-2021-25217", "trust": 0.1, "value": "LOW" } ] } ], "sources": [ { "db": "VULMON", "id": "CVE-2021-25217" }, { "db": "NVD", "id": "CVE-2021-25217" }, { "db": "NVD", "id": "CVE-2021-25217" } ] }, "description": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "In ISC DHCP 4.1-ESV-R1 -\u003e 4.1-ESV-R16, ISC DHCP 4.4.0 -\u003e 4.4.2 (Other branches of ISC DHCP (i.e., releases in the 4.0.x series or lower and releases in the 4.3.x series) are beyond their End-of-Life (EOL) and no longer supported by ISC. From inspection it is clear that the defect is also present in releases from those series, but they have not been officially tested for the vulnerability), The outcome of encountering the defect while reading a lease that will trigger it varies, according to: the component being affected (i.e., dhclient or dhcpd) whether the package was built as a 32-bit or 64-bit binary whether the compiler flag -fstack-protection-strong was used when compiling In dhclient, ISC has not successfully reproduced the error on a 64-bit system. 
However, on a 32-bit system it is possible to cause dhclient to crash when reading an improper lease, which could cause network connectivity problems for an affected system due to the absence of a running DHCP client process. In dhcpd, when run in DHCPv4 or DHCPv6 mode: if the dhcpd server binary was built for a 32-bit architecture AND the -fstack-protection-strong flag was specified to the compiler, dhcpd may exit while parsing a lease file containing an objectionable lease, resulting in lack of service to clients. Additionally, the offending lease and the lease immediately following it in the lease database may be improperly deleted. if the dhcpd server binary was built for a 64-bit architecture OR if the -fstack-protection-strong compiler flag was NOT specified, the crash will not occur, but it is possible for the offending lease and the lease which immediately followed it to be improperly deleted. There is a discrepancy between the code that handles encapsulated option information in leases transmitted \"on the wire\" and the code which reads and parses lease information after it has been written to disk storage. The highest threat from this vulnerability is to data confidentiality and integrity as well as service availability. (CVE-2021-25217). -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n=====================================================================\n Red Hat Security Advisory\n\nSynopsis: Important: dhcp security update\nAdvisory ID: RHSA-2021:2469-01\nProduct: Red Hat Enterprise Linux\nAdvisory URL: https://access.redhat.com/errata/RHSA-2021:2469\nIssue date: 2021-06-17\nCVE Names: CVE-2021-25217 \n=====================================================================\n\n1. Summary:\n\nAn update for dhcp is now available for Red Hat Enterprise Linux 7.6\nAdvanced Update Support, Red Hat Enterprise Linux 7.6 Telco Extended Update\nSupport, and Red Hat Enterprise Linux 7.6 Update Services for SAP\nSolutions. 
\n\nRed Hat Product Security has rated this update as having a security impact\nof Important. A Common Vulnerability Scoring System (CVSS) base score,\nwhich gives a detailed severity rating, is available for each vulnerability\nfrom the CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRed Hat Enterprise Linux Server AUS (v. 7.6) - x86_64\nRed Hat Enterprise Linux Server E4S (v. 7.6) - ppc64le, x86_64\nRed Hat Enterprise Linux Server Optional AUS (v. 7.6) - x86_64\nRed Hat Enterprise Linux Server Optional E4S (v. 7.6) - ppc64le, x86_64\nRed Hat Enterprise Linux Server Optional TUS (v. 7.6) - x86_64\nRed Hat Enterprise Linux Server TUS (v. 7.6) - x86_64\n\n3. Description:\n\nThe Dynamic Host Configuration Protocol (DHCP) is a protocol that allows\nindividual devices on an IP network to get their own network configuration\ninformation, including an IP address, a subnet mask, and a broadcast\naddress. The dhcp packages provide a relay agent and ISC DHCP service\nrequired to enable and administer DHCP on a network. \n\nSecurity Fix(es):\n\n* dhcp: stack-based buffer overflow when parsing statements with\ncolon-separated hex digits in config or lease files in dhcpd and dhclient\n(CVE-2021-25217)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\n4. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n1963258 - CVE-2021-25217 dhcp: stack-based buffer overflow when parsing statements with colon-separated hex digits in config or lease files in dhcpd and dhclient\n\n6. Package List:\n\nRed Hat Enterprise Linux Server AUS (v. 
7.6):\n\nSource:\ndhcp-4.2.5-69.el7_6.1.src.rpm\n\nx86_64:\ndhclient-4.2.5-69.el7_6.1.x86_64.rpm\ndhcp-4.2.5-69.el7_6.1.x86_64.rpm\ndhcp-common-4.2.5-69.el7_6.1.x86_64.rpm\ndhcp-debuginfo-4.2.5-69.el7_6.1.i686.rpm\ndhcp-debuginfo-4.2.5-69.el7_6.1.x86_64.rpm\ndhcp-libs-4.2.5-69.el7_6.1.i686.rpm\ndhcp-libs-4.2.5-69.el7_6.1.x86_64.rpm\n\nRed Hat Enterprise Linux Server E4S (v. 7.6):\n\nSource:\ndhcp-4.2.5-69.el7_6.1.src.rpm\n\nppc64le:\ndhclient-4.2.5-69.el7_6.1.ppc64le.rpm\ndhcp-4.2.5-69.el7_6.1.ppc64le.rpm\ndhcp-common-4.2.5-69.el7_6.1.ppc64le.rpm\ndhcp-debuginfo-4.2.5-69.el7_6.1.ppc64le.rpm\ndhcp-libs-4.2.5-69.el7_6.1.ppc64le.rpm\n\nx86_64:\ndhclient-4.2.5-69.el7_6.1.x86_64.rpm\ndhcp-4.2.5-69.el7_6.1.x86_64.rpm\ndhcp-common-4.2.5-69.el7_6.1.x86_64.rpm\ndhcp-debuginfo-4.2.5-69.el7_6.1.i686.rpm\ndhcp-debuginfo-4.2.5-69.el7_6.1.x86_64.rpm\ndhcp-libs-4.2.5-69.el7_6.1.i686.rpm\ndhcp-libs-4.2.5-69.el7_6.1.x86_64.rpm\n\nRed Hat Enterprise Linux Server TUS (v. 7.6):\n\nSource:\ndhcp-4.2.5-69.el7_6.1.src.rpm\n\nx86_64:\ndhclient-4.2.5-69.el7_6.1.x86_64.rpm\ndhcp-4.2.5-69.el7_6.1.x86_64.rpm\ndhcp-common-4.2.5-69.el7_6.1.x86_64.rpm\ndhcp-debuginfo-4.2.5-69.el7_6.1.i686.rpm\ndhcp-debuginfo-4.2.5-69.el7_6.1.x86_64.rpm\ndhcp-libs-4.2.5-69.el7_6.1.i686.rpm\ndhcp-libs-4.2.5-69.el7_6.1.x86_64.rpm\n\nRed Hat Enterprise Linux Server Optional AUS (v. 7.6):\n\nx86_64:\ndhcp-debuginfo-4.2.5-69.el7_6.1.i686.rpm\ndhcp-debuginfo-4.2.5-69.el7_6.1.x86_64.rpm\ndhcp-devel-4.2.5-69.el7_6.1.i686.rpm\ndhcp-devel-4.2.5-69.el7_6.1.x86_64.rpm\n\nRed Hat Enterprise Linux Server Optional E4S (v. 7.6):\n\nppc64le:\ndhcp-debuginfo-4.2.5-69.el7_6.1.ppc64le.rpm\ndhcp-devel-4.2.5-69.el7_6.1.ppc64le.rpm\n\nx86_64:\ndhcp-debuginfo-4.2.5-69.el7_6.1.i686.rpm\ndhcp-debuginfo-4.2.5-69.el7_6.1.x86_64.rpm\ndhcp-devel-4.2.5-69.el7_6.1.i686.rpm\ndhcp-devel-4.2.5-69.el7_6.1.x86_64.rpm\n\nRed Hat Enterprise Linux Server Optional TUS (v. 
7.6):\n\nx86_64:\ndhcp-debuginfo-4.2.5-69.el7_6.1.i686.rpm\ndhcp-debuginfo-4.2.5-69.el7_6.1.x86_64.rpm\ndhcp-devel-4.2.5-69.el7_6.1.i686.rpm\ndhcp-devel-4.2.5-69.el7_6.1.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. References:\n\nhttps://access.redhat.com/security/cve/CVE-2021-25217\nhttps://access.redhat.com/security/updates/classification/#important\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2021 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYMs0KtzjgjWX9erEAQis7xAAhh3MBohMBq6bZd6sPasNG4rPX+Xh5AWf\nD+6WNTQLV1u1IU4ZzGKVMtBNSfCd8m727z/L0d4wBof06ngUXHkdR4AEzn5uuWSz\nlHzlgbpmvqxeBnXrHOG1WE43JNXHSsj0u8eARsLxEU4/rxnbLVOj5dMJkdWmXN61\nDocHHFVw6GmdZSCr6/tLjvG57fWtVLQF4SpEdhXz55iNZ1l6y09FDtoom/FuXIcG\nVnsUpsu/iWMFaUaVQH3sFVLksl39IrHFQxvskXR+FHAPzb8vVuKyNihJ5b3BUhfh\njTUKPxLO+X0/K9+cNFVSuSTPr7eHpRRHdUbFIHcUB0s1ACOnmvHr6G8FaVAi9BQZ\n6hzWcOFOZS7fF4TnXF3q0yDAKApRwlyF1PP21u1XdCb17Z4+E2LZF0nqnbb3hCxV\nJfnsadNc2Re/gc3u1bOGQb56ylc7LC74BeMDoJSeldqdPeT5JUc8XRRCyWHjVcjD\nBj1kD90FbD3Z3jRAvASgKg4KU1xqEZidHyL/qHo9YTS0h9lqc2iWb0n3/4RU0E8k\nOuNPpWxkzt1uGQl3iJbQH4TOsIQtqoDFOaCaPMbol44fnm69Q52zRBBr6AHVhEcY\niOpTa2PUFK3FLfhkfUCHcCRVXqXeewefcODTWs2Jwx6/sl7nsZpWMNlV8+rdUmXR\nBuvubM0bUt8=\n=mdD7\n-----END PGP SIGNATURE-----\n\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. 6 ELS) - i386, s390x, x86_64\n\n3. \nThese packages include redhat-release-virtualization-host. \nRHVH features a Cockpit user interface for monitoring the host\u0027s resources\nand\nperforming administrative tasks. Solution:\n\nBefore applying this update, make sure all previously released errata\nrelevant to your system have been applied. 
Description:\n\nRed Hat OpenShift Container Platform is Red Hat\u0027s cloud computing\nKubernetes application platform solution designed for on-premise or private\ncloud deployments. \n\nAll OpenShift Container Platform 4.7 users are advised to upgrade to these\nupdated packages and images when they are available in the appropriate\nrelease channel. To check for available updates, use the OpenShift Console\nor the CLI oc command. Instructions for upgrading a cluster are available\nat\nhttps://docs.openshift.com/container-platform/4.7/updating/updating-cluster\n- -between-minor.html#understanding-upgrade-channels_updating-cluster-between\n- -minor\n\n4. Solution:\n\nFor OpenShift Container Platform 4.7 see the following documentation, which\nwill be updated shortly for this release, for important instructions on how\nto upgrade your cluster and fully apply this asynchronous errata update:\n\nhttps://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-rel\nease-notes.html\n\nDetails on how to access this content are available at\nhttps://docs.openshift.com/container-platform/4.7/updating/updating-cluster\n- -cli.html\n\n5. =========================================================================\nUbuntu Security Notice USN-4969-2\nMay 27, 2021\n\nisc-dhcp vulnerability\n=========================================================================\nA security issue affects these releases of Ubuntu and its derivatives:\n\n- Ubuntu 16.04 ESM\n- Ubuntu 14.04 ESM\n\nSummary:\n\nDHCP could be made to crash if it received specially crafted network\ntraffic. \n\nSoftware Description:\n- isc-dhcp: DHCP server and client\n\nDetails:\n\nUSN-4969-1 fixed a vulnerability in DHCP. This update provides\nthe corresponding update for Ubuntu 14.04 ESM and 16.04 ESM. \n\n\nOriginal advisory details:\n\n Jon Franklin and Pawel Wieczorkiewicz discovered that DHCP incorrectly\n handled lease file parsing. 
A remote attacker could possibly use this issue\n to cause DHCP to crash, resulting in a denial of service. \n\nUpdate instructions:\n\nThe problem can be corrected by updating your system to the following\npackage versions:\n\nUbuntu 16.04 ESM:\n isc-dhcp-client 4.3.3-5ubuntu12.10+esm1\n isc-dhcp-server 4.3.3-5ubuntu12.10+esm1\n\nUbuntu 14.04 ESM:\n isc-dhcp-client 4.2.4-7ubuntu12.13+esm1\n isc-dhcp-server 4.2.4-7ubuntu12.13+esm1\n\nIn general, a standard system update will make all the necessary changes. 7.7) - ppc64, ppc64le, s390x, x86_64\n\n3. 8) - aarch64, noarch, ppc64le, s390x, x86_64\n\n3. Description:\n\nRed Hat Advanced Cluster Management for Kubernetes 2.3.0 images\n\nRed Hat Advanced Cluster Management for Kubernetes provides the\ncapabilities to address common challenges that administrators and site\nreliability engineers face as they work across a range of public and\nprivate cloud environments. Clusters and applications are all visible and\nmanaged from a single console\u2014with security policy built in. 
See\nthe following Release Notes documentation, which will be updated shortly\nfor this release, for additional details about this release:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_mana\ngement_for_kubernetes/2.3/html/release_notes/\n\nSecurity:\n\n* fastify-reply-from: crafted URL allows prefix scape of the proxied\nbackend service (CVE-2021-21321)\n\n* fastify-http-proxy: crafted URL allows prefix scape of the proxied\nbackend service (CVE-2021-21322)\n\n* nodejs-netmask: improper input validation of octal input data\n(CVE-2021-28918)\n\n* redis: Integer overflow via STRALGO LCS command (CVE-2021-29477)\n\n* redis: Integer overflow via COPY command for large intsets\n(CVE-2021-29478)\n\n* nodejs-glob-parent: Regular expression denial of service (CVE-2020-28469)\n\n* nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions\n(CVE-2020-28500)\n\n* golang.org/x/text: Panic in language.ParseAcceptLanguage while parsing\n- -u- extension (CVE-2020-28851)\n\n* golang.org/x/text: Panic in language.ParseAcceptLanguage while processing\nbcp47 tag (CVE-2020-28852)\n\n* nodejs-ansi_up: XSS due to insufficient URL sanitization (CVE-2021-3377)\n\n* oras: zip-slip vulnerability via oras-pull (CVE-2021-21272)\n\n* redis: integer overflow when configurable limit for maximum supported\nbulk input size is too big on 32-bit platforms (CVE-2021-21309)\n\n* nodejs-lodash: command injection via template (CVE-2021-23337)\n\n* nodejs-hosted-git-info: Regular Expression denial of service via\nshortcutMatch in fromUrl() (CVE-2021-23362)\n\n* browserslist: parsing of invalid queries could result in Regular\nExpression Denial of Service (ReDoS) (CVE-2021-23364)\n\n* nodejs-postcss: Regular expression denial of service during source map\nparsing (CVE-2021-23368)\n\n* nodejs-handlebars: Remote code execution when compiling untrusted compile\ntemplates with strict:true option (CVE-2021-23369)\n\n* nodejs-postcss: ReDoS via getAnnotationURL() and 
loadAnnotation() in\nlib/previous-map.js (CVE-2021-23382)\n\n* nodejs-handlebars: Remote code execution when compiling untrusted compile\ntemplates with compat:true option (CVE-2021-23383)\n\n* openssl: integer overflow in CipherUpdate (CVE-2021-23840)\n\n* openssl: NULL pointer dereference in X509_issuer_and_serial_hash()\n(CVE-2021-23841)\n\n* nodejs-ua-parser-js: ReDoS via malicious User-Agent header\n(CVE-2021-27292)\n\n* grafana: snapshot feature allow an unauthenticated remote attacker to\ntrigger a DoS via a remote API call (CVE-2021-27358)\n\n* nodejs-is-svg: ReDoS via malicious string (CVE-2021-28092)\n\n* nodejs-netmask: incorrectly parses an IP address that has octal integer\nwith invalid character (CVE-2021-29418)\n\n* ulikunitz/xz: Infinite loop in readUvarint allows for denial of service\n(CVE-2021-29482)\n\n* normalize-url: ReDoS for data URLs (CVE-2021-33502)\n\n* nodejs-trim-newlines: ReDoS in .end() method (CVE-2021-33623)\n\n* nodejs-path-parse: ReDoS via splitDeviceRe, splitTailRe and splitPathRe\n(CVE-2021-23343)\n\n* html-parse-stringify: Regular Expression DoS (CVE-2021-23346)\n\n* openssl: incorrect SSLv2 rollback protection (CVE-2021-23839)\n\nFor more details about the security issues, including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npages listed in the References section. \n\nBugs:\n\n* RFE Make the source code for the endpoint-metrics-operator public (BZ#\n1913444)\n\n* cluster became offline after apiserver health check (BZ# 1942589)\n\n3. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1913333 - CVE-2020-28851 golang.org/x/text: Panic in language.ParseAcceptLanguage while parsing -u- extension\n1913338 - CVE-2020-28852 golang.org/x/text: Panic in language.ParseAcceptLanguage while processing bcp47 tag\n1913444 - RFE Make the source code for the endpoint-metrics-operator public\n1921286 - CVE-2021-21272 oras: zip-slip vulnerability via oras-pull\n1927520 - RHACM 2.3.0 images\n1928937 - CVE-2021-23337 nodejs-lodash: command injection via template\n1928954 - CVE-2020-28500 nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions\n1930294 - CVE-2021-23839 openssl: incorrect SSLv2 rollback protection\n1930310 - CVE-2021-23841 openssl: NULL pointer dereference in X509_issuer_and_serial_hash()\n1930324 - CVE-2021-23840 openssl: integer overflow in CipherUpdate\n1932634 - CVE-2021-21309 redis: integer overflow when configurable limit for maximum supported bulk input size is too big on 32-bit platforms\n1936427 - CVE-2021-3377 nodejs-ansi_up: XSS due to insufficient URL sanitization\n1939103 - CVE-2021-28092 nodejs-is-svg: ReDoS via malicious string\n1940196 - View Resource YAML option shows 404 error when reviewing a Subscription for an application\n1940613 - CVE-2021-27292 nodejs-ua-parser-js: ReDoS via malicious User-Agent header\n1941024 - CVE-2021-27358 grafana: snapshot feature allow an unauthenticated remote attacker to trigger a DoS via a remote API call\n1941675 - CVE-2021-23346 html-parse-stringify: Regular Expression DoS\n1942178 - CVE-2021-21321 fastify-reply-from: crafted URL allows prefix scape of the proxied backend service\n1942182 - CVE-2021-21322 fastify-http-proxy: crafted URL allows prefix scape of the proxied backend service\n1942589 - cluster became offline after apiserver health check\n1943208 - CVE-2021-23362 nodejs-hosted-git-info: Regular Expression denial of service via shortcutMatch in fromUrl()\n1944822 - CVE-2021-29418 nodejs-netmask: incorrectly parses an IP address that 
has octal integer with invalid character\n1944827 - CVE-2021-28918 nodejs-netmask: improper input validation of octal input data\n1945459 - CVE-2020-28469 nodejs-glob-parent: Regular expression denial of service\n1948761 - CVE-2021-23369 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with strict:true option\n1948763 - CVE-2021-23368 nodejs-postcss: Regular expression denial of service during source map parsing\n1954150 - CVE-2021-23382 nodejs-postcss: ReDoS via getAnnotationURL() and loadAnnotation() in lib/previous-map.js\n1954368 - CVE-2021-29482 ulikunitz/xz: Infinite loop in readUvarint allows for denial of service\n1955619 - CVE-2021-23364 browserslist: parsing of invalid queries could result in Regular Expression Denial of Service (ReDoS)\n1956688 - CVE-2021-23383 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with compat:true option\n1956818 - CVE-2021-23343 nodejs-path-parse: ReDoS via splitDeviceRe, splitTailRe and splitPathRe\n1957410 - CVE-2021-29477 redis: Integer overflow via STRALGO LCS command\n1957414 - CVE-2021-29478 redis: Integer overflow via COPY command for large intsets\n1964461 - CVE-2021-33502 normalize-url: ReDoS for data URLs\n1966615 - CVE-2021-33623 nodejs-trim-newlines: ReDoS in .end() method\n1968122 - clusterdeployment fails because hiveadmission sc does not have correct permissions\n1972703 - Subctl fails to join cluster, since it cannot auto-generate a valid cluster id\n1983131 - Defragmenting an etcd member doesn\u0027t reduce the DB size (7.5GB) on a setup with ~1000 spoke clusters\n\n5. 
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\nGentoo Linux Security Advisory GLSA 202305-22\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n https://security.gentoo.org/\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\n Severity: Normal\n Title: ISC DHCP: Multiple Vulnerabilities\n Date: May 03, 2023\n Bugs: #875521, #792324\n ID: 202305-22\n\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\nSynopsis\n========\n\nMultiple vulnerabilities have been discovered in ISC DHCP, the worst of\nwhich could result in denial of service. \n\nAffected packages\n=================\n\n -------------------------------------------------------------------\n Package / Vulnerable / Unaffected\n -------------------------------------------------------------------\n 1 net-misc/dhcp \u003c 4.4.3_p1 \u003e= 4.4.3_p1\n\nDescription\n===========\n\nMultiple vulnerabilities have been discovered in ISC DHCP. Please review\nthe CVE identifiers referenced below for details. \n\nImpact\n======\n\nPlease review the referenced CVE identifiers for details. \n\nWorkaround\n==========\n\nThere is no known workaround at this time. \n\nResolution\n==========\n\nAll ISC DHCP users should upgrade to the latest version:\n\n # emerge --sync\n # emerge --ask --oneshot --verbose \"\u003e=net-misc/dhcp-4.4.3_p1\"\n\nReferences\n==========\n\n[ 1 ] CVE-2021-25217\n https://nvd.nist.gov/vuln/detail/CVE-2021-25217\n[ 2 ] CVE-2022-2928\n https://nvd.nist.gov/vuln/detail/CVE-2022-2928\n[ 3 ] CVE-2022-2929\n https://nvd.nist.gov/vuln/detail/CVE-2022-2929\n\nAvailability\n============\n\nThis GLSA and any updates to it are available for viewing at\nthe Gentoo Security Website:\n\n https://security.gentoo.org/glsa/202305-22\n\nConcerns?\n=========\n\nSecurity is a primary focus of Gentoo Linux and ensuring the\nconfidentiality and security of our users\u0027 machines is of utmost\nimportance to us. 
Any security concerns should be addressed to\nsecurity@gentoo.org or alternatively, you may file a bug at\nhttps://bugs.gentoo.org. \n\nLicense\n=======\n\nCopyright 2023 Gentoo Foundation, Inc; referenced text\nbelongs to its owner(s). \n\nThe contents of this document are licensed under the\nCreative Commons - Attribution / Share Alike license. \n\nhttps://creativecommons.org/licenses/by-sa/2.5\n", "sources": [ { "db": "NVD", "id": "CVE-2021-25217" }, { "db": "VULMON", "id": "CVE-2021-25217" }, { "db": "PACKETSTORM", "id": "163196" }, { "db": "PACKETSTORM", "id": "163151" }, { "db": "PACKETSTORM", "id": "163240" }, { "db": "PACKETSTORM", "id": "163400" }, { "db": "PACKETSTORM", "id": "162841" }, { "db": "PACKETSTORM", "id": "163129" }, { "db": "PACKETSTORM", "id": "163137" }, { "db": "PACKETSTORM", "id": "163140" }, { "db": "PACKETSTORM", "id": "163052" }, { "db": "PACKETSTORM", "id": "163747" }, { "db": "PACKETSTORM", "id": "172130" } ], "trust": 1.98 }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "NVD", "id": "CVE-2021-25217", "trust": 2.2 }, { "db": "SIEMENS", "id": "SSA-637483", "trust": 1.1 }, { "db": "SIEMENS", "id": "SSA-406691", "trust": 1.1 }, { "db": "OPENWALL", "id": "OSS-SECURITY/2021/05/26/6", "trust": 1.1 }, { "db": "ICS CERT", "id": "ICSA-22-258-05", "trust": 0.1 }, { "db": "VULMON", "id": "CVE-2021-25217", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "163196", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "163151", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "163240", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "163400", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "162841", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "163129", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "163137", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "163140", "trust": 
0.1 }, { "db": "PACKETSTORM", "id": "163052", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "163747", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "172130", "trust": 0.1 } ], "sources": [ { "db": "VULMON", "id": "CVE-2021-25217" }, { "db": "PACKETSTORM", "id": "163196" }, { "db": "PACKETSTORM", "id": "163151" }, { "db": "PACKETSTORM", "id": "163240" }, { "db": "PACKETSTORM", "id": "163400" }, { "db": "PACKETSTORM", "id": "162841" }, { "db": "PACKETSTORM", "id": "163129" }, { "db": "PACKETSTORM", "id": "163137" }, { "db": "PACKETSTORM", "id": "163140" }, { "db": "PACKETSTORM", "id": "163052" }, { "db": "PACKETSTORM", "id": "163747" }, { "db": "PACKETSTORM", "id": "172130" }, { "db": "NVD", "id": "CVE-2021-25217" } ] }, "id": "VAR-202105-1325", "iot": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": true, "sources": [ { "db": "VARIoT devices database", "id": null } ], "trust": 0.366531175 }, "last_update_date": "2024-07-23T20:55:14.082000Z", "patch": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/patch#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "title": "Debian CVElist Bug Report Logs: isc-dhcp: CVE-2021-25217: A buffer overrun in lease file parsing code can be used to exploit a common vulnerability shared by dhcpd and dhclient", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=debian_cvelist_bugreportlogs\u0026qid=b55bb445f71f0d88702845d3582e2b5c" }, { "title": "Amazon Linux AMI: ALAS-2021-1510", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux_ami\u0026qid=alas-2021-1510" }, { "title": "Amazon Linux 2: ALAS2-2021-1654", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=alas2-2021-1654" }, { "title": "Red Hat: CVE-2021-25217", "trust": 0.1, "url": 
"https://vulmon.com/vendoradvisory?qidtp=red_hat_cve_database\u0026qid=cve-2021-25217" }, { "title": "Arch Linux Issues: ", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_issues\u0026qid=cve-2021-25217 log" }, { "title": "Palo Alto Networks Security Advisory: PAN-SA-2024-0001 Informational Bulletin: Impact of OSS CVEs in PAN-OS", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=palo_alto_networks_security_advisory\u0026qid=34f98e4f4344c97599fe2d33618956a7" }, { "title": "Completion for lacework", "trust": 0.1, "url": "https://github.com/fbreton/lacework " } ], "sources": [ { "db": "VULMON", "id": "CVE-2021-25217" } ] }, "problemtype_data": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "problemtype": "CWE-119", "trust": 1.0 } ], "sources": [ { "db": "NVD", "id": "CVE-2021-25217" } ] }, "references": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/references#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 1.2, "url": "https://security.gentoo.org/glsa/202305-22" }, { "trust": 1.1, "url": "https://kb.isc.org/docs/cve-2021-25217" }, { "trust": 1.1, "url": "http://www.openwall.com/lists/oss-security/2021/05/26/6" }, { "trust": 1.1, "url": "https://lists.debian.org/debian-lts-announce/2021/06/msg00002.html" }, { "trust": 1.1, "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-406691.pdf" }, { "trust": 1.1, "url": "https://security.netapp.com/advisory/ntap-20220325-0011/" }, { "trust": 1.1, "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-637483.pdf" }, { "trust": 1.1, "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/z2lb42jwiv4m4wdnxx5vgip26feywkif/" }, { "trust": 1.1, "url": 
"https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/5qi4dyc7j4bghew3nh4xhmwthyc36uk4/" }, { "trust": 1.0, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-25217" }, { "trust": 0.9, "url": "https://access.redhat.com/security/cve/cve-2021-25217" }, { "trust": 0.9, "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce" }, { "trust": 0.9, "url": "https://access.redhat.com/security/updates/classification/#important" }, { "trust": 0.9, "url": "https://bugzilla.redhat.com/):" }, { "trust": 0.9, "url": "https://access.redhat.com/security/team/contact/" }, { "trust": 0.8, "url": "https://access.redhat.com/security/team/key/" }, { "trust": 0.6, "url": "https://access.redhat.com/articles/11258" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-27219" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3560" }, { "trust": 0.1, "url": "https://cwe.mitre.org/data/definitions/119.html" }, { "trust": 0.1, "url": "https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=989157" }, { "trust": 0.1, "url": "https://alas.aws.amazon.com/alas-2021-1510.html" }, { "trust": 0.1, "url": "https://nvd.nist.gov" }, { "trust": 0.1, "url": "https://www.cisa.gov/uscert/ics/advisories/icsa-22-258-05" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:2469" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:2419" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-24489" }, { "trust": 0.1, "url": "https://access.redhat.com/articles/2974891" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-24489" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-27219" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:2519" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3560" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:2554" }, { "trust": 0.1, "url": 
"https://access.redhat.com/errata/rhsa-2021:2555" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.7/updating/updating-cluster" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-rel" }, { "trust": 0.1, "url": "https://ubuntu.com/security/notices/usn-4969-1" }, { "trust": 0.1, "url": "https://ubuntu.com/security/notices/usn-4969-2" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:2405" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:2418" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:2415" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:2359" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-20454" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-28469" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-28500" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20934" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-8286" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-28196" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-20305" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-15358" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-29418" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15358" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28852" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-13050" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2017-14502" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33034" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-27618" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-28092" }, { 
"trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3520" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-15903" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-20843" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-13434" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3537" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28851" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-1730" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-8231" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33909" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-29482" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3518" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23337" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-32399" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-29362" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-27358" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19906" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23369" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13050" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3516" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21321" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23368" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13434" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2017-14502" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-8285" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2020-11668" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2016-10228" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-9169" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_mana" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23362" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23364" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23343" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-25013" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3449" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21309" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33502" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23841" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28196" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-29361" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23383" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-28918" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3517" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-28851" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-28852" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23840" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33033" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-1000858" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-14889" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-1730" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2021-3541" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-13627" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-1000858" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-20934" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28469" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:3016" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3377" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-20271" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-9169" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3326" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20454" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3450" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-25013" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-29362" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28500" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-2708" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21272" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-29477" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-27292" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23346" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-29478" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-8927" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-11668" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23839" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-19906" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2020-29363" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33623" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-20843" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21322" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-2708" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2016-10228" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23382" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-15903" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13627" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-14889" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-8284" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33910" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-29361" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-27618" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2929" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2928" }, { "trust": 0.1, "url": "https://bugs.gentoo.org." 
}, { "trust": 0.1, "url": "https://creativecommons.org/licenses/by-sa/2.5" }, { "trust": 0.1, "url": "https://security.gentoo.org/" } ], "sources": [ { "db": "VULMON", "id": "CVE-2021-25217" }, { "db": "PACKETSTORM", "id": "163196" }, { "db": "PACKETSTORM", "id": "163151" }, { "db": "PACKETSTORM", "id": "163240" }, { "db": "PACKETSTORM", "id": "163400" }, { "db": "PACKETSTORM", "id": "162841" }, { "db": "PACKETSTORM", "id": "163129" }, { "db": "PACKETSTORM", "id": "163137" }, { "db": "PACKETSTORM", "id": "163140" }, { "db": "PACKETSTORM", "id": "163052" }, { "db": "PACKETSTORM", "id": "163747" }, { "db": "PACKETSTORM", "id": "172130" }, { "db": "NVD", "id": "CVE-2021-25217" } ] }, "sources": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "VULMON", "id": "CVE-2021-25217" }, { "db": "PACKETSTORM", "id": "163196" }, { "db": "PACKETSTORM", "id": "163151" }, { "db": "PACKETSTORM", "id": "163240" }, { "db": "PACKETSTORM", "id": "163400" }, { "db": "PACKETSTORM", "id": "162841" }, { "db": "PACKETSTORM", "id": "163129" }, { "db": "PACKETSTORM", "id": "163137" }, { "db": "PACKETSTORM", "id": "163140" }, { "db": "PACKETSTORM", "id": "163052" }, { "db": "PACKETSTORM", "id": "163747" }, { "db": "PACKETSTORM", "id": "172130" }, { "db": "NVD", "id": "CVE-2021-25217" } ] }, "sources_release_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2021-05-26T00:00:00", "db": "VULMON", "id": "CVE-2021-25217" }, { "date": "2021-06-17T18:09:00", "db": "PACKETSTORM", "id": "163196" }, { "date": "2021-06-15T15:01:13", "db": "PACKETSTORM", "id": "163151" }, { "date": "2021-06-22T19:32:24", "db": "PACKETSTORM", "id": "163240" }, { "date": "2021-07-06T15:19:09", "db": "PACKETSTORM", "id": "163400" }, { "date": "2021-05-27T13:30:42", "db": "PACKETSTORM", "id": "162841" }, { "date": "2021-06-14T15:49:07", "db": "PACKETSTORM", 
"id": "163129" }, { "date": "2021-06-15T14:41:42", "db": "PACKETSTORM", "id": "163137" }, { "date": "2021-06-15T14:44:42", "db": "PACKETSTORM", "id": "163140" }, { "date": "2021-06-09T13:43:47", "db": "PACKETSTORM", "id": "163052" }, { "date": "2021-08-06T14:02:37", "db": "PACKETSTORM", "id": "163747" }, { "date": "2023-05-03T15:37:18", "db": "PACKETSTORM", "id": "172130" }, { "date": "2021-05-26T22:15:07.947000", "db": "NVD", "id": "CVE-2021-25217" } ] }, "sources_update_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2023-11-07T00:00:00", "db": "VULMON", "id": "CVE-2021-25217" }, { "date": "2023-11-07T03:31:24.893000", "db": "NVD", "id": "CVE-2021-25217" } ] }, "threat_type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/threat_type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "remote", "sources": [ { "db": "PACKETSTORM", "id": "162841" } ], "trust": 0.1 }, "title": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/title#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Red Hat Security Advisory 2021-2469-01", "sources": [ { "db": "PACKETSTORM", "id": "163196" } ], "trust": 0.1 }, "type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "overflow", "sources": [ { "db": "PACKETSTORM", "id": "163196" }, { "db": "PACKETSTORM", "id": "163151" }, { "db": "PACKETSTORM", "id": "163240" }, { "db": "PACKETSTORM", "id": "163400" }, { "db": "PACKETSTORM", "id": "163129" }, { "db": "PACKETSTORM", "id": "163137" }, { "db": "PACKETSTORM", "id": "163140" }, { "db": "PACKETSTORM", "id": "163052" } ], "trust": 0.8 } }
var-202101-0595
Vulnerability from variot
There's a flaw in bfd_pef_parse_function_stubs of bfd/pef.c in binutils in versions prior to 2.34 which could allow an attacker who is able to submit a crafted file to be processed by objdump to cause a NULL pointer dereference. The greatest threat of this flaw is to application availability. binutils has a NULL pointer dereference vulnerability; exploitation may put it into a denial of service (DoS) state. GNU Binutils (GNU Binary Utilities or binutils) is a set of programming language tool programs developed by the GNU community. The program is primarily designed to handle object files in various formats and provides linkers, assemblers, and other tools for object files and archives. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - Gentoo Linux Security Advisory GLSA 202107-24
https://security.gentoo.org/
Severity: Normal
Title: Binutils: Multiple vulnerabilities
Date: July 10, 2021
Bugs: #678806, #761957, #764170
ID: 202107-24
Synopsis
Multiple vulnerabilities have been found in Binutils, the worst of which could result in a Denial of Service condition.
Background
The GNU Binutils are a collection of tools to create, modify and analyse binary files. Many of the files use BFD, the Binary File Descriptor library, to do low-level manipulation.
Affected packages
-------------------------------------------------------------------
Package / Vulnerable / Unaffected
-------------------------------------------------------------------
1 sys-devel/binutils < 2.35.2 >= 2.35.2
Description
Multiple vulnerabilities have been discovered in Binutils. Please review the CVE identifiers referenced below for details.
Impact
Please review the referenced CVE identifiers for details.
Workaround
There is no known workaround at this time.
Resolution
All Binutils users should upgrade to the latest version:
# emerge --sync
# emerge --ask --oneshot --verbose ">=sys-devel/binutils-2.35.2"
References
[ 1 ] CVE-2019-9070
      https://nvd.nist.gov/vuln/detail/CVE-2019-9070
[ 2 ] CVE-2019-9071
      https://nvd.nist.gov/vuln/detail/CVE-2019-9071
[ 3 ] CVE-2019-9072
      https://nvd.nist.gov/vuln/detail/CVE-2019-9072
[ 4 ] CVE-2019-9073
      https://nvd.nist.gov/vuln/detail/CVE-2019-9073
[ 5 ] CVE-2019-9074
      https://nvd.nist.gov/vuln/detail/CVE-2019-9074
[ 6 ] CVE-2019-9075
      https://nvd.nist.gov/vuln/detail/CVE-2019-9075
[ 7 ] CVE-2019-9076
      https://nvd.nist.gov/vuln/detail/CVE-2019-9076
[ 8 ] CVE-2019-9077
      https://nvd.nist.gov/vuln/detail/CVE-2019-9077
[ 9 ] CVE-2020-19599
      https://nvd.nist.gov/vuln/detail/CVE-2020-19599
[ 10 ] CVE-2020-35448
      https://nvd.nist.gov/vuln/detail/CVE-2020-35448
[ 11 ] CVE-2020-35493
      https://nvd.nist.gov/vuln/detail/CVE-2020-35493
[ 12 ] CVE-2020-35494
      https://nvd.nist.gov/vuln/detail/CVE-2020-35494
[ 13 ] CVE-2020-35495
      https://nvd.nist.gov/vuln/detail/CVE-2020-35495
[ 14 ] CVE-2020-35496
      https://nvd.nist.gov/vuln/detail/CVE-2020-35496
[ 15 ] CVE-2020-35507
      https://nvd.nist.gov/vuln/detail/CVE-2020-35507
Availability
This GLSA and any updates to it are available for viewing at the Gentoo Security Website:
https://security.gentoo.org/glsa/202107-24
Concerns?
Security is a primary focus of Gentoo Linux and ensuring the confidentiality and security of our users' machines is of utmost importance to us. Any security concerns should be addressed to security@gentoo.org or alternatively, you may file a bug at https://bugs.gentoo.org.
License
Copyright 2021 Gentoo Foundation, Inc; referenced text belongs to its owner(s).
The contents of this document are licensed under the Creative Commons - Attribution / Share Alike license.
https://creativecommons.org/licenses/by-sa/2.5
Show details on source website{ "@context": { "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#", "affected_products": { "@id": "https://www.variotdbs.pl/ref/affected_products" }, "configurations": { "@id": "https://www.variotdbs.pl/ref/configurations" }, "credits": { "@id": "https://www.variotdbs.pl/ref/credits" }, "cvss": { "@id": "https://www.variotdbs.pl/ref/cvss/" }, "description": { "@id": "https://www.variotdbs.pl/ref/description/" }, "exploit_availability": { "@id": "https://www.variotdbs.pl/ref/exploit_availability/" }, "external_ids": { "@id": "https://www.variotdbs.pl/ref/external_ids/" }, "iot": { "@id": "https://www.variotdbs.pl/ref/iot/" }, "iot_taxonomy": { "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/" }, "patch": { "@id": "https://www.variotdbs.pl/ref/patch/" }, "problemtype_data": { "@id": "https://www.variotdbs.pl/ref/problemtype_data/" }, "references": { "@id": "https://www.variotdbs.pl/ref/references/" }, "sources": { "@id": "https://www.variotdbs.pl/ref/sources/" }, "sources_release_date": { "@id": "https://www.variotdbs.pl/ref/sources_release_date/" }, "sources_update_date": { "@id": "https://www.variotdbs.pl/ref/sources_update_date/" }, "threat_type": { "@id": "https://www.variotdbs.pl/ref/threat_type/" }, "title": { "@id": "https://www.variotdbs.pl/ref/title/" }, "type": { "@id": "https://www.variotdbs.pl/ref/type/" } }, "@id": "https://www.variotdbs.pl/vuln/VAR-202101-0595", "affected_products": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/affected_products#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "model": "enterprise linux", "scope": "eq", "trust": 1.0, "vendor": "redhat", "version": "8.0" }, { "model": "brocade fabric operating system", "scope": "eq", "trust": 1.0, "vendor": "broadcom", "version": null }, { "model": "ontap select deploy administration 
utility", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "solidfire \\\u0026 hci management node", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "hci compute node", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "solidfire\\, enterprise sds \\\u0026 hci storage node", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "binutils", "scope": "lt", "trust": 1.0, "vendor": "gnu", "version": "2.34" }, { "model": "cloud backup", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "ontap select deploy utility", "scope": null, "trust": 0.8, "vendor": "netapp", "version": null }, { "model": "binutils", "scope": null, "trust": 0.8, "vendor": "gnu", "version": null }, { "model": "solidfire \u0026 hci management node", "scope": null, "trust": 0.8, "vendor": "netapp", "version": null }, { "model": "red hat enterprise linux", "scope": null, "trust": 0.8, "vendor": "\u30ec\u30c3\u30c9\u30cf\u30c3\u30c8", "version": null }, { "model": "hci bootstrap os", "scope": null, "trust": 0.8, "vendor": "netapp", "version": null } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2020-015102" }, { "db": "NVD", "id": "CVE-2020-35507" } ] }, "configurations": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/configurations#", "children": { "@container": "@list" }, "cpe_match": { "@container": "@list" }, "data": { "@container": "@list" }, "nodes": { "@container": "@list" } }, "data": [ { "CVE_data_version": "4.0", "nodes": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:gnu:binutils:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "2.34", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:redhat:enterprise_linux:8.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": 
"cpe:2.3:o:netapp:hci_compute_node_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:hci_compute_node:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:netapp:cloud_backup:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:ontap_select_deploy_administration_utility:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:solidfire_\\\u0026_hci_management_node:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:solidfire\\,_enterprise_sds_\\\u0026_hci_storage_node:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:broadcom:brocade_fabric_operating_system:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" } ] } ], "sources": [ { "db": "NVD", "id": "CVE-2020-35507" } ] }, "credits": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/credits#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Gentoo", "sources": [ { "db": "PACKETSTORM", "id": "163455" } ], "trust": 0.1 }, "cve": "CVE-2020-35507", "cvss": { "@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2" }, "cvssV3": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, "severity": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": "https://www.variotdbs.pl/ref/cvss/severity" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": 
"https://www.variotdbs.pl/ref/sources" } }, "data": [ { "cvssV2": [ { "acInsufInfo": false, "accessComplexity": "MEDIUM", "accessVector": "NETWORK", "authentication": "NONE", "author": "NVD", "availabilityImpact": "PARTIAL", "baseScore": 4.3, "confidentialityImpact": "NONE", "exploitabilityScore": 8.6, "impactScore": 2.9, "integrityImpact": "NONE", "obtainAllPrivilege": false, "obtainOtherPrivilege": false, "obtainUserPrivilege": false, "severity": "MEDIUM", "trust": 1.0, "userInteractionRequired": true, "vectorString": "AV:N/AC:M/Au:N/C:N/I:N/A:P", "version": "2.0" }, { "acInsufInfo": null, "accessComplexity": "Medium", "accessVector": "Network", "authentication": "None", "author": "NVD", "availabilityImpact": "Partial", "baseScore": 4.3, "confidentialityImpact": "None", "exploitabilityScore": null, "id": "CVE-2020-35507", "impactScore": null, "integrityImpact": "None", "obtainAllPrivilege": null, "obtainOtherPrivilege": null, "obtainUserPrivilege": null, "severity": "Medium", "trust": 0.9, "userInteractionRequired": null, "vectorString": "AV:N/AC:M/Au:N/C:N/I:N/A:P", "version": "2.0" }, { "accessComplexity": "MEDIUM", "accessVector": "NETWORK", "authentication": "NONE", "author": "VULHUB", "availabilityImpact": "PARTIAL", "baseScore": 4.3, "confidentialityImpact": "NONE", "exploitabilityScore": 8.6, "id": "VHN-377703", "impactScore": 2.9, "integrityImpact": "NONE", "severity": "MEDIUM", "trust": 0.1, "vectorString": "AV:N/AC:M/AU:N/C:N/I:N/A:P", "version": "2.0" } ], "cvssV3": [ { "attackComplexity": "LOW", "attackVector": "LOCAL", "author": "NVD", "availabilityImpact": "HIGH", "baseScore": 5.5, "baseSeverity": "MEDIUM", "confidentialityImpact": "NONE", "exploitabilityScore": 1.8, "impactScore": 3.6, "integrityImpact": "NONE", "privilegesRequired": "NONE", "scope": "UNCHANGED", "trust": 1.0, "userInteraction": "REQUIRED", "vectorString": "CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:U/C:N/I:N/A:H", "version": "3.1" }, { "attackComplexity": "Low", "attackVector": "Local", 
"author": "NVD", "availabilityImpact": "High", "baseScore": 5.5, "baseSeverity": "Medium", "confidentialityImpact": "None", "exploitabilityScore": null, "id": "CVE-2020-35507", "impactScore": null, "integrityImpact": "None", "privilegesRequired": "None", "scope": "Unchanged", "trust": 0.8, "userInteraction": "Required", "vectorString": "CVSS:3.0/AV:L/AC:L/PR:N/UI:R/S:U/C:N/I:N/A:H", "version": "3.0" } ], "severity": [ { "author": "NVD", "id": "CVE-2020-35507", "trust": 1.8, "value": "MEDIUM" }, { "author": "CNNVD", "id": "CNNVD-202101-049", "trust": 0.6, "value": "MEDIUM" }, { "author": "VULHUB", "id": "VHN-377703", "trust": 0.1, "value": "MEDIUM" }, { "author": "VULMON", "id": "CVE-2020-35507", "trust": 0.1, "value": "MEDIUM" } ] } ], "sources": [ { "db": "VULHUB", "id": "VHN-377703" }, { "db": "VULMON", "id": "CVE-2020-35507" }, { "db": "JVNDB", "id": "JVNDB-2020-015102" }, { "db": "NVD", "id": "CVE-2020-35507" }, { "db": "CNNVD", "id": "CNNVD-202101-049" } ] }, "description": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "There\u0027s a flaw in bfd_pef_parse_function_stubs of bfd/pef.c in binutils in versions prior to 2.34 which could allow an attacker who is able to submit a crafted file to be processed by objdump to cause a NULL pointer dereference. The greatest threat of this flaw is to application availability. binutils Has NULL A pointer dereference vulnerability exists.Denial of service (DoS) It may be put into a state. GNU Binutils (GNU Binary Utilities or binutils) is a set of programming language tool programs developed by the GNU community. The program is primarily designed to handle object files in various formats and provides linkers, assemblers, and other tools for object files and archives. 
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\nGentoo Linux Security Advisory GLSA 202107-24\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n https://security.gentoo.org/\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\n Severity: Normal\n Title: Binutils: Multiple vulnerabilities\n Date: July 10, 2021\n Bugs: #678806, #761957, #764170\n ID: 202107-24\n\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\nSynopsis\n========\n\nMultiple vulnerabilities have been found in Binutils, the worst of\nwhich could result in a Denial of Service condition. \n\nBackground\n==========\n\nThe GNU Binutils are a collection of tools to create, modify and\nanalyse binary files. Many of the files use BFD, the Binary File\nDescriptor library, to do low-level manipulation. \n\nAffected packages\n=================\n\n -------------------------------------------------------------------\n Package / Vulnerable / Unaffected\n -------------------------------------------------------------------\n 1 sys-devel/binutils \u003c 2.35.2 \u003e= 2.35.2 \n\nDescription\n===========\n\nMultiple vulnerabilities have been discovered in Binutils. Please\nreview the CVE identifiers referenced below for details. \n\nImpact\n======\n\nPlease review the referenced CVE identifiers for details. \n\nWorkaround\n==========\n\nThere is no known workaround at this time. 
\n\nResolution\n==========\n\nAll Binutils users should upgrade to the latest version:\n\n # emerge --sync\n # emerge --ask --oneshot --verbose \"\u003e=sys-devel/binutils-2.35.2\"\n\nReferences\n==========\n\n[ 1 ] CVE-2019-9070\n https://nvd.nist.gov/vuln/detail/CVE-2019-9070\n[ 2 ] CVE-2019-9071\n https://nvd.nist.gov/vuln/detail/CVE-2019-9071\n[ 3 ] CVE-2019-9072\n https://nvd.nist.gov/vuln/detail/CVE-2019-9072\n[ 4 ] CVE-2019-9073\n https://nvd.nist.gov/vuln/detail/CVE-2019-9073\n[ 5 ] CVE-2019-9074\n https://nvd.nist.gov/vuln/detail/CVE-2019-9074\n[ 6 ] CVE-2019-9075\n https://nvd.nist.gov/vuln/detail/CVE-2019-9075\n[ 7 ] CVE-2019-9076\n https://nvd.nist.gov/vuln/detail/CVE-2019-9076\n[ 8 ] CVE-2019-9077\n https://nvd.nist.gov/vuln/detail/CVE-2019-9077\n[ 9 ] CVE-2020-19599\n https://nvd.nist.gov/vuln/detail/CVE-2020-19599\n[ 10 ] CVE-2020-35448\n https://nvd.nist.gov/vuln/detail/CVE-2020-35448\n[ 11 ] CVE-2020-35493\n https://nvd.nist.gov/vuln/detail/CVE-2020-35493\n[ 12 ] CVE-2020-35494\n https://nvd.nist.gov/vuln/detail/CVE-2020-35494\n[ 13 ] CVE-2020-35495\n https://nvd.nist.gov/vuln/detail/CVE-2020-35495\n[ 14 ] CVE-2020-35496\n https://nvd.nist.gov/vuln/detail/CVE-2020-35496\n[ 15 ] CVE-2020-35507\n https://nvd.nist.gov/vuln/detail/CVE-2020-35507\n\nAvailability\n============\n\nThis GLSA and any updates to it are available for viewing at\nthe Gentoo Security Website:\n\n https://security.gentoo.org/glsa/202107-24\n\nConcerns?\n=========\n\nSecurity is a primary focus of Gentoo Linux and ensuring the\nconfidentiality and security of our users\u0027 machines is of utmost\nimportance to us. Any security concerns should be addressed to\nsecurity@gentoo.org or alternatively, you may file a bug at\nhttps://bugs.gentoo.org. \n\nLicense\n=======\n\nCopyright 2021 Gentoo Foundation, Inc; referenced text\nbelongs to its owner(s). \n\nThe contents of this document are licensed under the\nCreative Commons - Attribution / Share Alike license. 
\n\nhttps://creativecommons.org/licenses/by-sa/2.5\n\n", "sources": [ { "db": "NVD", "id": "CVE-2020-35507" }, { "db": "JVNDB", "id": "JVNDB-2020-015102" }, { "db": "VULHUB", "id": "VHN-377703" }, { "db": "VULMON", "id": "CVE-2020-35507" }, { "db": "PACKETSTORM", "id": "163455" } ], "trust": 1.89 }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "NVD", "id": "CVE-2020-35507", "trust": 2.7 }, { "db": "PACKETSTORM", "id": "163455", "trust": 0.8 }, { "db": "JVNDB", "id": "JVNDB-2020-015102", "trust": 0.8 }, { "db": "AUSCERT", "id": "ESB-2021.3660", "trust": 0.6 }, { "db": "CNNVD", "id": "CNNVD-202101-049", "trust": 0.6 }, { "db": "VULHUB", "id": "VHN-377703", "trust": 0.1 }, { "db": "VULMON", "id": "CVE-2020-35507", "trust": 0.1 } ], "sources": [ { "db": "VULHUB", "id": "VHN-377703" }, { "db": "VULMON", "id": "CVE-2020-35507" }, { "db": "JVNDB", "id": "JVNDB-2020-015102" }, { "db": "PACKETSTORM", "id": "163455" }, { "db": "NVD", "id": "CVE-2020-35507" }, { "db": "CNNVD", "id": "CNNVD-202101-049" } ] }, "id": "VAR-202101-0595", "iot": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": true, "sources": [ { "db": "VULHUB", "id": "VHN-377703" } ], "trust": 0.01 }, "last_update_date": "2023-12-18T11:38:08.735000Z", "patch": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/patch#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "title": "NTAP-20210212-0007 Red hat Red\u00a0Hat\u00a0Bugzilla", "trust": 0.8, "url": "https://sourceware.org/git/gitweb.cgi?p=binutils-gdb.git;h=7a0fb7be96e0ce79e1ae429bc1ba913e5244d537" }, { "title": "GNU binutils 
Security vulnerabilities", "trust": 0.6, "url": "http://www.cnnvd.org.cn/web/xxk/bdxqbyid.tag?id=138313" }, { "title": "", "trust": 0.1, "url": "https://github.com/vincent-deng/veracode-container-security-finding-parser " } ], "sources": [ { "db": "VULMON", "id": "CVE-2020-35507" }, { "db": "JVNDB", "id": "JVNDB-2020-015102" }, { "db": "CNNVD", "id": "CNNVD-202101-049" } ] }, "problemtype_data": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "problemtype": "CWE-476", "trust": 1.1 }, { "problemtype": "NULL Pointer dereference (CWE-476) [ Other ]", "trust": 0.8 } ], "sources": [ { "db": "VULHUB", "id": "VHN-377703" }, { "db": "JVNDB", "id": "JVNDB-2020-015102" }, { "db": "NVD", "id": "CVE-2020-35507" } ] }, "references": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/references#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 1.9, "url": "https://security.gentoo.org/glsa/202107-24" }, { "trust": 1.8, "url": "https://security.netapp.com/advisory/ntap-20210212-0007/" }, { "trust": 1.8, "url": "https://bugzilla.redhat.com/show_bug.cgi?id=1911691" }, { "trust": 1.5, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35507" }, { "trust": 0.6, "url": "https://www.ibm.com/blogs/psirt/security-bulletin-multiple-vulnerabilities-in-gnu-binutils-affect-ibm-netezza-analytics/" }, { "trust": 0.6, "url": "https://vigilance.fr/vulnerability/gnu-binutils-null-pointer-dereference-via-bfd-pef-parse-function-stubs-36788" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.3660" }, { "trust": 0.6, "url": "https://www.ibm.com/blogs/psirt/security-bulletin-multiple-vulnerabilities-in-gnu-binutils-affect-ibm-netezza-analytics-for-nps/" }, { "trust": 0.6, "url": 
"https://packetstormsecurity.com/files/163455/gentoo-linux-security-advisory-202107-24.html" }, { "trust": 0.6, "url": "https://www.ibm.com/blogs/psirt/security-bulletin-multiple-vulnerabilities-in-gnu-binutils-affect-ibm-netezza-performance-server/" }, { "trust": 0.1, "url": "https://cwe.mitre.org/data/definitions/476.html" }, { "trust": 0.1, "url": "https://github.com/live-hack-cve/cve-2020-35507" }, { "trust": 0.1, "url": "https://nvd.nist.gov" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35495" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-19599" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-9071" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-9077" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35493" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-9073" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-9072" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35448" }, { "trust": 0.1, "url": "https://security.gentoo.org/" }, { "trust": 0.1, "url": "https://creativecommons.org/licenses/by-sa/2.5" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-9074" }, { "trust": 0.1, "url": "https://bugs.gentoo.org." 
}, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-9070" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35496" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-9076" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-9075" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35494" } ], "sources": [ { "db": "VULHUB", "id": "VHN-377703" }, { "db": "VULMON", "id": "CVE-2020-35507" }, { "db": "JVNDB", "id": "JVNDB-2020-015102" }, { "db": "PACKETSTORM", "id": "163455" }, { "db": "NVD", "id": "CVE-2020-35507" }, { "db": "CNNVD", "id": "CNNVD-202101-049" } ] }, "sources": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "VULHUB", "id": "VHN-377703" }, { "db": "VULMON", "id": "CVE-2020-35507" }, { "db": "JVNDB", "id": "JVNDB-2020-015102" }, { "db": "PACKETSTORM", "id": "163455" }, { "db": "NVD", "id": "CVE-2020-35507" }, { "db": "CNNVD", "id": "CNNVD-202101-049" } ] }, "sources_release_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2021-01-04T00:00:00", "db": "VULHUB", "id": "VHN-377703" }, { "date": "2021-01-04T00:00:00", "db": "VULMON", "id": "CVE-2020-35507" }, { "date": "2021-09-10T00:00:00", "db": "JVNDB", "id": "JVNDB-2020-015102" }, { "date": "2021-07-11T12:01:11", "db": "PACKETSTORM", "id": "163455" }, { "date": "2021-01-04T15:15:15.200000", "db": "NVD", "id": "CVE-2020-35507" }, { "date": "2021-01-04T00:00:00", "db": "CNNVD", "id": "CNNVD-202101-049" } ] }, "sources_update_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2023-01-24T00:00:00", "db": "VULHUB", "id": "VHN-377703" }, { "date": "2022-09-02T00:00:00", "db": "VULMON", "id": "CVE-2020-35507" }, { "date": "2021-09-10T06:56:00", "db": "JVNDB", "id": 
"JVNDB-2020-015102" }, { "date": "2023-01-24T16:10:32.143000", "db": "NVD", "id": "CVE-2020-35507" }, { "date": "2022-09-05T00:00:00", "db": "CNNVD", "id": "CNNVD-202101-049" } ] }, "threat_type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/threat_type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "local", "sources": [ { "db": "CNNVD", "id": "CNNVD-202101-049" } ], "trust": 0.6 }, "title": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/title#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "binutils\u00a0 In \u00a0NULL\u00a0 Pointer dereference vulnerability", "sources": [ { "db": "JVNDB", "id": "JVNDB-2020-015102" } ], "trust": 0.8 }, "type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "code problem", "sources": [ { "db": "CNNVD", "id": "CNNVD-202101-049" } ], "trust": 0.6 } }
var-201912-1044
Vulnerability from variot
xmlParseBalancedChunkMemoryRecover in parser.c in libxml2 before 2.9.10 has a memory leak related to newDoc->oldNs.

Summary:

Openshift Serverless 1.10.2 is now available.

Solution:

See the documentation at: https://access.redhat.com/documentation/en-us/openshift_container_platform/4.5/html/serverless_applications/index
- Bugs fixed (https://bugzilla.redhat.com/):
1918750 - CVE-2021-3114 golang: crypto/elliptic: incorrect operations on the P-224 curve
1918761 - CVE-2021-3115 golang: cmd/go: packages using cgo can cause arbitrary code execution at build time
- Solution:
Download the release images via:
quay.io/redhat/quay:v3.3.3
quay.io/redhat/clair-jwt:v3.3.3
quay.io/redhat/quay-builder:v3.3.3
quay.io/redhat/clair:v3.3.3
- Bugs fixed (https://bugzilla.redhat.com/):
1905758 - CVE-2020-27831 quay: email notifications authorization bypass
1905784 - CVE-2020-27832 quay: persistent XSS in repository notification display
- JIRA issues fixed (https://issues.jboss.org/):
PROJQUAY-1124 - NVD feed is broken for latest Clair v2 version
Bug Fix(es):
- Gather image registry config (backport to 4.3) (BZ#1836815)

- Builds fail after running postCommit script if OCP cluster is configured with a container registry whitelist (BZ#1849176)

- Login with OpenShift not working after cluster upgrade (BZ#1852429)

- Limit the size of gathered federated metrics from alerts in Insights Operator (BZ#1874018)

- [4.3] Storage operator stops reconciling when going Upgradeable=False on v1alpha1 CRDs (BZ#1879110)

- [release 4.3] OpenShift APIs become unavailable for more than 15 minutes after one of master nodes went down (OAuth) (BZ#1880293)
You may download the oc tool and use it to inspect release image metadata as follows:
(For x86_64 architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.3.40-x86_64
The image digest is sha256:9ff90174a170379e90a9ead6e0d8cf6f439004191f80762764a5ca3dbaab01dc
(For s390x architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.3.40-s390x

The image digest is sha256:605ddde0442e604cfe2d6bd1541ce48df5956fe626edf9cc95b1fca75d231b64
(For ppc64le architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.3.40-ppc64le
The image digest is sha256:d3c9e391c145338eae3feb7f6a4e487dadc8139a353117d642fe686d277bcccc
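The per-architecture digest comparison implied above can be scripted. This is an illustrative sketch, not part of the advisory: the helper name is hypothetical, and the digests are copied verbatim from the advisory text above.

```python
# Illustrative helper (not part of the advisory): compare a digest reported
# by `oc adm release info` against the per-architecture digests pinned above.
ADVISORY_DIGESTS = {
    "x86_64": "sha256:9ff90174a170379e90a9ead6e0d8cf6f439004191f80762764a5ca3dbaab01dc",
    "s390x": "sha256:605ddde0442e604cfe2d6bd1541ce48df5956fe626edf9cc95b1fca75d231b64",
    "ppc64le": "sha256:d3c9e391c145338eae3feb7f6a4e487dadc8139a353117d642fe686d277bcccc",
}

def digest_matches(arch: str, reported: str) -> bool:
    """Return True only if the reported digest equals the advisory's pinned digest."""
    return ADVISORY_DIGESTS.get(arch) == reported.strip()
```

A mismatch (or an unknown architecture) returns False, signaling that the release image should not be trusted as the one this advisory describes.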
- Bugs fixed (https://bugzilla.redhat.com/):
1804533 - CVE-2020-9283 golang.org/x/crypto: Processing of crafted ssh-ed25519 public keys allows for panic
1836815 - Gather image registry config (backport to 4.3)
1849176 - Builds fail after running postCommit script if OCP cluster is configured with a container registry whitelist
1874018 - Limit the size of gathered federated metrics from alerts in Insights Operator
1874399 - [DR] etcd-member-recover.sh fails to pull image with unauthorized
1879110 - [4.3] Storage operator stops reconciling when going Upgradeable=False on v1alpha1 CRDs
Bug Fix(es):
- Configuring the system with non-RT kernel will hang the system (BZ#1923220)

- Bugs fixed (https://bugzilla.redhat.com/):
1902111 - CVE-2020-27813 golang-github-gorilla-websocket: integer overflow leads to denial of service
- JIRA issues fixed (https://issues.jboss.org/):
CNF-802 - Infrastructure-provided enablement/disablement of interrupt processing for guaranteed pod CPUs
CNF-854 - Performance tests in CNF Tests
- Description:
Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.
The compliance-operator image updates are now available for OpenShift Container Platform 4.6.
This advisory provides the following updates among others:
- Enhances profile parsing time.
- Fixes excessive resource consumption from the Operator.
- Fixes default content image.
- Fixes outdated remediation handling.

Solution:
For OpenShift Container Platform 4.6 see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this asynchronous errata update:
https://docs.openshift.com/container-platform/4.6/release_notes/ocp-4-6-release-notes.html
Details on how to access this content are available at https://docs.openshift.com/container-platform/4.6/updating/updating-cluster-cli.html.

Bugs fixed (https://bugzilla.redhat.com/):
1897635 - CVE-2020-28362 golang: math/big: panic during recursive division of very large numbers
1918990 - ComplianceSuite scans use quay content image for initContainer
1919135 - [OCP v46] The autoApplyRemediation pauses the machineConfigPool if there is outdated complianceRemediation object present
1919846 - After remediation applied, the compliancecheckresults still reports Failed status for some rules
1920999 - Compliance operator is not displayed when disconnected mode is selected in the OpenShift Web-Console.

Solution:
Before applying this update, make sure all previously released errata relevant to your system have been applied.

Bugs fixed (https://bugzilla.redhat.com/):
1914774 - CVE-2021-20178 ansible: user data leak in snmp_facts module
1915808 - CVE-2021-20180 ansible module: bitbucket_pipeline_variable exposes secured values
1916813 - CVE-2021-20191 ansible: multiple modules expose secured values
1925002 - CVE-2021-20228 ansible: basic.py no_log with fallback option
1939349 - CVE-2021-3447 ansible: multiple modules expose secured values
==========================================================================
Ubuntu Security Notice USN-4274-1
February 10, 2020

libxml2 vulnerabilities
==========================================================================
A security issue affects these releases of Ubuntu and its derivatives:
- Ubuntu 19.10
- Ubuntu 18.04 LTS
- Ubuntu 16.04 LTS
- Ubuntu 14.04 ESM
- Ubuntu 12.04 ESM
Summary:
Several security issues were fixed in libxml2. An attacker could possibly use this issue to cause a denial of service. (CVE-2019-19956, CVE-2020-7595)
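As a rough triage aid, the affected range can be expressed in code. This is an illustrative sketch, not from the advisory: CVE-2019-19956 is fixed upstream in libxml2 2.9.10, but distribution packages such as those listed below carry backported fixes, so a plain upstream version check is not authoritative for patched distro builds.

```python
# Illustrative triage helper (not from the advisory): CVE-2019-19956 affects
# upstream libxml2 before 2.9.10. Plain x.y.z upstream version strings only;
# distro versions like 2.9.4+dfsg1-7ubuntu3.1 backport fixes and must be
# checked against the package versions listed in this notice instead.
def upstream_affected(version: str) -> bool:
    return tuple(int(p) for p in version.split(".")) < (2, 9, 10)
```

For Debian/Ubuntu packages, compare the installed package version against the fixed versions below (e.g. with `dpkg --compare-versions`) rather than the upstream number.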
Update instructions:
The problem can be corrected by updating your system to the following package versions:
Ubuntu 19.10:
  libxml2 2.9.4+dfsg1-7ubuntu3.1
  libxml2-utils 2.9.4+dfsg1-7ubuntu3.1

Ubuntu 18.04 LTS:
  libxml2 2.9.4+dfsg1-6.1ubuntu1.3
  libxml2-utils 2.9.4+dfsg1-6.1ubuntu1.3

Ubuntu 16.04 LTS:
  libxml2 2.9.3+dfsg1-1ubuntu0.7
  libxml2-utils 2.9.3+dfsg1-1ubuntu0.7

Ubuntu 14.04 ESM:
  libxml2 2.9.1+dfsg1-3ubuntu4.13+esm1
  libxml2-utils 2.9.1+dfsg1-3ubuntu4.13+esm1

Ubuntu 12.04 ESM:
  libxml2 2.7.8.dfsg-5.1ubuntu4.22
  libxml2-utils 2.7.8.dfsg-5.1ubuntu4.22
In general, a standard system update will make all the necessary changes.

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
====================================================================
Red Hat Security Advisory
Synopsis:          Moderate: libxml2 security and bug fix update
Advisory ID:       RHSA-2020:3996-01
Product:           Red Hat Enterprise Linux
Advisory URL:      https://access.redhat.com/errata/RHSA-2020:3996
Issue date:        2020-09-29
CVE Names:         CVE-2019-19956 CVE-2019-20388 CVE-2020-7595
====================================================================

1. Summary:
An update for libxml2 is now available for Red Hat Enterprise Linux 7.
Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Relevant releases/architectures:
Red Hat Enterprise Linux Client (v. 7) - x86_64
Red Hat Enterprise Linux Client Optional (v. 7) - x86_64
Red Hat Enterprise Linux ComputeNode (v. 7) - x86_64
Red Hat Enterprise Linux ComputeNode Optional (v. 7) - x86_64
Red Hat Enterprise Linux Server (v. 7) - ppc64, ppc64le, s390x, x86_64
Red Hat Enterprise Linux Server Optional (v. 7) - ppc64, ppc64le, s390x, x86_64
Red Hat Enterprise Linux Workstation (v. 7) - x86_64
Red Hat Enterprise Linux Workstation Optional (v. 7) - x86_64
- Description:
The libxml2 library is a development toolbox providing the implementation of various XML standards.
Additional Changes:
For detailed information on changes in this release, see the Red Hat Enterprise Linux 7.9 Release Notes linked from the References section.
- Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/11258
The desktop must be restarted (log out, then log back in) for this update to take effect.
- Package List:
Red Hat Enterprise Linux Client (v. 7):

Source:
libxml2-2.9.1-6.el7.5.src.rpm

x86_64:
libxml2-2.9.1-6.el7.5.i686.rpm
libxml2-2.9.1-6.el7.5.x86_64.rpm
libxml2-debuginfo-2.9.1-6.el7.5.i686.rpm
libxml2-debuginfo-2.9.1-6.el7.5.x86_64.rpm
libxml2-python-2.9.1-6.el7.5.x86_64.rpm

Red Hat Enterprise Linux Client Optional (v. 7):

x86_64:
libxml2-debuginfo-2.9.1-6.el7.5.i686.rpm
libxml2-debuginfo-2.9.1-6.el7.5.x86_64.rpm
libxml2-devel-2.9.1-6.el7.5.i686.rpm
libxml2-devel-2.9.1-6.el7.5.x86_64.rpm
libxml2-static-2.9.1-6.el7.5.i686.rpm
libxml2-static-2.9.1-6.el7.5.x86_64.rpm

Red Hat Enterprise Linux ComputeNode (v. 7):

Source:
libxml2-2.9.1-6.el7.5.src.rpm

x86_64:
libxml2-2.9.1-6.el7.5.i686.rpm
libxml2-2.9.1-6.el7.5.x86_64.rpm
libxml2-debuginfo-2.9.1-6.el7.5.i686.rpm
libxml2-debuginfo-2.9.1-6.el7.5.x86_64.rpm
libxml2-python-2.9.1-6.el7.5.x86_64.rpm

Red Hat Enterprise Linux ComputeNode Optional (v. 7):

x86_64:
libxml2-debuginfo-2.9.1-6.el7.5.i686.rpm
libxml2-debuginfo-2.9.1-6.el7.5.x86_64.rpm
libxml2-devel-2.9.1-6.el7.5.i686.rpm
libxml2-devel-2.9.1-6.el7.5.x86_64.rpm
libxml2-static-2.9.1-6.el7.5.i686.rpm
libxml2-static-2.9.1-6.el7.5.x86_64.rpm

Red Hat Enterprise Linux Server (v. 7):

Source:
libxml2-2.9.1-6.el7.5.src.rpm

ppc64:
libxml2-2.9.1-6.el7.5.ppc.rpm
libxml2-2.9.1-6.el7.5.ppc64.rpm
libxml2-debuginfo-2.9.1-6.el7.5.ppc.rpm
libxml2-debuginfo-2.9.1-6.el7.5.ppc64.rpm
libxml2-devel-2.9.1-6.el7.5.ppc.rpm
libxml2-devel-2.9.1-6.el7.5.ppc64.rpm
libxml2-python-2.9.1-6.el7.5.ppc64.rpm

ppc64le:
libxml2-2.9.1-6.el7.5.ppc64le.rpm
libxml2-debuginfo-2.9.1-6.el7.5.ppc64le.rpm
libxml2-devel-2.9.1-6.el7.5.ppc64le.rpm
libxml2-python-2.9.1-6.el7.5.ppc64le.rpm

s390x:
libxml2-2.9.1-6.el7.5.s390.rpm
libxml2-2.9.1-6.el7.5.s390x.rpm
libxml2-debuginfo-2.9.1-6.el7.5.s390.rpm
libxml2-debuginfo-2.9.1-6.el7.5.s390x.rpm
libxml2-devel-2.9.1-6.el7.5.s390.rpm
libxml2-devel-2.9.1-6.el7.5.s390x.rpm
libxml2-python-2.9.1-6.el7.5.s390x.rpm

x86_64:
libxml2-2.9.1-6.el7.5.i686.rpm
libxml2-2.9.1-6.el7.5.x86_64.rpm
libxml2-debuginfo-2.9.1-6.el7.5.i686.rpm
libxml2-debuginfo-2.9.1-6.el7.5.x86_64.rpm
libxml2-devel-2.9.1-6.el7.5.i686.rpm
libxml2-devel-2.9.1-6.el7.5.x86_64.rpm
libxml2-python-2.9.1-6.el7.5.x86_64.rpm

Red Hat Enterprise Linux Server Optional (v. 7):

ppc64:
libxml2-debuginfo-2.9.1-6.el7.5.ppc.rpm
libxml2-debuginfo-2.9.1-6.el7.5.ppc64.rpm
libxml2-static-2.9.1-6.el7.5.ppc.rpm
libxml2-static-2.9.1-6.el7.5.ppc64.rpm

ppc64le:
libxml2-debuginfo-2.9.1-6.el7.5.ppc64le.rpm
libxml2-static-2.9.1-6.el7.5.ppc64le.rpm

s390x:
libxml2-debuginfo-2.9.1-6.el7.5.s390.rpm
libxml2-debuginfo-2.9.1-6.el7.5.s390x.rpm
libxml2-static-2.9.1-6.el7.5.s390.rpm
libxml2-static-2.9.1-6.el7.5.s390x.rpm

x86_64:
libxml2-debuginfo-2.9.1-6.el7.5.i686.rpm
libxml2-debuginfo-2.9.1-6.el7.5.x86_64.rpm
libxml2-static-2.9.1-6.el7.5.i686.rpm
libxml2-static-2.9.1-6.el7.5.x86_64.rpm

Red Hat Enterprise Linux Workstation (v. 7):

Source:
libxml2-2.9.1-6.el7.5.src.rpm

x86_64:
libxml2-2.9.1-6.el7.5.i686.rpm
libxml2-2.9.1-6.el7.5.x86_64.rpm
libxml2-debuginfo-2.9.1-6.el7.5.i686.rpm
libxml2-debuginfo-2.9.1-6.el7.5.x86_64.rpm
libxml2-devel-2.9.1-6.el7.5.i686.rpm
libxml2-devel-2.9.1-6.el7.5.x86_64.rpm
libxml2-python-2.9.1-6.el7.5.x86_64.rpm

Red Hat Enterprise Linux Workstation Optional (v. 7):

x86_64:
libxml2-debuginfo-2.9.1-6.el7.5.i686.rpm
libxml2-debuginfo-2.9.1-6.el7.5.x86_64.rpm
libxml2-static-2.9.1-6.el7.5.i686.rpm
libxml2-static-2.9.1-6.el7.5.x86_64.rpm
These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
- References:
https://access.redhat.com/security/cve/CVE-2019-19956
https://access.redhat.com/security/cve/CVE-2019-20388
https://access.redhat.com/security/cve/CVE-2020-7595
https://access.redhat.com/security/updates/classification/#moderate
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/7.9_release_notes/index
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2020 Red Hat, Inc.

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1
iQIVAwUBX3OgG9zjgjWX9erEAQg9vhAAiDkPkj6VlpMKDvgVUY4eU83p4bCnZqos e9kVjDMJrHdYR5iXXc665LOYBG0yyDGdvVLeqxjI9S11UDypRyzy641kwBY6eCru 0yaA88aZ4YpQyIARmmK7cIMFe6JRWHOkEsOfMCtjpbkGLteXdzfUFgJnlRFB0Dai OVrZH3kGb5EbKvJGcWY7cqv5jQhpy802a4EhpHQ1q6vFAbO7D1T6vJlCyP0+ba5N ZoMyrCFWaX5TUjiwFkuyAiSZYyPyxo0+dhqgJaSU44BH4p5imV7c1oh10U7/7k+O Y30M2uLOuArD1ad0t2d23EVr8mRKUr+agoLWC8Pwuq2worTArE/395GKXv2Yvtv9 YCvvCNFIcnG5GaJloqhXkTZM2pCr0+n90WLrNZ0suPArycHU74ROfBNErWegvq2e gpFLyu3S1mpjcBG19Gjg1qgh7FKg57s7PbNzcETK5ParBQeZ4dHHpcr9voP52tYD SJ9ILV9unM5jya5Uwooa6GOFGistLQLntZd22zDcPahu0FxvQlyZFV4oInF0m/7h e/h8NgSwyJKNenZATlsOGmjdcMh95Unztu4bfK8S20/Ej8F/B2PE4Kxha2s0bxsC b9fFKBOIdTCeFi2lTyrctEGQl9ksrW/Va6+uQwe5lKQldwhB3of9QolUu7ud+gdx COt/fBH012Y=udpL -----END PGP SIGNATURE-----
--
RHSA-announce mailing list
RHSA-announce@redhat.com
https://www.redhat.com/mailman/listinfo/rhsa-announce

Solution:
For information on upgrading Ansible Tower, reference the Ansible Tower Upgrade and Migration Guide: https://docs.ansible.com/ansible-tower/latest/html/upgrade-migration-guide/index.html
Show details on source website{ "@context": { "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#", "affected_products": { "@id": "https://www.variotdbs.pl/ref/affected_products" }, "configurations": { "@id": "https://www.variotdbs.pl/ref/configurations" }, "credits": { "@id": "https://www.variotdbs.pl/ref/credits" }, "cvss": { "@id": "https://www.variotdbs.pl/ref/cvss/" }, "description": { "@id": "https://www.variotdbs.pl/ref/description/" }, "exploit_availability": { "@id": "https://www.variotdbs.pl/ref/exploit_availability/" }, "external_ids": { "@id": "https://www.variotdbs.pl/ref/external_ids/" }, "iot": { "@id": "https://www.variotdbs.pl/ref/iot/" }, "iot_taxonomy": { "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/" }, "patch": { "@id": "https://www.variotdbs.pl/ref/patch/" }, "problemtype_data": { "@id": "https://www.variotdbs.pl/ref/problemtype_data/" }, "references": { "@id": "https://www.variotdbs.pl/ref/references/" }, "sources": { "@id": "https://www.variotdbs.pl/ref/sources/" }, "sources_release_date": { "@id": "https://www.variotdbs.pl/ref/sources_release_date/" }, "sources_update_date": { "@id": "https://www.variotdbs.pl/ref/sources_update_date/" }, "threat_type": { "@id": "https://www.variotdbs.pl/ref/threat_type/" }, "title": { "@id": "https://www.variotdbs.pl/ref/title/" }, "type": { "@id": "https://www.variotdbs.pl/ref/type/" } }, "@id": "https://www.variotdbs.pl/vuln/VAR-201912-1044", "affected_products": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/affected_products#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "model": "active iq unified manager", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "clustered data ontap antivirus connector", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "real user experience 
insight", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "13.3.1.0" }, { "model": "ubuntu linux", "scope": "eq", "trust": 1.0, "vendor": "canonical", "version": "18.04" }, { "model": "fedora", "scope": "eq", "trust": 1.0, "vendor": "fedoraproject", "version": "30" }, { "model": "libxml2", "scope": "lt", "trust": 1.0, "vendor": "xmlsoft", "version": "2.9.10" }, { "model": "linux", "scope": "eq", "trust": 1.0, "vendor": "debian", "version": "8.0" }, { "model": "linux", "scope": "eq", "trust": 1.0, "vendor": "debian", "version": "9.0" }, { "model": "ubuntu linux", "scope": "eq", "trust": 1.0, "vendor": "canonical", "version": "14.04" }, { "model": "sinema remote connect server", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "3.0" }, { "model": "ubuntu linux", "scope": "eq", "trust": 1.0, "vendor": "canonical", "version": "16.04" }, { "model": "fedora", "scope": "eq", "trust": 1.0, "vendor": "fedoraproject", "version": "32" }, { "model": "clustered data ontap", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "steelstore cloud integrated storage", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "ubuntu linux", "scope": "eq", "trust": 1.0, "vendor": "canonical", "version": "19.10" }, { "model": "ubuntu linux", "scope": "eq", "trust": 1.0, "vendor": "canonical", "version": "12.04" }, { "model": "manageability software development kit", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "ontap select deploy administration utility", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null } ], "sources": [ { "db": "NVD", "id": "CVE-2019-19956" } ] }, "configurations": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/configurations#", "children": { "@container": "@list" }, "cpe_match": { "@container": "@list" }, "data": { "@container": "@list" }, "nodes": { "@container": "@list" } }, "data": [ { "CVE_data_version": "4.0", "nodes": [ { 
"children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:xmlsoft:libxml2:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "2.9.10", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:debian:debian_linux:8.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:debian:debian_linux:9.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:oracle:real_user_experience_insight:13.3.1.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:30:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:32:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:canonical:ubuntu_linux:12.04:*:*:*:-:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:canonical:ubuntu_linux:14.04:*:*:*:esm:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:canonical:ubuntu_linux:16.04:*:*:*:lts:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:canonical:ubuntu_linux:18.04:*:*:*:lts:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:canonical:ubuntu_linux:19.10:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:netapp:active_iq_unified_manager:-:*:*:*:*:vmware_vsphere:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:clustered_data_ontap:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:clustered_data_ontap_antivirus_connector:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:manageability_software_development_kit:-:*:*:*:*:*:*:*", "cpe_name": [], 
"vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:ontap_select_deploy_administration_utility:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:steelstore_cloud_integrated_storage:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:siemens:sinema_remote_connect_server:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "3.0", "vulnerable": true } ], "operator": "OR" } ] } ], "sources": [ { "db": "NVD", "id": "CVE-2019-19956" } ] }, "credits": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/credits#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Red Hat", "sources": [ { "db": "PACKETSTORM", "id": "162694" }, { "db": "PACKETSTORM", "id": "160889" }, { "db": "PACKETSTORM", "id": "159661" }, { "db": "PACKETSTORM", "id": "161548" }, { "db": "PACKETSTORM", "id": "161429" }, { "db": "PACKETSTORM", "id": "162142" }, { "db": "PACKETSTORM", "id": "159349" }, { "db": "PACKETSTORM", "id": "159552" } ], "trust": 0.8 }, "cve": "CVE-2019-19956", "cvss": { "@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2" }, "cvssV3": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, "severity": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": "https://www.variotdbs.pl/ref/cvss/severity" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "cvssV2": [ { "acInsufInfo": false, "accessComplexity": "LOW", "accessVector": "NETWORK", "authentication": "NONE", "author": "NVD", "availabilityImpact": "PARTIAL", 
"baseScore": 5.0, "confidentialityImpact": "NONE", "exploitabilityScore": 10.0, "id": "CVE-2019-19956", "impactScore": 2.9, "integrityImpact": "NONE", "obtainAllPrivilege": false, "obtainOtherPrivilege": false, "obtainUserPrivilege": false, "severity": "MEDIUM", "trust": 1.1, "userInteractionRequired": false, "vectorString": "AV:N/AC:L/Au:N/C:N/I:N/A:P", "version": "2.0" } ], "cvssV3": [ { "attackComplexity": "LOW", "attackVector": "NETWORK", "author": "NVD", "availabilityImpact": "HIGH", "baseScore": 7.5, "baseSeverity": "HIGH", "confidentialityImpact": "NONE", "exploitabilityScore": 3.9, "id": "CVE-2019-19956", "impactScore": 3.6, "integrityImpact": "NONE", "privilegesRequired": "NONE", "scope": "UNCHANGED", "trust": 1.0, "userInteraction": "NONE", "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H", "version": "3.1" } ], "severity": [ { "author": "NVD", "id": "CVE-2019-19956", "trust": 1.0, "value": "HIGH" }, { "author": "CNNVD", "id": "CNNVD-201912-1088", "trust": 0.6, "value": "HIGH" }, { "author": "VULMON", "id": "CVE-2019-19956", "trust": 0.1, "value": "MEDIUM" } ] } ], "sources": [ { "db": "VULMON", "id": "CVE-2019-19956" }, { "db": "CNNVD", "id": "CNNVD-201912-1088" }, { "db": "NVD", "id": "CVE-2019-19956" } ] }, "description": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "xmlParseBalancedChunkMemoryRecover in parser.c in libxml2 before 2.9.10 has a memory leak related to newDoc-\u003eoldNs. Summary:\n\nOpenshift Serverless 1.10.2 is now available. Solution:\n\nSee the documentation at:\nhttps://access.redhat.com/documentation/en-us/openshift_container_platform/\n4.5/html/serverless_applications/index\n\n4. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1918750 - CVE-2021-3114 golang: crypto/elliptic: incorrect operations on the P-224 curve\n1918761 - CVE-2021-3115 golang: cmd/go: packages using cgo can cause arbitrary code execution at build time\n\n5. Solution:\n\nDownload the release images via:\n\nquay.io/redhat/quay:v3.3.3\nquay.io/redhat/clair-jwt:v3.3.3\nquay.io/redhat/quay-builder:v3.3.3\nquay.io/redhat/clair:v3.3.3\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n1905758 - CVE-2020-27831 quay: email notifications authorization bypass\n1905784 - CVE-2020-27832 quay: persistent XSS in repository notification display\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nPROJQUAY-1124 - NVD feed is broken for latest Clair v2 version\n\n6. \n\nBug Fix(es):\n\n* Gather image registry config (backport to 4.3) (BZ#1836815)\n\n* Builds fail after running postCommit script if OCP cluster is configured\nwith a container registry whitelist (BZ#1849176)\n\n* Login with OpenShift not working after cluster upgrade (BZ#1852429)\n\n* Limit the size of gathered federated metrics from alerts in Insights\nOperator (BZ#1874018)\n\n* [4.3] Storage operator stops reconciling when going Upgradeable=False on\nv1alpha1 CRDs (BZ#1879110)\n\n* [release 4.3] OpenShift APIs become unavailable for more than 15 minutes\nafter one of master nodes went down(OAuth) (BZ#1880293)\n\nYou may download the oc tool and use it to inspect release image metadata\nas follows:\n\n(For x86_64 architecture)\n\n $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.3.40-x86_64\n\nThe image digest is\nsha256:9ff90174a170379e90a9ead6e0d8cf6f439004191f80762764a5ca3dbaab01dc\n\n(For s390x architecture)\n\n $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.3.40-s390x\nThe image digest is\nsha256:605ddde0442e604cfe2d6bd1541ce48df5956fe626edf9cc95b1fca75d231b64\n\n(For ppc64le architecture)\n\n $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.3.40-ppc64le\n\nThe 
image digest is\nsha256:d3c9e391c145338eae3feb7f6a4e487dadc8139a353117d642fe686d277bcccc\n\n3. Bugs fixed (https://bugzilla.redhat.com/):\n\n1804533 - CVE-2020-9283 golang.org/x/crypto: Processing of crafted ssh-ed25519 public keys allows for panic\n1836815 - Gather image registry config (backport to 4.3)\n1849176 - Builds fail after running postCommit script if OCP cluster is configured with a container registry whitelist\n1874018 - Limit the size of gathered federated metrics from alerts in Insights Operator\n1874399 - [DR] etcd-member-recover.sh fails to pull image with unauthorized\n1879110 - [4.3] Storage operator stops reconciling when going Upgradeable=False on v1alpha1 CRDs\n\n5. \n\nBug Fix(es):\n\n* Configuring the system with non-RT kernel will hang the system\n(BZ#1923220)\n\n3. Bugs fixed (https://bugzilla.redhat.com/):\n\n1902111 - CVE-2020-27813 golang-github-gorilla-websocket: integer overflow leads to denial of service\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nCNF-802 - Infrastructure-provided enablement/disablement of interrupt processing for guaranteed pod CPUs\nCNF-854 - Performance tests in CNF Tests\n\n6. Description:\n\nRed Hat OpenShift Container Platform is Red Hat\u0027s cloud computing\nKubernetes application platform solution designed for on-premise or private\ncloud deployments. \n\nThe compliance-operator image updates are now available for OpenShift\nContainer Platform 4.6. \n\nThis advisory provides the following updates among others:\n\n* Enhances profile parsing time. \n* Fixes excessive resource consumption from the Operator. \n* Fixes default content image. \n* Fixes outdated remediation handling. 
Solution:\n\nFor OpenShift Container Platform 4.6 see the following documentation, which\nwill be updated shortly for this release, for important instructions on how\nto upgrade your cluster and fully apply this asynchronous errata update:\n\nhttps://docs.openshift.com/container-platform/4.6/release_notes/ocp-4-6-rel\nease-notes.html\n\nDetails on how to access this content are available at\nhttps://docs.openshift.com/container-platform/4.6/updating/updating-cluster\n- -cli.html. Bugs fixed (https://bugzilla.redhat.com/):\n\n1897635 - CVE-2020-28362 golang: math/big: panic during recursive division of very large numbers\n1918990 - ComplianceSuite scans use quay content image for initContainer\n1919135 - [OCP v46] The autoApplyRemediation pauses the machineConfigPool if there is outdated complianceRemediation object present\n1919846 - After remediation applied, the compliancecheckresults still reports Failed status for some rules\n1920999 - Compliance operator is not displayed when disconnected mode is selected in the OpenShift Web-Console. Solution:\n\nBefore applying this update, make sure all previously released errata\nrelevant to your system have been applied. Bugs fixed (https://bugzilla.redhat.com/):\n\n1914774 - CVE-2021-20178 ansible: user data leak in snmp_facts module\n1915808 - CVE-2021-20180 ansible module: bitbucket_pipeline_variable exposes secured values\n1916813 - CVE-2021-20191 ansible: multiple modules expose secured values\n1925002 - CVE-2021-20228 ansible: basic.py no_log with fallback option\n1939349 - CVE-2021-3447 ansible: multiple modules expose secured values\n\n5. 
==========================================================================\nUbuntu Security Notice USN-4274-1\nFebruary 10, 2020\n\nlibxml2 vulnerabilities\n==========================================================================\n\nA security issue affects these releases of Ubuntu and its derivatives:\n\n- Ubuntu 19.10\n- Ubuntu 18.04 LTS\n- Ubuntu 16.04 LTS\n- Ubuntu 14.04 ESM\n- Ubuntu 12.04 ESM\n\nSummary:\n\nSeveral security issues were fixed in libxml2. \nAn attacker could possibly use this issue to cause a denial of service. \n(CVE-2019-19956, CVE-2020-7595)\n\nUpdate instructions:\n\nThe problem can be corrected by updating your system to the following\npackage versions:\n\nUbuntu 19.10:\n libxml2 2.9.4+dfsg1-7ubuntu3.1\n libxml2-utils 2.9.4+dfsg1-7ubuntu3.1\n\nUbuntu 18.04 LTS:\n libxml2 2.9.4+dfsg1-6.1ubuntu1.3\n libxml2-utils 2.9.4+dfsg1-6.1ubuntu1.3\n\nUbuntu 16.04 LTS:\n libxml2 2.9.3+dfsg1-1ubuntu0.7\n libxml2-utils 2.9.3+dfsg1-1ubuntu0.7\n\nUbuntu 14.04 ESM:\n libxml2 2.9.1+dfsg1-3ubuntu4.13+esm1\n libxml2-utils 2.9.1+dfsg1-3ubuntu4.13+esm1\n\nUbuntu 12.04 ESM:\n libxml2 2.7.8.dfsg-5.1ubuntu4.22\n libxml2-utils 2.7.8.dfsg-5.1ubuntu4.22\n\nIn general, a standard system update will make all the necessary changes. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n==================================================================== \nRed Hat Security Advisory\n\nSynopsis: Moderate: libxml2 security and bug fix update\nAdvisory ID: RHSA-2020:3996-01\nProduct: Red Hat Enterprise Linux\nAdvisory URL: https://access.redhat.com/errata/RHSA-2020:3996\nIssue date: 2020-09-29\nCVE Names: CVE-2019-19956 CVE-2019-20388 CVE-2020-7595\n====================================================================\n1. Summary:\n\nAn update for libxml2 is now available for Red Hat Enterprise Linux 7. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. 
A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRed Hat Enterprise Linux Client (v. 7) - x86_64\nRed Hat Enterprise Linux Client Optional (v. 7) - x86_64\nRed Hat Enterprise Linux ComputeNode (v. 7) - x86_64\nRed Hat Enterprise Linux ComputeNode Optional (v. 7) - x86_64\nRed Hat Enterprise Linux Server (v. 7) - ppc64, ppc64le, s390x, x86_64\nRed Hat Enterprise Linux Server Optional (v. 7) - ppc64, ppc64le, s390x, x86_64\nRed Hat Enterprise Linux Workstation (v. 7) - x86_64\nRed Hat Enterprise Linux Workstation Optional (v. 7) - x86_64\n\n3. Description:\n\nThe libxml2 library is a development toolbox providing the implementation\nof various XML standards. \n\nAdditional Changes:\n\nFor detailed information on changes in this release, see the Red Hat\nEnterprise Linux 7.9 Release Notes linked from the References section. \n\n4. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\nThe desktop must be restarted (log out, then log back in) for this update\nto take effect. \n\n5. Package List:\n\nRed Hat Enterprise Linux Client (v. 7):\n\nSource:\nlibxml2-2.9.1-6.el7.5.src.rpm\n\nx86_64:\nlibxml2-2.9.1-6.el7.5.i686.rpm\nlibxml2-2.9.1-6.el7.5.x86_64.rpm\nlibxml2-debuginfo-2.9.1-6.el7.5.i686.rpm\nlibxml2-debuginfo-2.9.1-6.el7.5.x86_64.rpm\nlibxml2-python-2.9.1-6.el7.5.x86_64.rpm\n\nRed Hat Enterprise Linux Client Optional (v. 7):\n\nx86_64:\nlibxml2-debuginfo-2.9.1-6.el7.5.i686.rpm\nlibxml2-debuginfo-2.9.1-6.el7.5.x86_64.rpm\nlibxml2-devel-2.9.1-6.el7.5.i686.rpm\nlibxml2-devel-2.9.1-6.el7.5.x86_64.rpm\nlibxml2-static-2.9.1-6.el7.5.i686.rpm\nlibxml2-static-2.9.1-6.el7.5.x86_64.rpm\n\nRed Hat Enterprise Linux ComputeNode (v. 
7):\n\nSource:\nlibxml2-2.9.1-6.el7.5.src.rpm\n\nx86_64:\nlibxml2-2.9.1-6.el7.5.i686.rpm\nlibxml2-2.9.1-6.el7.5.x86_64.rpm\nlibxml2-debuginfo-2.9.1-6.el7.5.i686.rpm\nlibxml2-debuginfo-2.9.1-6.el7.5.x86_64.rpm\nlibxml2-python-2.9.1-6.el7.5.x86_64.rpm\n\nRed Hat Enterprise Linux ComputeNode Optional (v. 7):\n\nx86_64:\nlibxml2-debuginfo-2.9.1-6.el7.5.i686.rpm\nlibxml2-debuginfo-2.9.1-6.el7.5.x86_64.rpm\nlibxml2-devel-2.9.1-6.el7.5.i686.rpm\nlibxml2-devel-2.9.1-6.el7.5.x86_64.rpm\nlibxml2-static-2.9.1-6.el7.5.i686.rpm\nlibxml2-static-2.9.1-6.el7.5.x86_64.rpm\n\nRed Hat Enterprise Linux Server (v. 7):\n\nSource:\nlibxml2-2.9.1-6.el7.5.src.rpm\n\nppc64:\nlibxml2-2.9.1-6.el7.5.ppc.rpm\nlibxml2-2.9.1-6.el7.5.ppc64.rpm\nlibxml2-debuginfo-2.9.1-6.el7.5.ppc.rpm\nlibxml2-debuginfo-2.9.1-6.el7.5.ppc64.rpm\nlibxml2-devel-2.9.1-6.el7.5.ppc.rpm\nlibxml2-devel-2.9.1-6.el7.5.ppc64.rpm\nlibxml2-python-2.9.1-6.el7.5.ppc64.rpm\n\nppc64le:\nlibxml2-2.9.1-6.el7.5.ppc64le.rpm\nlibxml2-debuginfo-2.9.1-6.el7.5.ppc64le.rpm\nlibxml2-devel-2.9.1-6.el7.5.ppc64le.rpm\nlibxml2-python-2.9.1-6.el7.5.ppc64le.rpm\n\ns390x:\nlibxml2-2.9.1-6.el7.5.s390.rpm\nlibxml2-2.9.1-6.el7.5.s390x.rpm\nlibxml2-debuginfo-2.9.1-6.el7.5.s390.rpm\nlibxml2-debuginfo-2.9.1-6.el7.5.s390x.rpm\nlibxml2-devel-2.9.1-6.el7.5.s390.rpm\nlibxml2-devel-2.9.1-6.el7.5.s390x.rpm\nlibxml2-python-2.9.1-6.el7.5.s390x.rpm\n\nx86_64:\nlibxml2-2.9.1-6.el7.5.i686.rpm\nlibxml2-2.9.1-6.el7.5.x86_64.rpm\nlibxml2-debuginfo-2.9.1-6.el7.5.i686.rpm\nlibxml2-debuginfo-2.9.1-6.el7.5.x86_64.rpm\nlibxml2-devel-2.9.1-6.el7.5.i686.rpm\nlibxml2-devel-2.9.1-6.el7.5.x86_64.rpm\nlibxml2-python-2.9.1-6.el7.5.x86_64.rpm\n\nRed Hat Enterprise Linux Server Optional (v. 
7):\n\nppc64:\nlibxml2-debuginfo-2.9.1-6.el7.5.ppc.rpm\nlibxml2-debuginfo-2.9.1-6.el7.5.ppc64.rpm\nlibxml2-static-2.9.1-6.el7.5.ppc.rpm\nlibxml2-static-2.9.1-6.el7.5.ppc64.rpm\n\nppc64le:\nlibxml2-debuginfo-2.9.1-6.el7.5.ppc64le.rpm\nlibxml2-static-2.9.1-6.el7.5.ppc64le.rpm\n\ns390x:\nlibxml2-debuginfo-2.9.1-6.el7.5.s390.rpm\nlibxml2-debuginfo-2.9.1-6.el7.5.s390x.rpm\nlibxml2-static-2.9.1-6.el7.5.s390.rpm\nlibxml2-static-2.9.1-6.el7.5.s390x.rpm\n\nx86_64:\nlibxml2-debuginfo-2.9.1-6.el7.5.i686.rpm\nlibxml2-debuginfo-2.9.1-6.el7.5.x86_64.rpm\nlibxml2-static-2.9.1-6.el7.5.i686.rpm\nlibxml2-static-2.9.1-6.el7.5.x86_64.rpm\n\nRed Hat Enterprise Linux Workstation (v. 7):\n\nSource:\nlibxml2-2.9.1-6.el7.5.src.rpm\n\nx86_64:\nlibxml2-2.9.1-6.el7.5.i686.rpm\nlibxml2-2.9.1-6.el7.5.x86_64.rpm\nlibxml2-debuginfo-2.9.1-6.el7.5.i686.rpm\nlibxml2-debuginfo-2.9.1-6.el7.5.x86_64.rpm\nlibxml2-devel-2.9.1-6.el7.5.i686.rpm\nlibxml2-devel-2.9.1-6.el7.5.x86_64.rpm\nlibxml2-python-2.9.1-6.el7.5.x86_64.rpm\n\nRed Hat Enterprise Linux Workstation Optional (v. 7):\n\nx86_64:\nlibxml2-debuginfo-2.9.1-6.el7.5.i686.rpm\nlibxml2-debuginfo-2.9.1-6.el7.5.x86_64.rpm\nlibxml2-static-2.9.1-6.el7.5.i686.rpm\nlibxml2-static-2.9.1-6.el7.5.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. References:\n\nhttps://access.redhat.com/security/cve/CVE-2019-19956\nhttps://access.redhat.com/security/cve/CVE-2019-20388\nhttps://access.redhat.com/security/cve/CVE-2020-7595\nhttps://access.redhat.com/security/updates/classification/#moderate\nhttps://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/7.9_release_notes/index\n\n8. Contact:\n\nThe Red Hat security contact is <secalert@redhat.com>. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2020 Red Hat, Inc. 
\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBX3OgG9zjgjWX9erEAQg9vhAAiDkPkj6VlpMKDvgVUY4eU83p4bCnZqos\ne9kVjDMJrHdYR5iXXc665LOYBG0yyDGdvVLeqxjI9S11UDypRyzy641kwBY6eCru\n0yaA88aZ4YpQyIARmmK7cIMFe6JRWHOkEsOfMCtjpbkGLteXdzfUFgJnlRFB0Dai\nOVrZH3kGb5EbKvJGcWY7cqv5jQhpy802a4EhpHQ1q6vFAbO7D1T6vJlCyP0+ba5N\nZoMyrCFWaX5TUjiwFkuyAiSZYyPyxo0+dhqgJaSU44BH4p5imV7c1oh10U7/7k+O\nY30M2uLOuArD1ad0t2d23EVr8mRKUr+agoLWC8Pwuq2worTArE/395GKXv2Yvtv9\nYCvvCNFIcnG5GaJloqhXkTZM2pCr0+n90WLrNZ0suPArycHU74ROfBNErWegvq2e\ngpFLyu3S1mpjcBG19Gjg1qgh7FKg57s7PbNzcETK5ParBQeZ4dHHpcr9voP52tYD\nSJ9ILV9unM5jya5Uwooa6GOFGistLQLntZd22zDcPahu0FxvQlyZFV4oInF0m/7h\ne/h8NgSwyJKNenZATlsOGmjdcMh95Unztu4bfK8S20/Ej8F/B2PE4Kxha2s0bxsC\nb9fFKBOIdTCeFi2lTyrctEGQl9ksrW/Va6+uQwe5lKQldwhB3of9QolUu7ud+gdx\nCOt/fBH012Y=udpL\n-----END PGP SIGNATURE-----\n\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://www.redhat.com/mailman/listinfo/rhsa-announce\n. Solution:\n\nFor information on upgrading Ansible Tower, reference the Ansible Tower\nUpgrade and Migration Guide:\nhttps://docs.ansible.com/ansible-tower/latest/html/upgrade-migration-guide/\nindex.html\n\n4", "sources": [ { "db": "NVD", "id": "CVE-2019-19956" }, { "db": "VULMON", "id": "CVE-2019-19956" }, { "db": "PACKETSTORM", "id": "162694" }, { "db": "PACKETSTORM", "id": "160889" }, { "db": "PACKETSTORM", "id": "159661" }, { "db": "PACKETSTORM", "id": "161548" }, { "db": "PACKETSTORM", "id": "161429" }, { "db": "PACKETSTORM", "id": "162142" }, { "db": "PACKETSTORM", "id": "156276" }, { "db": "PACKETSTORM", "id": "159349" }, { "db": "PACKETSTORM", "id": "159552" } ], "trust": 1.8 }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "NVD", "id": "CVE-2019-19956", "trust": 2.6 }, { "db": "SIEMENS", "id": "SSA-292794", "trust": 1.7 }, { "db": 
"ICS CERT", "id": "ICSA-21-103-08", "trust": 1.7 }, { "db": "PACKETSTORM", "id": "162694", "trust": 0.7 }, { "db": "PACKETSTORM", "id": "160889", "trust": 0.7 }, { "db": "PACKETSTORM", "id": "159661", "trust": 0.7 }, { "db": "PACKETSTORM", "id": "161429", "trust": 0.7 }, { "db": "PACKETSTORM", "id": "162142", "trust": 0.7 }, { "db": "PACKETSTORM", "id": "156276", "trust": 0.7 }, { "db": "PACKETSTORM", "id": "159349", "trust": 0.7 }, { "db": "AUSCERT", "id": "ESB-2021.0584", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2023.3732", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.1207", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2020.3535", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.2604", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.1744", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2020.4513", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.1242", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.1727", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.4058", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2020.1826", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2020.2162", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2020.3364", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.0234", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2020.3631", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2020.2475", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.0864", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2020.0471", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.0845", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2020.0025", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2020.3868", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.0986", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.3550", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.0691", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2020.4100", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2020.3102", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.0319", "trust": 0.6 
}, { "db": "AUSCERT", "id": "ESB-2021.1193", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.0171", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.0099", "trust": 0.6 }, { "db": "PACKETSTORM", "id": "159851", "trust": 0.6 }, { "db": "PACKETSTORM", "id": "160961", "trust": 0.6 }, { "db": "PACKETSTORM", "id": "159553", "trust": 0.6 }, { "db": "PACKETSTORM", "id": "162130", "trust": 0.6 }, { "db": "PACKETSTORM", "id": "161536", "trust": 0.6 }, { "db": "PACKETSTORM", "id": "161727", "trust": 0.6 }, { "db": "PACKETSTORM", "id": "158168", "trust": 0.6 }, { "db": "PACKETSTORM", "id": "161916", "trust": 0.6 }, { "db": "PACKETSTORM", "id": "160125", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2021041514", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2021052216", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2022072097", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2021111735", "trust": 0.6 }, { "db": "CNNVD", "id": "CNNVD-201912-1088", "trust": 0.6 }, { "db": "VULMON", "id": "CVE-2019-19956", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "161548", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "159552", "trust": 0.1 } ], "sources": [ { "db": "VULMON", "id": "CVE-2019-19956" }, { "db": "PACKETSTORM", "id": "162694" }, { "db": "PACKETSTORM", "id": "160889" }, { "db": "PACKETSTORM", "id": "159661" }, { "db": "PACKETSTORM", "id": "161548" }, { "db": "PACKETSTORM", "id": "161429" }, { "db": "PACKETSTORM", "id": "162142" }, { "db": "PACKETSTORM", "id": "156276" }, { "db": "PACKETSTORM", "id": "159349" }, { "db": "PACKETSTORM", "id": "159552" }, { "db": "CNNVD", "id": "CNNVD-201912-1088" }, { "db": "NVD", "id": "CVE-2019-19956" } ] }, "id": "VAR-201912-1044", "iot": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": true, "sources": [ { "db": "VARIoT devices database", "id": null } ], "trust": 0.54152095 }, "last_update_date": 
"2023-11-07T19:16:36.131000Z", "patch": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/patch#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "title": "libxml2 Security vulnerabilities", "trust": 0.6, "url": "http://123.124.177.30/web/xxk/bdxqbyid.tag?id=106417" }, { "title": "Red Hat: Moderate: libxml2 security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20204479 - security advisory" }, { "title": "Red Hat: Moderate: libxml2 security and bug fix update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20203996 - security advisory" }, { "title": "Ubuntu Security Notice: libxml2 vulnerabilities", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ubuntu_security_notice\u0026qid=usn-4274-1" }, { "title": "Red Hat: Important: Red Hat JBoss Core Services Apache HTTP Server 2.4.37 SP3 security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20202646 - security advisory" }, { "title": "Red Hat: Important: Red Hat JBoss Core Services Apache HTTP Server 2.4.37 SP3 security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20202644 - security advisory" }, { "title": "Amazon Linux AMI: ALAS-2020-1438", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux_ami\u0026qid=alas-2020-1438" }, { "title": "Amazon Linux 2: ALAS2-2020-1534", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=alas2-2020-1534" }, { "title": "Siemens Security Advisories: Siemens Security Advisory", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=siemens_security_advisories\u0026qid=0d160980ab72db34060d62c89304b6f2" }, { "title": "Red Hat: Moderate: Release of 
OpenShift Serverless 1.11.0", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20205149 - security advisory" }, { "title": "Red Hat: Moderate: security update - Red Hat Ansible Tower 3.6 runner release (CVE-2019-18874)", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20204255 - security advisory" }, { "title": "Red Hat: Moderate: security update - Red Hat Ansible Tower 3.7 runner release (CVE-2019-18874)", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20204254 - security advisory" }, { "title": "Red Hat: Moderate: Release of OpenShift Serverless 1.12.0", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20210146 - security advisory" }, { "title": "Red Hat: Low: OpenShift Container Platform 4.3.40 security and bug fix update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20204264 - security advisory" }, { "title": "Red Hat: Moderate: OpenShift Container Platform 4.6 compliance-operator security and bug fix update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20210190 - security advisory" }, { "title": "Red Hat: Moderate: Red Hat Quay v3.3.3 bug fix and security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20210050 - security advisory" }, { "title": "Red Hat: Moderate: OpenShift Container Platform 4.6 compliance-operator security and bug fix update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20210436 - security advisory" }, { "title": "Red Hat: Moderate: Red Hat OpenShift Container Storage 4.6.0 security, bug fix, enhancement update", "trust": 0.1, "url": 
"https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20205605 - security advisory" }, { "title": "IBM: Security Bulletin: IBM Security Guardium is affected by multiple vulnerabilities", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=3201548b0e11fd3ecd83fd36fc045a8e" }, { "title": "Siemens Security Advisories: Siemens Security Advisory", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=siemens_security_advisories\u0026qid=ec6577109e640dac19a6ddb978afe82d" } ], "sources": [ { "db": "VULMON", "id": "CVE-2019-19956" }, { "db": "CNNVD", "id": "CNNVD-201912-1088" } ] }, "problemtype_data": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "problemtype": "CWE-401", "trust": 1.0 } ], "sources": [ { "db": "NVD", "id": "CVE-2019-19956" } ] }, "references": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/references#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 2.4, "url": "https://usn.ubuntu.com/4274-1/" }, { "trust": 2.3, "url": "https://us-cert.cisa.gov/ics/advisories/icsa-21-103-08" }, { "trust": 1.7, "url": "https://gitlab.gnome.org/gnome/libxml2/commit/5a02583c7e683896d84878bd90641d8d9b0d0549" }, { "trust": 1.7, "url": "https://lists.debian.org/debian-lts-announce/2019/12/msg00032.html" }, { "trust": 1.7, "url": "https://security.netapp.com/advisory/ntap-20200114-0002/" }, { "trust": 1.7, "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/5r55zr52rmbx24tqtwhciwkjvrv6yawi/" }, { "trust": 1.7, "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/jdpf3aavkuakdyfmfksiqsvvs3eefpqh/" }, { "trust": 1.7, "url": 
"http://lists.opensuse.org/opensuse-security-announce/2020-05/msg00047.html" }, { "trust": 1.7, "url": "http://lists.opensuse.org/opensuse-security-announce/2020-06/msg00005.html" }, { "trust": 1.7, "url": "https://www.oracle.com/security-alerts/cpujul2020.html" }, { "trust": 1.7, "url": "https://lists.debian.org/debian-lts-announce/2020/09/msg00009.html" }, { "trust": 1.7, "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-292794.pdf" }, { "trust": 1.5, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19956" }, { "trust": 1.4, "url": "https://access.redhat.com/security/cve/cve-2019-19956" }, { "trust": 0.8, "url": "https://access.redhat.com/security/cve/cve-2020-7595" }, { "trust": 0.8, "url": "https://access.redhat.com/security/cve/cve-2019-20388" }, { "trust": 0.8, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20388" }, { "trust": 0.8, "url": "https://access.redhat.com/security/team/contact/" }, { "trust": 0.8, "url": "https://bugzilla.redhat.com/):" }, { "trust": 0.7, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-20843" }, { "trust": 0.7, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-15903" }, { "trust": 0.7, "url": "https://access.redhat.com/security/cve/cve-2018-20843" }, { "trust": 0.7, "url": "https://access.redhat.com/security/cve/cve-2019-15903" }, { "trust": 0.7, "url": "https://access.redhat.com/security/updates/classification/#moderate" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2019-16935" }, { "trust": 0.6, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-16935" }, { "trust": 0.6, "url": "https://www.debian.org/lts/security/2019/dla-2048" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/161536/red-hat-security-advisory-2020-5635-01.html" }, { "trust": 0.6, "url": "https://www.ibm.com/support/pages/node/6455281" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2020.3535/" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2021052216" }, { "trust": 0.6, 
"url": "https://www.auscert.org.au/bulletins/esb-2020.2162/" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.1727" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.1207" }, { "trust": 0.6, "url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-mq-appliance-is-affected-by-libxml2-vulnerabilities-cve-2019-19956-cve-2019-20388-cve-2020-7595/" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/161429/red-hat-security-advisory-2021-0436-01.html" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/161727/red-hat-security-advisory-2021-0778-01.html" }, { "trust": 0.6, "url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-security-guardium-is-affected-by-multiple-vulnerabilities-4/" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.0171/" }, { "trust": 0.6, "url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-bladecenter-advanced-management-module-amm-is-affected-by-vulnerabilities-in-libxml2/" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/159553/red-hat-security-advisory-2020-4255-01.html" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2020.4100/" }, { "trust": 0.6, "url": "https://www.ibm.com/support/pages/node/6520474" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.0845" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2020.0025/" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.0691" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/162694/red-hat-security-advisory-2021-2021-01.html" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.0099/" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.4058" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/160889/red-hat-security-advisory-2021-0050-01.html" }, { "trust": 0.6, "url": 
"https://packetstormsecurity.com/files/162130/red-hat-security-advisory-2021-1129-01.html" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/160125/red-hat-security-advisory-2020-5149-01.html" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/160961/red-hat-security-advisory-2021-0146-01.html" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2020.3868/" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.1744" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022072097" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/158168/red-hat-security-advisory-2020-2646-01.html" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2021111735" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2020.2475/" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.0319/" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2020.0471/" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2020.4513/" }, { "trust": 0.6, "url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-qradar-network-security-is-affected-by-multiple-vulnerabilities-2/" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.0234/" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.0584" }, { "trust": 0.6, "url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-security-guardium-is-affected-by-multiple-vulnerabilities-6/" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.1193" }, { "trust": 0.6, "url": "https://vigilance.fr/vulnerability/libxml2-memory-leak-via-xmlparsebalancedchunkmemoryrecover-31236" }, { "trust": 0.6, "url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-flex-system-chassis-management-module-cmm-is-affected-by-vulnerabilities-in-libxml2/" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.0864" }, { "trust": 
0.6, "url": "https://www.auscert.org.au/bulletins/esb-2023.3732" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.0986" }, { "trust": 0.6, "url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-bootable-media-creator-bomc-is-affected-by-vulnerabilities-in-libxml2/" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/159349/red-hat-security-advisory-2020-3996-01.html" }, { "trust": 0.6, "url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-qradar-siem-is-vulnerable-to-using-components-with-known-vulnerabilities-6/" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.2604" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/159851/red-hat-security-advisory-2020-4479-01.html" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/156276/ubuntu-security-notice-usn-4274-1.html" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.1242" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2021041514" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/159661/red-hat-security-advisory-2020-4264-01.html" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2020.1826/" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2020.3102/" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.3550" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/161916/red-hat-security-advisory-2021-0949-01.html" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/162142/red-hat-security-advisory-2021-1079-01.html" }, { "trust": 0.6, "url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-rackswitch-firmware-products-are-affected-by-vulnerabilities-in-libxml2/" }, { "trust": 0.6, "url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-security-guardium-is-affected-by-multiple-vulnerabilities-5/" }, { "trust": 0.6, "url": 
"https://www.auscert.org.au/bulletins/esb-2020.3631/" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2020.3364/" }, { "trust": 0.5, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20907" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2019-20907" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2020-8492" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2020-14422" }, { "trust": 0.5, "url": "https://www.redhat.com/mailman/listinfo/rhsa-announce" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2019-20454" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20916" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19221" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19906" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13050" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2019-16168" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2020-9327" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2020-13630" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2019-20387" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-5018" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20218" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2019-13050" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2019-14889" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2020-1730" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2019-19906" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2019-13627" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2019-19221" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2020-6405" }, { "trust": 0.4, "url": 
"https://access.redhat.com/security/cve/cve-2020-13631" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20387" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2019-5018" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2020-13632" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13627" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-14889" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2019-20218" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20454" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-16168" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2019-20916" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2020-1971" }, { "trust": 0.3, "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2019-15165" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2020-14382" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2020-1751" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2020-24659" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2020-1752" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-15165" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2020-10029" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17006" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2019-12749" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-14866" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2019-17023" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17023" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2020-6829" }, { "trust": 0.3, "url": 
"https://nvd.nist.gov/vuln/detail/cve-2017-12652" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2020-12403" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-11756" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2019-11756" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2020-12243" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2019-14973" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2019-17498" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-12749" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2019-17006" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2019-5094" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2019-20386" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2019-17546" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2020-12400" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2019-11727" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-11719" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-14973" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2020-12402" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-5188" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2017-12652" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2020-12401" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17546" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2019-11719" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2019-14866" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20386" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-5094" }, { "trust": 0.3, "url": 
"https://nvd.nist.gov/vuln/detail/cve-2019-11727" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2019-5188" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17498" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13631" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14422" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13630" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-7595" }, { "trust": 0.2, "url": "https://issues.jboss.org/):" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-9925" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-9802" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-9895" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-8625" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-8812" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-3899" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-8819" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-3867" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-8720" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-9893" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-8808" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-3902" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-3900" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8743" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-9805" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-8820" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-9807" }, { "trust": 0.2, "url": 
"https://access.redhat.com/security/cve/cve-2019-8769" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-8710" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-8813" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-9850" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8710" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-8811" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-9803" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-9862" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-3885" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-15503" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20807" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-10018" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-8835" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-8764" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-8844" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-3865" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-3864" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-14391" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-3862" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-3901" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-8823" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-3895" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-11793" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8720" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-9894" }, { "trust": 
0.2, "url": "https://access.redhat.com/security/cve/cve-2019-8816" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-9843" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-8771" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-3897" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-9806" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-8814" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-8743" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-9915" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-8815" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8625" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-8783" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-20807" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-8766" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-3868" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-8846" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-3894" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-8782" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-19126" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-11068" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-18197" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-5482" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-18197" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-12450" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-14822" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-14822" }, { 
"trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-5482" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-12450" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19126" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-11068" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-8177" }, { "trust": 0.2, "url": "https://access.redhat.com/articles/11258" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-12243" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-12400" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-5313" }, { "trust": 0.1, "url": "https://cwe.mitre.org/data/definitions/401.html" }, { "trust": 0.1, "url": "https://nvd.nist.gov" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2020:4479" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-20305" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13632" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/4.5/html/serverless_applications/index" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-1000858" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3115" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-9327" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3114" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-1000858" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:2021" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-8492" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-1730" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-6405" }, { 
"trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3449" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3450" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:0050" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8771" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-27831" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8769" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-27832" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8764" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8766" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2020:4264" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-2974" }, { "trust": 0.1, "url": "https://access.redhat.com/security/updates/classification/#low" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-2226" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-2780" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-2974" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-2752" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.3/release_notes/ocp-4-3-rel" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-2574" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-14352" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-2225" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-12825" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2017-18190" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8696" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-2181" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2020-2182" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.3/updating/updating-cluster" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8675" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2017-18190" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-24750" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-2224" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9283" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-2812" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-25211" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-10726" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-rel" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17450" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-10723" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-10725" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-10723" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-10725" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-10722" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-10722" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-10029" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-17450" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-10726" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-27813" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2020:5364" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2020:5633" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2019-1551" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-1551" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.6/updating/updating-cluster" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-28362" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.6/release_notes/ocp-4-6-rel" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:0436" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:1079" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-8625" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-12402" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-1971" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-15999" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-20228" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-12401" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3156" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3447" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-5313" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-20191" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-20180" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-12403" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15999" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-20178" }, { "trust": 0.1, "url": "https://usn.ubuntu.com/4274-1" }, { "trust": 0.1, "url": "https://launchpad.net/ubuntu/+source/libxml2/2.9.4+dfsg1-7ubuntu3.1" }, { "trust": 0.1, "url": "https://launchpad.net/ubuntu/+source/libxml2/2.9.3+dfsg1-1ubuntu0.7" }, { "trust": 0.1, "url": 
"https://launchpad.net/ubuntu/+source/libxml2/2.9.4+dfsg1-6.1ubuntu1.3" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2020:3996" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/7.9_release_notes/index" }, { "trust": 0.1, "url": "https://access.redhat.com/security/team/key/" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-1240" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-18874" }, { "trust": 0.1, "url": "https://docs.ansible.com/ansible-tower/latest/html/upgrade-migration-guide/" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2020:4254" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-18874" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-14365" } ], "sources": [ { "db": "VULMON", "id": "CVE-2019-19956" }, { "db": "PACKETSTORM", "id": "162694" }, { "db": "PACKETSTORM", "id": "160889" }, { "db": "PACKETSTORM", "id": "159661" }, { "db": "PACKETSTORM", "id": "161548" }, { "db": "PACKETSTORM", "id": "161429" }, { "db": "PACKETSTORM", "id": "162142" }, { "db": "PACKETSTORM", "id": "156276" }, { "db": "PACKETSTORM", "id": "159349" }, { "db": "PACKETSTORM", "id": "159552" }, { "db": "CNNVD", "id": "CNNVD-201912-1088" }, { "db": "NVD", "id": "CVE-2019-19956" } ] }, "sources": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "VULMON", "id": "CVE-2019-19956" }, { "db": "PACKETSTORM", "id": "162694" }, { "db": "PACKETSTORM", "id": "160889" }, { "db": "PACKETSTORM", "id": "159661" }, { "db": "PACKETSTORM", "id": "161548" }, { "db": "PACKETSTORM", "id": "161429" }, { "db": "PACKETSTORM", "id": "162142" }, { "db": "PACKETSTORM", "id": "156276" }, { "db": "PACKETSTORM", "id": "159349" }, { "db": "PACKETSTORM", "id": "159552" }, { "db": "CNNVD", "id": "CNNVD-201912-1088" }, { "db": "NVD", "id": "CVE-2019-19956" } ] }, 
"sources_release_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2019-12-24T00:00:00", "db": "VULMON", "id": "CVE-2019-19956" }, { "date": "2021-05-19T14:19:18", "db": "PACKETSTORM", "id": "162694" }, { "date": "2021-01-11T16:29:48", "db": "PACKETSTORM", "id": "160889" }, { "date": "2020-10-21T15:40:32", "db": "PACKETSTORM", "id": "159661" }, { "date": "2021-02-25T15:30:03", "db": "PACKETSTORM", "id": "161548" }, { "date": "2021-02-16T15:44:48", "db": "PACKETSTORM", "id": "161429" }, { "date": "2021-04-09T15:06:13", "db": "PACKETSTORM", "id": "162142" }, { "date": "2020-02-10T15:35:17", "db": "PACKETSTORM", "id": "156276" }, { "date": "2020-09-30T15:43:22", "db": "PACKETSTORM", "id": "159349" }, { "date": "2020-10-14T16:52:12", "db": "PACKETSTORM", "id": "159552" }, { "date": "2019-12-24T00:00:00", "db": "CNNVD", "id": "CNNVD-201912-1088" }, { "date": "2019-12-24T16:15:00", "db": "NVD", "id": "CVE-2019-19956" } ] }, "sources_update_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2021-07-21T00:00:00", "db": "VULMON", "id": "CVE-2019-19956" }, { "date": "2023-06-30T00:00:00", "db": "CNNVD", "id": "CNNVD-201912-1088" }, { "date": "2021-07-21T11:39:00", "db": "NVD", "id": "CVE-2019-19956" } ] }, "threat_type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/threat_type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "remote", "sources": [ { "db": "CNNVD", "id": "CNNVD-201912-1088" } ], "trust": 0.6 }, "title": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/title#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "libxml2 Security hole", "sources": [ { "db": "CNNVD", "id": "CNNVD-201912-1088" } ], "trust": 0.6 }, "type": { 
"@context": { "@vocab": "https://www.variotdbs.pl/ref/type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "other", "sources": [ { "db": "CNNVD", "id": "CNNVD-201912-1088" } ], "trust": 0.6 } }
var-202208-0404
Vulnerability from variot
zlib through 1.2.12 has a heap-based buffer over-read or buffer overflow in inflate in inflate.c via a large gzip header extra field. NOTE: only applications that call inflateGetHeader are affected. Some common applications bundle the affected zlib source code but may be unable to call inflateGetHeader (e.g., see the nodejs/node reference). See the following Release Notes documentation, which will be updated shortly for this release, for details about these changes:
https://docs.openshift.com/container-platform/4.11/release_notes/ocp-4-11-release-notes.html
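To make the attack surface concrete: the gzip "extra field" named above is the optional FEXTRA block defined by RFC 1952, which `inflateGetHeader` exposes to callers. The following is a minimal illustrative Python sketch (not the vulnerable C path itself) that builds a gzip member carrying such an extra field and confirms a compliant decoder accepts it:

```python
import gzip
import struct
import zlib

payload = b"hello"
# Raw DEFLATE body: strip the 2-byte zlib header and 4-byte Adler-32 trailer.
deflated = zlib.compress(payload, 9)[2:-4]

# RFC 1952 FEXTRA subfield: 2-byte subfield id, 2-byte LE length, then data.
extra = b"AB" + struct.pack("<H", 4) + b"data"

header = (
    b"\x1f\x8b"                      # gzip magic
    + b"\x08"                        # CM: deflate
    + b"\x04"                        # FLG: FEXTRA bit set
    + b"\x00\x00\x00\x00"            # MTIME
    + b"\x00\x03"                    # XFL, OS (Unix)
    + struct.pack("<H", len(extra))  # XLEN
    + extra
)
trailer = struct.pack("<II", zlib.crc32(payload), len(payload))
blob = header + deflated + trailer

# Decoders that do not request header details simply skip the extra field;
# C callers of inflateGetHeader() instead receive it into a caller buffer,
# which is where the CVE-2022-37434 over-read occurred.
print(gzip.decompress(blob))
```

An attacker-controlled stream can make XLEN and the subfield data arbitrarily large, which is why only applications that ask zlib to surface the header were affected.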
Security Fix(es):
- github.com/Masterminds/vcs: Command Injection via argument injection (CVE-2022-21235)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section. To check for available updates, use the OpenShift CLI (oc) or web console. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.11/updating/updating-cluster-cli.html
- Solution:
For OpenShift Container Platform 4.11 see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this asynchronous errata update:
https://docs.openshift.com/container-platform/4.11/release_notes/ocp-4-11-release-notes.html
You may download the oc tool and use it to inspect release image metadata for x86_64, s390x, ppc64le, and aarch64 architectures. The image digests may be found at https://quay.io/repository/openshift-release-dev/ocp-release?tab=tags.
The SHA-256 digest values for the release are:
(For x86_64 architecture) The image digest is sha256:c6771b12bd873c0e3e5fbc7afa600d92079de6534dcb52f09cb1d22ee49608a9
(For s390x architecture) The image digest is sha256:622b5361f95d1d512ea84f363ac06155cbb9ee28e85ccaae1acd80b98b660fa8
(For ppc64le architecture) The image digest is sha256:50c131cf85dfb00f258af350a46b85eff8fb8084d3e1617520cd69b59caeaff7
(For aarch64 architecture) The image digest is sha256:9e575c4ece9caaf31acbef246ccad71959cd5bf634e7cb284b0849ddfa205ad7
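Image digests like those above are content addresses: the SHA-256 hash of the image manifest bytes, so pulling by digest pins an exact, immutable release image. A small illustrative sketch of how such a digest string is formed (the manifest bytes here are a placeholder, not a real OpenShift release manifest):

```python
import hashlib

# Placeholder bytes standing in for an OCI image manifest.
manifest = b'{"schemaVersion": 2}'

# An image digest is "sha256:" plus the hex SHA-256 of the manifest bytes;
# registries resolve repo@sha256:<hex> to exactly that content.
digest = "sha256:" + hashlib.sha256(manifest).hexdigest()
print(digest)
```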
All OpenShift Container Platform 4.11 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift CLI (oc) or web console. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.11/updating/updating-cluster-cli.html
- Bugs fixed (https://bugzilla.redhat.com/):
2215317 - CVE-2022-21235 github.com/Masterminds/vcs: Command Injection via argument injection
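The class of bug behind CVE-2022-21235, argument injection, arises when an attacker-controlled string lands where a CLI parses options (e.g., a repository URL beginning with `-` being read by git as a flag such as `--upload-pack=<cmd>`). A hedged sketch of the standard mitigation, the `--` end-of-options marker; the helper name is hypothetical, not the library's API:

```python
def build_clone_cmd(repo: str, dest: str) -> list:
    # Without "--", a repo string starting with "-" would be parsed by git
    # as an option; "--" ends option parsing, so it stays a positional arg.
    return ["git", "clone", "--", repo, dest]

# The hostile "URL" is rendered inert as a positional argument.
cmd = build_clone_cmd("--upload-pack=touch pwned", "dest")
print(cmd)
```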
- JIRA issues fixed (https://issues.redhat.com/):
OCPBUGS-15446 - (release-4.11) gather "gateway-mode-config" config map from "openshift-network-operator" namespace
OCPBUGS-15532 - visiting Configurations page returns error Cannot read properties of undefined (reading 'apiGroup')
OCPBUGS-15645 - Can't use git lfs in BuildConfig git source with strategy Docker
OCPBUGS-15739 - Environment cannot find Python
OCPBUGS-15758 - [release-4.11] Bump Jenkins and Jenkins Agent Base image versions
OCPBUGS-15942 - 9% of OKD tests failing on error: tag latest failed: Internal error occurred: registry.centos.org/dotnet/dotnet-31-centos7:latest: Get "https://registry.centos.org/v2/": dial tcp: lookup registry.centos.org on 172.30.0.10:53: no such host
OCPBUGS-15966 - [4.12] MetalLB contains incorrect data Correct and incorrect MetalLB resources coexist should have correct statuses
===================================================================== Red Hat Security Advisory
Synopsis: Important: Red Hat OpenShift Data Foundation 4.13.0 security and bug fix update Advisory ID: RHSA-2023:3742-02 Product: Red Hat OpenShift Data Foundation Advisory URL: https://access.redhat.com/errata/RHSA-2023:3742 Issue date: 2023-06-21 CVE Names: CVE-2015-20107 CVE-2018-25032 CVE-2020-10735 CVE-2020-16250 CVE-2020-16251 CVE-2020-17049 CVE-2021-3765 CVE-2021-3807 CVE-2021-4231 CVE-2021-4235 CVE-2021-4238 CVE-2021-28861 CVE-2021-43519 CVE-2021-43998 CVE-2021-44531 CVE-2021-44532 CVE-2021-44533 CVE-2021-44964 CVE-2021-46828 CVE-2021-46848 CVE-2022-0670 CVE-2022-1271 CVE-2022-1304 CVE-2022-1348 CVE-2022-1586 CVE-2022-1587 CVE-2022-2309 CVE-2022-2509 CVE-2022-2795 CVE-2022-2879 CVE-2022-2880 CVE-2022-3094 CVE-2022-3358 CVE-2022-3515 CVE-2022-3517 CVE-2022-3715 CVE-2022-3736 CVE-2022-3821 CVE-2022-3924 CVE-2022-4415 CVE-2022-21824 CVE-2022-23540 CVE-2022-23541 CVE-2022-24903 CVE-2022-26280 CVE-2022-27664 CVE-2022-28805 CVE-2022-29154 CVE-2022-30635 CVE-2022-31129 CVE-2022-32189 CVE-2022-32190 CVE-2022-33099 CVE-2022-34903 CVE-2022-35737 CVE-2022-36227 CVE-2022-37434 CVE-2022-38149 CVE-2022-38900 CVE-2022-40023 CVE-2022-40303 CVE-2022-40304 CVE-2022-40897 CVE-2022-41316 CVE-2022-41715 CVE-2022-41717 CVE-2022-41723 CVE-2022-41724 CVE-2022-41725 CVE-2022-42010 CVE-2022-42011 CVE-2022-42012 CVE-2022-42898 CVE-2022-42919 CVE-2022-43680 CVE-2022-45061 CVE-2022-45873 CVE-2022-46175 CVE-2022-47024 CVE-2022-47629 CVE-2022-48303 CVE-2022-48337 CVE-2022-48338 CVE-2022-48339 CVE-2023-0361 CVE-2023-0620 CVE-2023-0665 CVE-2023-2491 CVE-2023-22809 CVE-2023-24329 CVE-2023-24999 CVE-2023-25000 CVE-2023-25136 =====================================================================
- Summary:
Updated images that include numerous enhancements, security, and bug fixes are now available in Red Hat Container Registry for Red Hat OpenShift Data Foundation 4.13.0 on Red Hat Enterprise Linux 9.
Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Description:
Red Hat OpenShift Data Foundation is software-defined storage integrated with and optimized for the Red Hat OpenShift Container Platform. Red Hat OpenShift Data Foundation is a highly scalable, production-grade persistent storage for stateful applications running in the Red Hat OpenShift Container Platform. In addition to persistent storage, Red Hat OpenShift Data Foundation provisions a multicloud data management service with an S3 compatible API.
Security Fix(es):
- goutils: RandomAlphaNumeric and CryptoRandomAlphaNumeric are not as random as they should be (CVE-2021-4238)
- decode-uri-component: improper input validation resulting in DoS (CVE-2022-38900)
- vault: Hashicorp Vault AWS IAM Integration Authentication Bypass (CVE-2020-16250)
- vault: GCP Auth Method Allows Authentication Bypass (CVE-2020-16251)
- nodejs-ansi-regex: Regular expression denial of service (ReDoS) matching ANSI escape codes (CVE-2021-3807)
- go-yaml: Denial of Service in go-yaml (CVE-2021-4235)
- vault: incorrect policy enforcement (CVE-2021-43998)
- nodejs: Improper handling of URI Subject Alternative Names (CVE-2021-44531)
- nodejs: Certificate Verification Bypass via String Injection (CVE-2021-44532)
- nodejs: Incorrect handling of certificate subject and issuer fields (CVE-2021-44533)
- golang: archive/tar: unbounded memory consumption when reading headers (CVE-2022-2879)
- golang: net/http/httputil: ReverseProxy should not forward unparseable query parameters (CVE-2022-2880)
- nodejs-minimatch: ReDoS via the braceExpand function (CVE-2022-3517)
- jsonwebtoken: Insecure default algorithm in jwt.verify() could lead to signature validation bypass (CVE-2022-23540)
- jsonwebtoken: Insecure implementation of key retrieval function could lead to Forgeable Public/Private Tokens from RSA to HMAC (CVE-2022-23541)
- golang: net/http: handle server errors after sending GOAWAY (CVE-2022-27664)
- golang: encoding/gob: stack exhaustion in Decoder.Decode (CVE-2022-30635)
- golang: net/url: JoinPath does not strip relative path components in all circumstances (CVE-2022-32190)
- consul: Consul Template May Expose Vault Secrets When Processing Invalid Input (CVE-2022-38149)
- vault: insufficient certificate revocation list checking (CVE-2022-41316)
- golang: regexp/syntax: limit memory used by parsing regexps (CVE-2022-41715)
- golang: net/http: excessive memory growth in a Go server accepting HTTP/2 requests (CVE-2022-41717)
- net/http, golang.org/x/net/http2: avoid quadratic complexity in HPACK decoding (CVE-2022-41723)
- golang: crypto/tls: large handshake records may cause panics (CVE-2022-41724)
- golang: net/http, mime/multipart: denial of service from excessive resource consumption (CVE-2022-41725)
- json5: Prototype Pollution in JSON5 via Parse Method (CVE-2022-46175)
- vault: Vault’s Microsoft SQL Database Storage Backend Vulnerable to SQL Injection Via Configuration File (CVE-2023-0620)
- hashicorp/vault: Vault’s PKI Issuer Endpoint Did Not Correctly Authorize Access to Issuer Metadata (CVE-2023-0665)
- Hashicorp/vault: Vault Fails to Verify if Approle SecretID Belongs to Role During a Destroy Operation (CVE-2023-24999)
- hashicorp/vault: Cache-Timing Attacks During Seal and Unseal Operations (CVE-2023-25000)
- validator: Inefficient Regular Expression Complexity in Validator.js (CVE-2021-3765)
- nodejs: Prototype pollution via console.table properties (CVE-2022-21824)
- golang: math/big: decoding big.Float and big.Rat types can panic if the encoded message is too short, potentially allowing a denial of service (CVE-2022-32189)
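Among the flaws above, the jsonwebtoken issues (CVE-2022-23540, CVE-2022-23541) belong to the algorithm-confusion class: the verifier let the token choose its own `alg`. A minimal stdlib-only Python sketch (helper names are hypothetical, not the jsonwebtoken API) showing the standard fix of pinning the accepted algorithms server-side:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def b64url_decode(s: str) -> bytes:
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def sign_hs256(payload: dict, secret: bytes) -> str:
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = (
        f"{b64url(json.dumps(header).encode())}.{b64url(json.dumps(payload).encode())}"
    )
    sig = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    return f"{signing_input}.{b64url(sig)}"

def verify(token: str, secret: bytes, allowed_algs: frozenset) -> bool:
    head_b64, payload_b64, sig_b64 = token.split(".")
    header = json.loads(b64url_decode(head_b64))
    # Pin the algorithm server-side; never trust the token's own "alg".
    if header.get("alg") not in allowed_algs:
        return False
    sig = hmac.new(secret, f"{head_b64}.{payload_b64}".encode(), hashlib.sha256).digest()
    return hmac.compare_digest(b64url(sig), sig_b64)

secret = b"demo-secret"
token = sign_hs256({"sub": "alice"}, secret)
print(verify(token, secret, frozenset({"HS256"})))   # genuine token accepted

# A forged header claiming alg "none" is rejected before any signature check.
forged = b64url(json.dumps({"alg": "none"}).encode()) + "." + token.split(".", 1)[1]
print(verify(forged, secret, frozenset({"HS256"})))
```

The same pinning discipline defeats RSA-to-HMAC downgrades, where an attacker re-signs a token with HS256 using the server's public key as the HMAC secret.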
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
- Solution:
These updated images include numerous enhancements and bug fixes. Space precludes documenting all of these changes in this advisory. Users are directed to the Red Hat OpenShift Data Foundation Release Notes for information on the most significant of these changes:
https://access.redhat.com/documentation/en-us/red_hat_openshift_data_foundation/4.13/html/4.13_release_notes/index
All Red Hat OpenShift Data Foundation users are advised to upgrade to these updated images that provide numerous bug fixes and enhancements.
- Bugs fixed (https://bugzilla.redhat.com/):
1786696 - UI->Dashboards->Overview->Alerts shows MON components are at different versions, though they are NOT 1855339 - Wrong version of ocs-storagecluster 1943137 - [Tracker for BZ #1945618] rbd: Storage is not reclaimed after persistentvolumeclaim and job that utilized it are deleted 1944687 - [RFE] KMS server connection lost alert 1989088 - [4.8][Multus] UX experience issues and enhancements 2005040 - Uninstallation of ODF StorageSystem via OCP Console fails, gets stuck in Terminating state 2005830 - [DR] DRPolicy resource should not be editable after creation 2007557 - CVE-2021-3807 nodejs-ansi-regex: Regular expression denial of service (ReDoS) matching ANSI escape codes 2028193 - CVE-2021-43998 vault: incorrect policy enforcement 2040839 - CVE-2021-44531 nodejs: Improper handling of URI Subject Alternative Names 2040846 - CVE-2021-44532 nodejs: Certificate Verification Bypass via String Injection 2040856 - CVE-2021-44533 nodejs: Incorrect handling of certificate subject and issuer fields 2040862 - CVE-2022-21824 nodejs: Prototype pollution via console.table properties 2042914 - [Tracker for BZ #2013109] [UI] Refreshing web console from the pop-up is taking to Install Operator page. 
2052252 - CVE-2021-44531 CVE-2021-44532 CVE-2021-44533 CVE-2022-21824 [CVE] nodejs: various flaws [openshift-data-foundation-4] 2101497 - ceph_mon_metadata metrics are not collected properly 2101916 - must-gather is not collecting ceph logs or coredumps 2102304 - [GSS] Remove the entry of removed node from Storagecluster under Node Topology 2104148 - route ocs-storagecluster-cephobjectstore misconfigured to use http and https on same http route in haproxy.config 2107388 - CVE-2022-30635 golang: encoding/gob: stack exhaustion in Decoder.Decode 2113814 - CVE-2022-32189 golang: math/big: decoding big.Float and big.Rat types can panic if the encoded message is too short, potentially allowing a denial of service 2115020 - [RDR] Sync schedule is not removed from mirrorpeer yaml after DR Policy is deleted 2115616 - [GSS] failing to change ownership of the NFS based PVC for PostgreSQL pod by using kube_pv_chown utility 2119551 - CVE-2022-38149 consul: Consul Template May Expose Vault Secrets When Processing Invalid Input 2120098 - [RDR] Even before an action gets fully completed, PeerReady and Available are reported as True in the DRPC yaml 2120944 - Large Omap objects found in pool 'ocs-storagecluster-cephfilesystem-metadata' 2124668 - CVE-2022-32190 golang: net/url: JoinPath does not strip relative path components in all circumstances 2124669 - CVE-2022-27664 golang: net/http: handle server errors after sending GOAWAY 2126299 - CVE-2021-3765 validator: Inefficient Regular Expression Complexity in Validator.js 2132867 - CVE-2022-2879 golang: archive/tar: unbounded memory consumption when reading headers 2132868 - CVE-2022-2880 golang: net/http/httputil: ReverseProxy should not forward unparseable query parameters 2132872 - CVE-2022-41715 golang: regexp/syntax: limit memory used by parsing regexps 2134609 - CVE-2022-3517 nodejs-minimatch: ReDoS via the braceExpand function 2135339 - CVE-2022-41316 vault: insufficient certificate revocation list checking 2139037 - 
[cee/sd]Unable to access s3 via RGW route ocs-storagecluster-cephobjectstore 2141095 - [RDR] Storage System page on ACM Hub is visible even when data observability is not enabled 2142651 - RFE: OSDs need ability to bind to a service IP instead of the pod IP to support RBD mirroring in OCP clusters 2142894 - Credentials are ignored when creating a Backing/Namespace store after prompted to enter a name for the resource 2142941 - RGW cloud Transition. HEAD/GET requests to MCG are failing with 403 error 2143944 - [GSS] unknown parameter name "FORCE_OSD_REMOVAL" 2144256 - [RDR] [UI] DR Application applied to a single DRPolicy starts showing connected to multiple policies due to console flickering 2151903 - [MCG] Azure bs/ns creation fails with target bucket does not exists 2152143 - [Noobaa Clone] Secrets are used in env variables 2154250 - NooBaa Bucket Quota alerts are not working 2155507 - RBD reclaimspace job fails when the PVC is not mounted 2155743 - ODF Dashboard fails to load 2156067 - [RDR] [UI] When Peer Ready isn't True, UI doesn't reset the error message even when no subscription group is selected 2156069 - [UI] Instances of OCS can be seen on BlockPool action modals 2156263 - CVE-2022-46175 json5: Prototype Pollution in JSON5 via Parse Method 2156519 - 4.13: odf-csi-addons-operator failed with OwnNamespace InstallModeType not supported 2156727 - CVE-2021-4235 go-yaml: Denial of Service in go-yaml 2156729 - CVE-2021-4238 goutils: RandomAlphaNumeric and CryptoRandomAlphaNumeric are not as random as they should be 2157876 - [OCP Tracker] [UI] When OCP and ODF are upgraded, refresh web console pop-up doesn't appear after ODF upgrade resulting in dashboard crash 2158922 - Namespace store fails to get created via the ODF UI 2159676 - rbd-mirror logs are rotated very frequently, increase the default maxlogsize for rbd-mirror 2161274 - CVE-2022-41717 golang: net/http: excessive memory growth in a Go server accepting HTTP/2 requests 2161879 - logging issue when 
deleting webhook resources 2161937 - collect kernel and journal logs from all worker nodes 2162257 - [RDR][CEPHFS] sync/replication is getting stopped for some pvc 2164617 - Unable to expand ocs-storagecluster-ceph-rbd PVCs provisioned in Filesystem mode 2165495 - Placement scheduler is using too much resources 2165504 - Sizer sharing link is broken 2165929 - [RFE] ODF bluewash introduction in 4.12.x 2165938 - ocs-operator CSV is missing disconnected env annotation. 2165984 - [RDR] Replication stopped for images is represented with incorrect color 2166222 - CSV is missing disconnected env annotation and relatedImages spec 2166234 - Application user unable to invoke Failover and Relocate actions 2166869 - Match the version of consoleplugin to odf operator 2167299 - [RFE] ODF bluewash introduction in 4.12.x 2167308 - [mcg-clone] Security and VA issues with ODF operator 2167337 - CVE-2020-16250 vault: Hashicorp Vault AWS IAM Integration Authentication Bypass 2167340 - CVE-2020-16251 vault: GCP Auth Method Allows Authentication Bypass 2167946 - CSV is missing disconnected env annotation and relatedImages spec 2168113 - [Ceph Tracker BZ #2141110] [cee/sd][Bluestore] Newly deployed bluestore OSD's showing high fragmentation score 2168635 - fix redirect link to operator details page (OCS dashboard) 2168840 - [Fusion-aaS][ODF 4.13]Within 'prometheus-ceph-rules' the namespace for 'rook-ceph-mgr' jobs should be configurable. 
2168849 - Must-gather doesn't collect coredump logs crucial for OSD crash events 2169375 - CVE-2022-23541 jsonwebtoken: Insecure implementation of key retrieval function could lead to Forgeable Public/Private Tokens from RSA to HMAC 2169378 - CVE-2022-23540 jsonwebtoken: Insecure default algorithm in jwt.verify() could lead to signature validation bypass 2169779 - [vSphere]: rook-ceph-mon- pvc are in pending state 2170644 - CVE-2022-38900 decode-uri-component: improper input validation resulting in DoS 2170673 - [RDR] Different replication states of PVC images aren't correctly distinguished and representated on UI 2172089 - [Tracker for Ceph BZ 2174461] rook-ceph-nfs pod is stuck at status 'CreateContainerError' after enabling NFS in ODF 4.13 2172365 - [csi-addons] odf-csi-addons-operator oomkilled with fresh installation 4.12 2172521 - No OSD pods are created for 4.13 LSO deployment 2173161 - ODF-console can not start when you disable IPv6 on Node with kernel parameter. 2173528 - Creation of OCS operator tag automatically for verified commits 2173534 - When on StorageSystem details click on History back btn it shows blank body 2173926 - [RFE] Include changes in MCG for new Ceph RGW transition headers 2175612 - noobaa-core-0 crashing and storagecluster not getting to ready state during ODF deployment with FIPS enabled in 4.13cluster 2175685 - RGW OBC creation via the UI is blocked by "Address form errors to proceed" error 2175714 - UI fix- capitalization 2175867 - Rook sets cephfs kernel mount options even when mon is using v1 port 2176080 - odf must-gather should collect output of oc get hpa -n openshift-storage 2176456 - [RDR] ramen-hub-operator and ramen-dr-cluster-operator is going into CLBO post deployment 2176739 - [UI] CSI Addons operator icon is broken 2176776 - Enable save options only when the protected apps has labels for manage DRPolicy 2176798 - [IBM Z ] Multi Cluster Orchestrator operator is not available in the Operator Hub 2176809 - [IBM Z ] DR 
operator is not available in the Operator Hub 2177134 - Next button if disabled for storage system deployment flow for IBM Ceph Storage security and network step when there is no OCS installed already 2177221 - Enable DR dashboard only when ACM observability is enabled 2177325 - Noobaa-db pod is taking longer time to start up in ODF 4.13 2177695 - DR dashbaord showing incorrect RPO data 2177844 - CVE-2023-24999 Hashicorp/vault: Vault Fails to Verify if Approle SecretID Belongs to Role During a Destroy Operation 2178033 - node topology warnings tab doesn't show pod warnings 2178358 - CVE-2022-41723 net/http, golang.org/x/net/http2: avoid quadratic complexity in HPACK decoding 2178488 - CVE-2022-41725 golang: net/http, mime/multipart: denial of service from excessive resource consumption 2178492 - CVE-2022-41724 golang: crypto/tls: large handshake records may cause panics 2178588 - No rack names on ODF Topology 2178619 - odf-operator failing to resolve its sub-dependencies leaving the ocs-consumer/provider addon in a failed and halted state 2178682 - [GSS] Add the valid AWS GovCloud regions in OCS UI. 2179133 - [UI] A blank page appears while selecting Storage Pool for creating Encrypted Storage Class 2179337 - Invalid storage system href link on the ODF multicluster dashboard 2179403 - (4.13) Mons are failing to start when msgr2 is required with RHCS 6.1 2179846 - [IBM Z] In RHCS external mode Cephobjectstore creation fails as it reports that the "object store name cannot be longer than 38 characters" 2179860 - [MCG] Bucket replication with deletion sync isn't complete 2179976 - [ODF 4.13] Missing the status-reporter binary causing pods "report-status-to-provider" remain in CreateContainerError on ODF to ODF cluster on ROSA 2179981 - ODF Topology search bar mistakes to find searched node/pod 2179997 - Topology. 
Exit full screen does not appear in Full screen mode 2180211 - StorageCluster stuck in progressing state for Thales KMS deployment 2180397 - Last sync time is missing on application set's disaster recovery status popover 2180440 - odf-monitoring-tool. YAML file misjudged as corrupted 2180921 - Deployment with external cluster in ODF 4.13 with unable to use cephfs as backing store for image_registry 2181112 - [RDR] [UI] Hide disable DR functionality as it would be un-tested in 4.13 2181133 - CI: backport E2E job improvements 2181446 - [KMS][UI] PVC provisioning failed in case of vault kubernetes authentication is configured. 2181535 - [GSS] Object storage in degraded state 2181551 - Build: move to 'dependencies' the ones required for running a build 2181832 - Create OBC via UI, placeholder on StorageClass dropped 2181949 - [ODF Tracker] [RFE] Catch MDS damage to the dentry's first snapid 2182041 - OCS-Operator expects NooBaa CRDs to be present on the cluster when installed directly without ODF Operator 2182296 - [Fusion-aaS][ODF 4.13]must-gather does not collect relevant logs when storage cluster is not in openshift-storage namespace 2182375 - [MDR] Not able to fence DR clusters 2182644 - [IBM Z] MDR policy creation fails unless the ocs-operator pod is restarted on the managed clusters 2182664 - Topology view should hide the sidebar when changing levels 2182703 - [RDR] After upgrading from 4.12.2 to 4.13.0 version.odf.openshift.io cr is not getting updated with latest ODF version 2182972 - CVE-2023-25000 hashicorp/vault: Cache-Timing Attacks During Seal and Unseal Operations 2182981 - CVE-2023-0665 hashicorp/vault: Vault?s PKI Issuer Endpoint Did Not Correctly Authorize Access to Issuer Metadata 2183155 - failed to mount the the cephfs subvolume as subvolumegroup name is not sent in the GetStorageConfig RPC call 2183196 - [Fusion-aaS] Collect Must-gather logs from the managed-fusion agent namesapce 2183266 - [Fusion aaS Rook ODF 4.13]] Rook-ceph-operator pod should 
allow OBC CRDs to be optional instead of causing a crash when not present 2183457 - [RDR] when running any ceph cmd we see error 2023-03-31T08:25:31.844+0000 7f8deaffd640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [2,1] 2183478 - [MDR][UI] Cannot relocate subscription based apps, Appset based apps are possible to relocate 2183520 - [Fusion-aaS] csi-cephfs-plugin pods are not created after installing ocs-client-operator 2184068 - [Fusion-aaS] Failed to mount CephFS volumes while creating pods 2184605 - [ODF 4.13][Fusion-aaS] OpenShift Data Foundation Client operator is listed in OperatorHub and installable from UI 2184663 - CVE-2023-0620 vault: Vault?s Microsoft SQL Database Storage Backend Vulnerable to SQL Injection Via Configuration File 2184769 - {Fusion-aaS][ODF 4.13]Remove storageclassclaim cr and create new cr storageclass request cr 2184773 - multicluster-orchestrator should not reset spec.network.multiClusterService.Enabled field added by user 2184892 - Don't pass encryption options to ceph cluster in odf external mode to provider/consumer cluster 2184984 - Topology Sidebar alerts panel: alerts accordion does not toggle when clicking on alert severity text 2185164 - [KMS][VAULT] PVC provisioning is failing when the Vault (HCP) Kubernetes authentication is set. 
2185188 - Fix storagecluster watch request for OCSInitialization 2185757 - add NFS dashboard 2185871 - [MDR][ACM-Tracker] Deleting an Appset based application does not delete its placement 2186171 - [GSS] "disableLoadBalancerService: true" config is reconciled after modifying the number of NooBaa endpoints 2186225 - [RDR] when running any ceph cmd we see error 2023-03-31T08:25:31.844+0000 7f8deaffd640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [2,1] 2186475 - handle different network connection spec & Pass appropriate options for all the cases of Network Spec 2186752 - [translations] add translations for 4.13 2187251 - sync ocs and odf with the latest rook 2187296 - [MCG] Can't opt out of deletions sync once log-based replication with deletions sync is set 2187736 - [RDR] Replication history graph is showing incorrect value 2187952 - When cluster controller is cancelled frequently, multiple simultaneous controllers cause issues since need to wait for shutdown before continuing new controller 2187969 - [ODFMS-Migration ] [OCS Client Operator] csi-rbdplugin stuck in ImagePullBackOff on consumer clusters after Migration 2187986 - [MDR] ramen-dr-cluster-operator pod is in CLBO after assigning dr policy to an appset based app 2188053 - ocs-metrics-exporter cannot list/watch StorageCluster, StorageClass, CephBlockPool and other resources 2188238 - [RDR] Avoid using the terminologies "SLA" in DR dashbaord 2188303 - [RDR] Maintenance mode is not enabled after initiating failover action 2188427 - [External mode upgrade]: Upgrade from 4.12 -> 4.13 external mode is failing because rook-ceph-operator is not reaching clean state 2188666 - wrong label in new storageclassrequest cr 2189483 - After upgrade noobaa-db-pg-0 pod using old image in one of container 2189929 - [RDR/MDR] [UI] Dashboard fon size are very uneven 2189982 - [RDR] ocs_rbd_client_blocklisted datapoints and the corresponding alert is not getting generated 2189984 
- [KMS][VAULT] Storage cluster remains in 'Progressing' state during deployment with storage class encryption, despite all pods being up and running. 2190129 - OCS Provider Server logs are incorrect 2190241 - nfs metric details are unavailable and server health is displaying as "Degraded" under Network file system tab in UI 2192088 - [IBM P] rbd_default_map_options value not set to ms_mode=secure in in-transit encryption enabled ODF cluster 2192670 - Details tab for nodes inside Topology throws "Something went wrong" on IBM Power platform 2192824 - [4.13] Fix Multisite in external cluster 2192875 - Enable ceph-exporter in rook 2193114 - MCG replication is failing due to OC binary incompatible on Power platform 2193220 - [Stretch cluster] CephCluster is updated frequently due to changing ordering of zones 2196176 - MULTUS UI, There is no option to change the multus configuration after we configure the params 2196236 - [RDR] With ACM 2.8 User is not able to apply Drpolicy to subscription workload 2196298 - [RDR] DRPolicy doesn't show connected application when subscription based workloads are deployed via CLI 2203795 - ODF Monitoring is missing some of the ceph_ metric values 2208029 - nfs server health is always displaying as "Degraded" under Network file system tab in UI. 2208079 - rbd mirror daemon is commonly not upgraded 2208269 - [RHCS Tracker] After add capacity the rebalance does not complete, and we see 2 PGs in active+clean+scrubbing and 1 active+clean+scrubbing+deep 2208558 - [MDR] ramen-dr-cluster-operator pod crashes during failover 2208962 - [UI] ODF Topology. 
Degraded cluster don't show red canvas on cluster level 2209364 - ODF dashboard crashes when OCP and ODF are upgraded 2209643 - Multus, Cephobjectstore stuck on Progressing state because " failed to create or retrieve rgw admin ops user" 2209695 - When collecting Must-gather logs shows /usr/bin/gather_ceph_resources: line 341: jq: command not found 2210964 - [UI][MDR] After hub recovery in overview tab of data policies Application set apps count is not showing 2211334 - The replication history graph is very unclear 2211343 - [MCG-Only]: upgrade failed from 4.12 to 4.13 due to missing CSI_ENABLE_READ_AFFINITY in ConfigMap openshift-storage/ocs-operator-config 2211704 - Multipart uploads fail to a Azure namespace bucket when user MD is sent as part of the upload
- References:
https://access.redhat.com/security/cve/CVE-2015-20107 https://access.redhat.com/security/cve/CVE-2018-25032 https://access.redhat.com/security/cve/CVE-2020-10735 https://access.redhat.com/security/cve/CVE-2020-16250 https://access.redhat.com/security/cve/CVE-2020-16251 https://access.redhat.com/security/cve/CVE-2020-17049 https://access.redhat.com/security/cve/CVE-2021-3765 https://access.redhat.com/security/cve/CVE-2021-3807 https://access.redhat.com/security/cve/CVE-2021-4231 https://access.redhat.com/security/cve/CVE-2021-4235 https://access.redhat.com/security/cve/CVE-2021-4238 https://access.redhat.com/security/cve/CVE-2021-28861 https://access.redhat.com/security/cve/CVE-2021-43519 https://access.redhat.com/security/cve/CVE-2021-43998 https://access.redhat.com/security/cve/CVE-2021-44531 https://access.redhat.com/security/cve/CVE-2021-44532 https://access.redhat.com/security/cve/CVE-2021-44533 https://access.redhat.com/security/cve/CVE-2021-44964 https://access.redhat.com/security/cve/CVE-2021-46828 https://access.redhat.com/security/cve/CVE-2021-46848 https://access.redhat.com/security/cve/CVE-2022-0670 https://access.redhat.com/security/cve/CVE-2022-1271 https://access.redhat.com/security/cve/CVE-2022-1304 https://access.redhat.com/security/cve/CVE-2022-1348 https://access.redhat.com/security/cve/CVE-2022-1586 https://access.redhat.com/security/cve/CVE-2022-1587 https://access.redhat.com/security/cve/CVE-2022-2309 https://access.redhat.com/security/cve/CVE-2022-2509 https://access.redhat.com/security/cve/CVE-2022-2795 https://access.redhat.com/security/cve/CVE-2022-2879 https://access.redhat.com/security/cve/CVE-2022-2880 https://access.redhat.com/security/cve/CVE-2022-3094 https://access.redhat.com/security/cve/CVE-2022-3358 https://access.redhat.com/security/cve/CVE-2022-3515 https://access.redhat.com/security/cve/CVE-2022-3517 https://access.redhat.com/security/cve/CVE-2022-3715 https://access.redhat.com/security/cve/CVE-2022-3736 
https://access.redhat.com/security/cve/CVE-2022-3821 https://access.redhat.com/security/cve/CVE-2022-3924 https://access.redhat.com/security/cve/CVE-2022-4415 https://access.redhat.com/security/cve/CVE-2022-21824 https://access.redhat.com/security/cve/CVE-2022-23540 https://access.redhat.com/security/cve/CVE-2022-23541 https://access.redhat.com/security/cve/CVE-2022-24903 https://access.redhat.com/security/cve/CVE-2022-26280 https://access.redhat.com/security/cve/CVE-2022-27664 https://access.redhat.com/security/cve/CVE-2022-28805 https://access.redhat.com/security/cve/CVE-2022-29154 https://access.redhat.com/security/cve/CVE-2022-30635 https://access.redhat.com/security/cve/CVE-2022-31129 https://access.redhat.com/security/cve/CVE-2022-32189 https://access.redhat.com/security/cve/CVE-2022-32190 https://access.redhat.com/security/cve/CVE-2022-33099 https://access.redhat.com/security/cve/CVE-2022-34903 https://access.redhat.com/security/cve/CVE-2022-35737 https://access.redhat.com/security/cve/CVE-2022-36227 https://access.redhat.com/security/cve/CVE-2022-37434 https://access.redhat.com/security/cve/CVE-2022-38149 https://access.redhat.com/security/cve/CVE-2022-38900 https://access.redhat.com/security/cve/CVE-2022-40023 https://access.redhat.com/security/cve/CVE-2022-40303 https://access.redhat.com/security/cve/CVE-2022-40304 https://access.redhat.com/security/cve/CVE-2022-40897 https://access.redhat.com/security/cve/CVE-2022-41316 https://access.redhat.com/security/cve/CVE-2022-41715 https://access.redhat.com/security/cve/CVE-2022-41717 https://access.redhat.com/security/cve/CVE-2022-41723 https://access.redhat.com/security/cve/CVE-2022-41724 https://access.redhat.com/security/cve/CVE-2022-41725 https://access.redhat.com/security/cve/CVE-2022-42010 https://access.redhat.com/security/cve/CVE-2022-42011 https://access.redhat.com/security/cve/CVE-2022-42012 https://access.redhat.com/security/cve/CVE-2022-42898 https://access.redhat.com/security/cve/CVE-2022-42919 
https://access.redhat.com/security/cve/CVE-2022-43680 https://access.redhat.com/security/cve/CVE-2022-45061 https://access.redhat.com/security/cve/CVE-2022-45873 https://access.redhat.com/security/cve/CVE-2022-46175 https://access.redhat.com/security/cve/CVE-2022-47024 https://access.redhat.com/security/cve/CVE-2022-47629 https://access.redhat.com/security/cve/CVE-2022-48303 https://access.redhat.com/security/cve/CVE-2022-48337 https://access.redhat.com/security/cve/CVE-2022-48338 https://access.redhat.com/security/cve/CVE-2022-48339 https://access.redhat.com/security/cve/CVE-2023-0361 https://access.redhat.com/security/cve/CVE-2023-0620 https://access.redhat.com/security/cve/CVE-2023-0665 https://access.redhat.com/security/cve/CVE-2023-2491 https://access.redhat.com/security/cve/CVE-2023-22809 https://access.redhat.com/security/cve/CVE-2023-24329 https://access.redhat.com/security/cve/CVE-2023-24999 https://access.redhat.com/security/cve/CVE-2023-25000 https://access.redhat.com/security/cve/CVE-2023-25136 https://access.redhat.com/security/updates/classification/#important https://access.redhat.com/documentation/en-us/red_hat_openshift_data_foundation/4.13/html/4.13_release_notes/index
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2023 Red Hat, Inc.
-- RHSA-announce mailing list RHSA-announce@redhat.com https://listman.redhat.com/mailman/listinfo/rhsa-announce .
Bug Fix(es):
- Cloning a Block DV to VM with Filesystem with not big enough size comes to endless loop - using pvc api (BZ#2033191)
- Restart of VM Pod causes SSH keys to be regenerated within VM (BZ#2087177)
- Import gzipped raw file causes image to be downloaded and uncompressed to TMPDIR (BZ#2089391)
- [4.11] VM Snapshot Restore hangs indefinitely when backed by a snapshotclass (BZ#2098225)
- Fedora version in DataImportCrons is not 'latest' (BZ#2102694)
- [4.11] Cloned VM's snapshot restore fails if the source VM disk is deleted (BZ#2109407)
- CNV introduces a compliance check fail in "ocp4-moderate" profile - routes-protected-by-tls (BZ#2110562)
- Nightly build: v4.11.0-578: index format was changed in 4.11 to file-based instead of sqlite-based (BZ#2112643)
- Unable to start windows VMs on PSI setups (BZ#2115371)
- [4.11.1] virt-launcher cannot be started on OCP 4.12 due to PodSecurity restricted:v1.24 (BZ#2128997)
- Mark Windows 11 as TechPreview (BZ#2129013)
- 4.11.1 rpms (BZ#2139453)
This advisory contains the following OpenShift Virtualization 4.11.1 images.
RHEL-8-CNV-4.11
virt-cdi-operator-container-v4.11.1-5 virt-cdi-uploadserver-container-v4.11.1-5 virt-cdi-apiserver-container-v4.11.1-5 virt-cdi-importer-container-v4.11.1-5 virt-cdi-controller-container-v4.11.1-5 virt-cdi-cloner-container-v4.11.1-5 virt-cdi-uploadproxy-container-v4.11.1-5 checkup-framework-container-v4.11.1-3 kubevirt-tekton-tasks-wait-for-vmi-status-container-v4.11.1-7 kubevirt-tekton-tasks-create-datavolume-container-v4.11.1-7 kubevirt-template-validator-container-v4.11.1-4 virt-handler-container-v4.11.1-5 hostpath-provisioner-operator-container-v4.11.1-4 virt-api-container-v4.11.1-5 vm-network-latency-checkup-container-v4.11.1-3 cluster-network-addons-operator-container-v4.11.1-5 virtio-win-container-v4.11.1-4 virt-launcher-container-v4.11.1-5 ovs-cni-marker-container-v4.11.1-5 hyperconverged-cluster-webhook-container-v4.11.1-7 virt-controller-container-v4.11.1-5 virt-artifacts-server-container-v4.11.1-5 kubevirt-tekton-tasks-modify-vm-template-container-v4.11.1-7 kubevirt-tekton-tasks-disk-virt-customize-container-v4.11.1-7 libguestfs-tools-container-v4.11.1-5 hostpath-provisioner-container-v4.11.1-4 kubevirt-tekton-tasks-disk-virt-sysprep-container-v4.11.1-7 kubevirt-tekton-tasks-copy-template-container-v4.11.1-7 cnv-containernetworking-plugins-container-v4.11.1-5 bridge-marker-container-v4.11.1-5 virt-operator-container-v4.11.1-5 hostpath-csi-driver-container-v4.11.1-4 kubevirt-tekton-tasks-create-vm-from-template-container-v4.11.1-7 kubemacpool-container-v4.11.1-5 hyperconverged-cluster-operator-container-v4.11.1-7 kubevirt-ssp-operator-container-v4.11.1-4 ovs-cni-plugin-container-v4.11.1-5 kubevirt-tekton-tasks-cleanup-vm-container-v4.11.1-7 kubevirt-tekton-tasks-operator-container-v4.11.1-2 cnv-must-gather-container-v4.11.1-8 kubevirt-console-plugin-container-v4.11.1-9 hco-bundle-registry-container-v4.11.1-49
- Bugs fixed (https://bugzilla.redhat.com/):
2064698 - CVE-2020-36518 jackson-databind: denial of service via a large depth of nested objects 2135244 - CVE-2022-42003 jackson-databind: deep wrapper array nesting wrt UNWRAP_SINGLE_VALUE_ARRAYS 2135247 - CVE-2022-42004 jackson-databind: use of deeply nested arrays
- JIRA issues fixed (https://issues.jboss.org/):
LOG-3293 - log-file-metric-exporter container has no limits, exhausting the resources of the node
- Description:
Submariner enables direct networking between pods and services on different Kubernetes clusters that are either on-premises or in the cloud.
For more information about Submariner, see the Submariner open source community website at: https://submariner.io/.
Security fixes:
- CVE-2022-27664 golang: net/http: handle server errors after sending GOAWAY
- CVE-2022-2880 golang: net/http/httputil: ReverseProxy should not forward unparseable query parameters
- CVE-2022-41715 golang: regexp/syntax: limit memory used by parsing regexps
- CVE-2022-41717 golang: net/http: An attacker can cause excessive memory growth in a Go server accepting HTTP/2 requests
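Two of the Go fixes above concern resource consumption while parsing attacker-controlled input. As an illustration only (the helper name and size limit below are assumptions of this sketch, not from the advisory), services that compile user-supplied patterns often bound the pattern size before handing it to `regexp.Compile`, independently of picking up the patched Go runtime for CVE-2022-41715:

```go
package main

import (
	"fmt"
	"regexp"
)

// CVE-2022-41715 limits memory used when *parsing* regular expressions.
// Upgrading Go is the real fix; capping the length of untrusted patterns
// is a common additional guard. compileUntrusted and maxPatternLen are
// hypothetical names used only for this sketch.
const maxPatternLen = 1024

func compileUntrusted(pattern string) (*regexp.Regexp, error) {
	if len(pattern) > maxPatternLen {
		return nil, fmt.Errorf("refusing to compile %d-byte pattern", len(pattern))
	}
	return regexp.Compile(pattern)
}

func main() {
	re, err := compileUntrusted(`ACM-\d+`)
	if err != nil {
		panic(err)
	}
	fmt.Println(re.MatchString("ACM-2482")) // true
}
```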
Bugs addressed:
- subctl diagnose firewall metrics does not work on merged kubeconfig (BZ# 2013711)
- [Submariner] - Fails to increase gateway amount after deployment (BZ# 2097381)
- Submariner gateway node does not get deleted with subctl cloud cleanup command (BZ# 2108634)
- submariner GW pods are unable to resolve the DNS of the Broker K8s API URL (BZ# 2119362)
- Submariner gateway node does not get deployed after applying ManagedClusterAddOn on Openstack (BZ# 2124219)
- unable to run subctl benchmark latency, pods fail with ImagePullBackOff (BZ# 2130326)
- [IBM Z] - Submariner addon uninstallation doesn't work from ACM console (BZ# 2136442)
- Tags on AWS security group for gateway node break cloud-controller LoadBalancer (BZ# 2139477)
- RHACM - Submariner: UI support for OpenStack #19297 (ACM-1242)
- Submariner OVN support (ACM-1358)
- Submariner Azure Console support (ACM-1388)
- ManagedClusterSet consumers migrate to v1beta2 (ACM-1614)
- Submariner on disconnected ACM #22000 (ACM-1678)
- Submariner gateway: Error creating AWS security group if already exists (ACM-2055)
- Submariner gateway security group in AWS not deleted when uninstalling submariner (ACM-2057)
- The submariner-metrics-proxy pod pulls an image with wrong naming convention (ACM-2058)
- The submariner-metrics-proxy pod is not part of the Agent readiness check (ACM-2067)
- Subctl 0.14.0 prints version "vsubctl" (ACM-2132)
- managedclusters "local-cluster" not found and missing Submariner Broker CRD (ACM-2145)
- Add support of ARO to Submariner deployment (ACM-2150)
- The e2e tests execution fails for "Basic TCP connectivity" tests (ACM-2204)
- Gateway error shown "diagnose all" tests (ACM-2206)
- Submariner does not support cluster "kube-proxy ipvs mode"(ACM-2211)
- Vsphere cluster shows Pod Security admission controller warnings (ACM-2256)
- Cannot use submariner with OSP and self signed certs (ACM-2274)
- Subctl diagnose tests spawn nettest image with wrong tag naming convention (ACM-2387)
- Subctl 0.14.1 prints version "devel" (ACM-2482)
- Bugs fixed (https://bugzilla.redhat.com/):
2013711 - subctl diagnose firewall metrics does not work on merged kubeconfig 2097381 - [Submariner] - Fails to increase gateway amount after deployment 2108634 - Submariner gateway node does not get deleted with subctl cloud cleanup command 2119362 - submariner GW pods are unable to resolve the DNS of the Broker K8s API URL 2124219 - Submariner gateway node does not get deployed after applying ManagedClusterAddOn on Openstack 2124669 - CVE-2022-27664 golang: net/http: handle server errors after sending GOAWAY 2130326 - unable to run subctl benchmark latency, pods fail with ImagePullBackOff 2132868 - CVE-2022-2880 golang: net/http/httputil: ReverseProxy should not forward unparseable query parameters 2132872 - CVE-2022-41715 golang: regexp/syntax: limit memory used by parsing regexps 2136442 - [IBM Z] - Submariner addon unistallation doesnt work from ACM console 2139477 - Tags on AWS security group for gateway node break cloud-controller LoadBalancer 2161274 - CVE-2022-41717 golang: net/http: An attacker can cause excessive memory growth in a Go server accepting HTTP/2 requests
- JIRA issues fixed (https://issues.jboss.org/):
ACM-1614 - ManagedClusterSet consumers migrate to v1beta2 (Submariner) ACM-2055 - Submariner gateway: Error creating AWS security group if already exists ACM-2057 - [Submariner] - submariner gateway security group in aws not deleted when uninstalling submariner ACM-2058 - [Submariner] - The submariner-metrics-proxy pod pulls an image with wrong naming convention ACM-2067 - [Submariner] - The submariner-metrics-proxy pod is not part of the Agent readiness check ACM-2132 - Subctl 0.14.0 prints version "vsubctl" ACM-2145 - managedclusters "local-cluster" not found and missing Submariner Broker CRD ACM-2150 - Add support of ARO to Submariner deployment ACM-2204 - [Submariner] - e2e tests execution fails for "Basic TCP connectivity" tests ACM-2206 - [Submariner] - Gateway error shown "diagnose all" tests ACM-2211 - [Submariner] - Submariner does not support cluster "kube-proxy ipvs mode" ACM-2256 - [Submariner] - Vsphere cluster shows Pod Security admission controller warnings ACM-2274 - Cannot use submariner with OSP and self signed certs ACM-2387 - [Submariner] - subctl diagnose tests spawn nettest image with wrong tag nameing convention ACM-2482 - Subctl 0.14.1 prints version "devel"
- This advisory contains the following OpenShift Virtualization 4.12.0 images:
Security Fix(es):
- golang: net/http: limit growth of header canonicalization cache (CVE-2021-44716)
- kubeVirt: Arbitrary file read on the host from KubeVirt VMs (CVE-2022-1798)
- golang: out-of-bounds read in golang.org/x/text/language leads to DoS (CVE-2021-38561)
- golang: syscall: don't close fd 0 on ForkExec error (CVE-2021-44717)
- golang: net/http: improper sanitization of Transfer-Encoding header (CVE-2022-1705)
- golang: go/parser: stack exhaustion in all Parse* functions (CVE-2022-1962)
- golang: math/big: uncontrolled memory consumption due to an unhandled overflow via Rat.SetString (CVE-2022-23772)
- golang: cmd/go: misinterpretation of branch names can lead to incorrect access control (CVE-2022-23773)
- golang: crypto/elliptic: IsOnCurve returns true for invalid field elements (CVE-2022-23806)
- golang: encoding/xml: stack exhaustion in Decoder.Skip (CVE-2022-28131)
- golang: syscall: faccessat checks wrong group (CVE-2022-29526)
- golang: io/fs: stack exhaustion in Glob (CVE-2022-30630)
- golang: compress/gzip: stack exhaustion in Reader.Read (CVE-2022-30631)
- golang: path/filepath: stack exhaustion in Glob (CVE-2022-30632)
- golang: encoding/xml: stack exhaustion in Unmarshal (CVE-2022-30633)
- golang: encoding/gob: stack exhaustion in Decoder.Decode (CVE-2022-30635)
- golang: net/http/httputil: NewSingleHostReverseProxy - omit X-Forwarded-For not working (CVE-2022-32148)
- golang: crypto/tls: session tickets lack random ticket_age_add (CVE-2022-30629)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
RHEL-8-CNV-4.12
============= bridge-marker-container-v4.12.0-24 cluster-network-addons-operator-container-v4.12.0-24 cnv-containernetworking-plugins-container-v4.12.0-24 cnv-must-gather-container-v4.12.0-58 hco-bundle-registry-container-v4.12.0-769 hostpath-csi-driver-container-v4.12.0-30 hostpath-provisioner-container-v4.12.0-30 hostpath-provisioner-operator-container-v4.12.0-31 hyperconverged-cluster-operator-container-v4.12.0-96 hyperconverged-cluster-webhook-container-v4.12.0-96 kubemacpool-container-v4.12.0-24 kubevirt-console-plugin-container-v4.12.0-182 kubevirt-ssp-operator-container-v4.12.0-64 kubevirt-tekton-tasks-cleanup-vm-container-v4.12.0-55 kubevirt-tekton-tasks-copy-template-container-v4.12.0-55 kubevirt-tekton-tasks-create-datavolume-container-v4.12.0-55 kubevirt-tekton-tasks-create-vm-from-template-container-v4.12.0-55 kubevirt-tekton-tasks-disk-virt-customize-container-v4.12.0-55 kubevirt-tekton-tasks-disk-virt-sysprep-container-v4.12.0-55 kubevirt-tekton-tasks-modify-vm-template-container-v4.12.0-55 kubevirt-tekton-tasks-operator-container-v4.12.0-40 kubevirt-tekton-tasks-wait-for-vmi-status-container-v4.12.0-55 kubevirt-template-validator-container-v4.12.0-32 libguestfs-tools-container-v4.12.0-255 ovs-cni-marker-container-v4.12.0-24 ovs-cni-plugin-container-v4.12.0-24 virt-api-container-v4.12.0-255 virt-artifacts-server-container-v4.12.0-255 virt-cdi-apiserver-container-v4.12.0-72 virt-cdi-cloner-container-v4.12.0-72 virt-cdi-controller-container-v4.12.0-72 virt-cdi-importer-container-v4.12.0-72 virt-cdi-operator-container-v4.12.0-72 virt-cdi-uploadproxy-container-v4.12.0-71 virt-cdi-uploadserver-container-v4.12.0-72 virt-controller-container-v4.12.0-255 virt-exportproxy-container-v4.12.0-255 virt-exportserver-container-v4.12.0-255 virt-handler-container-v4.12.0-255 virt-launcher-container-v4.12.0-255 virt-operator-container-v4.12.0-255 virtio-win-container-v4.12.0-10 vm-network-latency-checkup-container-v4.12.0-89
- Solution:
Before applying this update, you must apply all previously released errata relevant to your system.
To apply this update, refer to:
https://access.redhat.com/articles/11258
- Bugs fixed (https://bugzilla.redhat.com/):
1719190 - Unable to cancel live-migration if virt-launcher pod in pending state
2023393 - [CNV] [UI]Additional information needed for cloning when default storageclass in not defined in target datavolume
2030801 - CVE-2021-44716 golang: net/http: limit growth of header canonicalization cache
2030806 - CVE-2021-44717 golang: syscall: don't close fd 0 on ForkExec error
2040377 - Unable to delete failed VMIM after VM deleted
2046298 - mdevs not configured with drivers installed, if mdev config added to HCO CR before drivers are installed
2052556 - Metric "kubevirt_num_virt_handlers_by_node_running_virt_launcher" reporting incorrect value
2053429 - CVE-2022-23806 golang: crypto/elliptic: IsOnCurve returns true for invalid field elements
2053532 - CVE-2022-23772 golang: math/big: uncontrolled memory consumption due to an unhandled overflow via Rat.SetString
2053541 - CVE-2022-23773 golang: cmd/go: misinterpretation of branch names can lead to incorrect access control
2060499 - [RFE] Cannot add additional service (or other objects) to VM template
2069098 - Large scale |VMs migration is slow due to low migration parallelism
2070366 - VM Snapshot Restore hangs indefinitely when backed by a snapshotclass
2071491 - Storage Throughput metrics are incorrect in Overview
2072797 - Metrics in Virtualization -> Overview period is not clear or configurable
2072821 - Top Consumers of Storage Traffic in Kubevirt Dashboard giving unexpected numbers
2079916 - KubeVirt CR seems to be in DeploymentInProgress state and not recovering
2084085 - CVE-2022-29526 golang: syscall: faccessat checks wrong group
2086285 - [dark mode] VirtualMachine - in the Utilization card the percentages and the graphs not visible enough in dark mode
2086551 - Min CPU feature found in labels
2087724 - Default template show no boot source even there are auto-upload boot sources
2088129 - [SSP] webhook does not comply with restricted security context
2088464 - [CDI] cdi-deployment does not comply with restricted security context
2089391 - Import gzipped raw file causes image to be downloaded and uncompressed to TMPDIR
2089744 - HCO should label its control plane namespace to admit pods at privileged security level
2089751 - 4.12.0 containers
2089804 - 4.12.0 rpms
2091856 - 'Edit BootSource' action should have more explicit information when disabled
2092793 - CVE-2022-30629 golang: crypto/tls: session tickets lack random ticket_age_add
2092796 - [RFE] CPU|Memory display in the template card is not consistent with the display in the template drawer
2093771 - The disk source should be PVC if the template has no auto-update boot source
2093996 - kubectl get vmi API should always return primary interface if exist
2094202 - Cloud-init username field should have hint
2096285 - KubeVirt CR API documentation is missing docs for many fields
2096780 - [RFE] Add ssh-key and sysprep to template scripts tab
2097436 - Online disk expansion ignores filesystem overhead change
2097586 - AccessMode should stay on ReadWriteOnce while editing a disk with storage class HPP
2099556 - [RFE] Add option to enable RDP service for windows vm
2099573 - [RFE] Improve template's message about not editable
2099923 - [RFE] Merge "SSH access" and "SSH command" into one
2100290 - Error is not dismissed on catalog review page
2100436 - VM list filtering ignores VMs in error-states
2100442 - [RFE] allow enabling and disabling SSH service while VM is shut down
2100495 - CVE-2021-38561 golang: out-of-bounds read in golang.org/x/text/language leads to DoS
2100629 - Update nested support KBASE article
2100679 - The number of hardware devices is not correct in vm overview tab
2100682 - All hardware devices get deleted while just delete one
2100684 - Workload profile are not editable during creation and after creation
2101144 - VM filter has two "Other" checkboxes which are triggered together
2101164 - [dark mode] Number of alerts in Alerts card not visible enough in dark mode
2101167 - Edit buttons clickable area is too large.
2101333 - [e2e] elements on Template Scheduling tab are missing proper data-test-id
2101335 - Clone action enabled in VM list kebab button for a VM in CrashLoopBackOff state
2101390 - Easy to miss the "tick" when adding GPU device to vm via UI
2101394 - [e2e] elements on VM Scripts tab are missing proper data-test-id
2101423 - wrong user name on using ignition
2101430 - Using CLOUD_USER_PASSWORD in Templates parameters breaks VM review page
2101445 - "Pending changes - Boot Order"
2101454 - Cannot add PVC boot source to template in 'Edit Boot Source Reference' view as a non-priv user
2101499 - Cannot add NIC to VM template as non-priv user
2101501 - NAME parameter in VM template has no effect.
2101628 - non-priv user cannot load dataSource while edit template's rootdisk
2101667 - VMI view is not aligned with vm and tempates
2101681 - All templates are labeling "source available" in template list page
2102074 - VM Creation time on VM Overview Details card lacks string
2102125 - vm clone modal is displaying DV size instead of PVC size
2102132 - align the utilization card of single VM overview with the design
2102138 - Should the word "new" be removed from "Create new VirtualMachine from catalog"?
2102256 - Add button moved to right
2102448 - VM disk is deleted by uncheck "Delete disks (1x)" on delete modal
2102475 - Template 'vm-template-example' should be filtered by 'Fedora' rather than 'Other'
2102561 - sysprep-info should link to downstream doc
2102737 - Clone a VM should lead to vm overview tab
2102740 - "Save" button on vm clone modal should be "Clone"
2103806 - "404: Not Found" appears shortly by clicking the PVC link on vm disk tab
2103807 - PVC is not named by VM name while creating vm quickly
2103817 - Workload profile values in vm details should align with template's value
2103844 - VM nic model is empty
2104331 - VM list page scroll up automatically
2104402 - VM create button is not enabled while adding multiple environment disks
2104422 - Storage status report "OpenShift Data Foundation is not available" even the operator is installed
2104424 - Enable descheduler or hide it on template's scheduling tab
2104479 - [4.12] Cloned VM's snapshot restore fails if the source VM disk is deleted
2104480 - Alerts in VM overview tab disappeared after a few seconds
2104785 - "Add disk" and "Disks" are on the same line
2104859 - [RFE] Add "Copy SSH command" to VM action list
2105257 - Can't set log verbosity level for virt-operator pod
2106175 - All pages are crashed after visit Virtualization -> Overview
2106963 - Cannot add configmap for windows VM
2107279 - VM Template's bootable disk can be marked as bootable
2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read
2107371 - CVE-2022-30630 golang: io/fs: stack exhaustion in Glob
2107374 - CVE-2022-1705 golang: net/http: improper sanitization of Transfer-Encoding header
2107376 - CVE-2022-1962 golang: go/parser: stack exhaustion in all Parse functions
2107383 - CVE-2022-32148 golang: net/http/httputil: NewSingleHostReverseProxy - omit X-Forwarded-For not working
2107386 - CVE-2022-30632 golang: path/filepath: stack exhaustion in Glob
2107388 - CVE-2022-30635 golang: encoding/gob: stack exhaustion in Decoder.Decode
2107390 - CVE-2022-28131 golang: encoding/xml: stack exhaustion in Decoder.Skip
2107392 - CVE-2022-30633 golang: encoding/xml: stack exhaustion in Unmarshal
2108339 - datasource does not provide timestamp when updated
2108638 - When chosing a vm or template while in all-namespace, and returning to list, namespace is changed
2109818 - Upstream metrics documentation is not detailed enough
2109975 - DataVolume fails to import "cirros-container-disk-demo" image
2110256 - Storage -> PVC -> upload data, does not support source reference
2110562 - CNV introduces a compliance check fail in "ocp4-moderate" profile - routes-protected-by-tls
2111240 - GiB changes to B in Template's Edit boot source reference modal
2111292 - kubevirt plugin console is crashed after creating a vm with 2 nics
2111328 - kubevirt plugin console crashed after visit vmi page
2111378 - VM SSH command generated by UI points at api VIP
2111744 - Cloned template should not label app.kubernetes.io/name: common-templates
2111794 - the virtlogd process is taking too much RAM! (17468Ki > 17Mi)
2112900 - button style are different
2114516 - Nothing happens after clicking on Fedora cloud image list link
2114636 - The style of displayed items are not unified on VM tabs
2114683 - VM overview tab is crashed just after the vm is created
2115257 - Need to Change system-product-name to "OpenShift Virtualization" in CNV-4.12
2115258 - The storageclass of VM disk is different from quick created and customize created after changed the default storageclass
2115280 - [e2e] kubevirt-e2e-aws see two duplicated navigation items
2115769 - Machine type is updated to rhel8.6.0 in KV CR but not in Templates
2116225 - The filter keyword of the related operator 'Openshift Data Foundation' is 'OCS' rather than 'ODF'
2116644 - Importer pod is failing to start with error "MountVolume.SetUp failed for volume "cdi-proxy-cert-vol" : configmap "custom-ca" not found"
2117549 - Cannot edit cloud-init data after add ssh key
2117803 - Cannot edit ssh even vm is stopped
2117813 - Improve descriptive text of VM details while VM is off
2117872 - CVE-2022-1798 kubeVirt: Arbitrary file read on the host from KubeVirt VMs
2118257 - outdated doc link tolerations modal
2118823 - Deprecated API 1.25 call: virt-cdi-controller/v0.0.0 (linux/amd64) kubernetes/$Format
2119069 - Unable to start windows VMs on PSI setups
2119128 - virt-launcher cannot be started on OCP 4.12 due to PodSecurity restricted:v1.24
2119309 - readinessProbe in VM stays on failed
2119615 - Change the disk size causes the unit changed
2120907 - Cannot filter disks by label
2121320 - Negative values in migration metrics
2122236 - Failing to delete HCO with SSP sticking around
2122990 - VMExport should check APIGroup
2124147 - "ReadOnlyMany" should not be added to supported values in memory dump
2124307 - Ui crash/stuck on loading when trying to detach disk on a VM
2124528 - On upgrade, when live-migration is failed due to an infra issue, virt-handler continuously and endlessly tries to migrate it
2124555 - View documentation link on MigrationPolicies page des not work
2124557 - MigrationPolicy description is not displayed on Details page
2124558 - Non-privileged user can start MigrationPolicy creation
2124565 - Deleted DataSource reappears in list
2124572 - First annotation can not be added to DataSource
2124582 - Filtering VMs by OS does not work
2124594 - Docker URL validation is inconsistent over application
2124597 - Wrong case in Create DataSource menu
2126104 - virtctl image-upload hangs waiting for pod to be ready with missing access mode defined in the storage profile
2126397 - many KubeVirtComponentExceedsRequestedMemory alerts in Firing state
2127787 - Expose the PVC source of the dataSource on UI
2127843 - UI crashed by selecting "Live migration network"
2127931 - Change default time range on Virtualization -> Overview -> Monitoring dashboard to 30 minutes
2127947 - cluster-network-addons-config tlsSecurityProfle takes a long time to update after setting APIServer
2128002 - Error after VM template deletion
2128107 - sriov-manage command fails to enable SRIOV Virtual functions on the Ampere GPU Cards
2128872 - [4.11]Can't restore cloned VM
2128948 - Cannot create DataSource from default YAML
2128949 - Cannot create MigrationPolicy from example YAML
2128997 - [4.11.1]virt-launcher cannot be started on OCP 4.12 due to PodSecurity restricted:v1.24
2129013 - Mark Windows 11 as TechPreview
2129234 - Service is not deleted along with the VM when the VM is created from a template with service
2129301 - Cloud-init network data don't wipe out on uncheck checkbox 'Add network data'
2129870 - crypto-policy : Accepting TLS 1.3 connections by validating webhook
2130509 - Auto image import in failed state with data sources pointing to external manually-created PVC/DV
2130588 - crypto-policy : Common Ciphers support by apiserver and hco
2130695 - crypto-policy : Logging Improvement and publish the source of ciphers
2130909 - Non-privileged user can start DataSource creation
2131157 - KV data transfer rate chart in VM Metrics tab is not displayed
2131165 - [dark mode] Additional statuses accordion on Virtualization Overview page not visible enough
2131674 - Bump virtlogd memory requirement to 20Mi
2132031 - Ensure Windows 2022 Templates are marked as TechPreview like it is done now for Windows 11
2132682 - Default YAML entity name convention.
2132721 - Delete dialogs
2132744 - Description text is missing in Live Migrations section
2132746 - Background is broken in Virtualization Monitoring page
2132783 - VM can not be created from Template with edited boot source
2132793 - Edited Template BSR is not saved
2132932 - Typo in PVC size units menu
2133540 - [pod security violation audit] Audit violation in "cni-plugins" container should be fixed
2133541 - [pod security violation audit] Audit violation in "bridge-marker" container should be fixed
2133542 - [pod security violation audit] Audit violation in "manager" container should be fixed
2133543 - [pod security violation audit] Audit violation in "kube-rbac-proxy" container should be fixed
2133655 - [pod security violation audit] Audit violation in "cdi-operator" container should be fixed
2133656 - [4.12][pod security violation audit] Audit violation in "hostpath-provisioner-operator" container should be fixed
2133659 - [pod security violation audit] Audit violation in "cdi-controller" container should be fixed
2133660 - [pod security violation audit] Audit violation in "cdi-source-update-poller" container should be fixed
2134123 - KubeVirtComponentExceedsRequestedMemory Alert for virt-handler pod
2134672 - [e2e] add data-test-id for catalog -> storage section
2134825 - Authorization for expand-spec endpoint missing
2135805 - Windows 2022 template is missing vTPM and UEFI params in spec
2136051 - Name jumping when trying to create a VM with source from catalog
2136425 - Windows 11 is detected as Windows 10
2136534 - Not possible to specify a TTL on VMExports
2137123 - VMExport: export pod is not PSA complaint
2137241 - Checkbox about delete vm disks is not loaded while deleting VM
2137243 - registery input add docker prefix twice
2137349 - "Manage source" action infinitely loading on DataImportCron details page
2137591 - Inconsistent dialog headings/titles
2137731 - Link of VM status in overview is not working
2137733 - No link for VMs in error status in "VirtualMachine statuses" card
2137736 - The column name "MigrationPolicy name" can just be "Name"
2137896 - crypto-policy: HCO should pick TLSProfile from apiserver if not provided explicitly
2138112 - Unsupported S3 endpoint option in Add disk modal
2138119 - "Customize VirtualMachine" flow is not user-friendly because settings are split into 2 modals
2138199 - Win11 and Win22 templates are not filtered properly by Template provider
2138653 - Saving Template prameters reloads the page
2138657 - Setting DATA_SOURCE_ Template parameters makes VM creation fail
2138664 - VM that was created with SSH key fails to start
2139257 - Cannot add disk via "Using an existing PVC"
2139260 - Clone button is disabled while VM is running
2139293 - Non-admin user cannot load VM list page
2139296 - Non-admin cannot load MigrationPolicies page
2139299 - No auto-generated VM name while creating VM by non-admin user
2139306 - Non-admin cannot create VM via customize mode
2139479 - virtualization overview crashes for non-priv user
2139574 - VM name gets "emptyname" if click the create button quickly
2139651 - non-priv user can click create when have no permissions
2139687 - catalog shows template list for non-priv users
2139738 - [4.12]Can't restore cloned VM
2139820 - non-priv user cant reach vm details
2140117 - Provide upgrade path from 4.11.1->4.12.0
2140521 - Click the breadcrumb list about "VirtualMachines" goes to undefined project
2140534 - [View only] it should give a permission error when user clicking the VNC play/connect button as a view only user
2140627 - Not able to select storageClass if there is no default storageclass defined
2140730 - Links on Virtualization Overview page lead to wrong namespace for non-priv user
2140808 - Hyperv feature set to "enabled: false" prevents scheduling
2140977 - Alerts number is not correct on Virtualization overview
2140982 - The base template of cloned template is "Not available"
2140998 - Incorrect information shows in overview page per namespace
2141089 - Unable to upload boot images.
2141302 - Unhealthy states alerts and state metrics are missing
2141399 - Unable to set TLS Security profile for CDI using HCO jsonpatch annotations
2141494 - "Start in pause mode" option is not available while creating the VM
2141654 - warning log appearing on VMs: found no SR-IOV networks
2141711 - Node column selector is redundant for non-priv user
2142468 - VM action "Stop" should not be disabled when VM in pause state
2142470 - Delete a VM or template from all projects leads to 404 error
2142511 - Enhance alerts card in overview
2142647 - Error after MigrationPolicy deletion
2142891 - VM latency checkup: Failed to create the checkup's Job
2142929 - Permission denied when try get instancestypes
2143268 - Topolvm storageProfile missing accessModes and volumeMode
2143498 - Could not load template while creating VM from catalog
2143964 - Could not load template while creating VM from catalog
2144580 - "?" icon is too big in VM Template Disk tab
2144828 - "?" icon is too big in VM Template Disk tab
2144839 - Alerts number is not correct on Virtualization overview
2153849 - After upgrade to 4.11.1->4.12.0 hco.spec.workloadUpdateStrategy value is getting overwritten
2155757 - Incorrect upstream-version label "v1.6.0-unstable-410-g09ea881c" is tagged to 4.12 hyperconverged-cluster-operator-container and hyperconverged-cluster-webhook-container
- Description:
The rh-sso-7/sso76-openshift-rhel8 container image and the rh-sso-7/sso7-rhel8-operator operator have been updated for RHEL-8 based Middleware Containers to address the following security issues. Users of these images are also encouraged to rebuild all container images that depend on these images.
Dockerfiles and scripts should be amended either to refer to this new image specifically, or to the latest image generally. Bugs fixed (https://bugzilla.redhat.com/):
2138971 - CVE-2022-3782 keycloak: path traversal via double URL encoding
2141404 - CVE-2022-3916 keycloak: Session takeover with OIDC offline refreshtokens
- JIRA issues fixed (https://issues.jboss.org/):
CIAM-4412 - Build new OCP image for rh-sso-7/sso76-openshift-rhel8
CIAM-4413 - Generate new operator bundle image for this patch
- Summary:
An update is now available for Migration Toolkit for Runtimes (v1.0.1). Bugs fixed (https://bugzilla.redhat.com/):
2142707 - CVE-2022-42920 Apache-Commons-BCEL: arbitrary bytecode produced via out-of-bounds writing
- Bugs fixed (https://bugzilla.redhat.com/):
2113814 - CVE-2022-32189 golang: math/big: decoding big.Float and big.Rat types can panic if the encoded message is too short, potentially allowing a denial of service
2124669 - CVE-2022-27664 golang: net/http: handle server errors after sending GOAWAY
2132867 - CVE-2022-2879 golang: archive/tar: unbounded memory consumption when reading headers
2132868 - CVE-2022-2880 golang: net/http/httputil: ReverseProxy should not forward unparseable query parameters
2132872 - CVE-2022-41715 golang: regexp/syntax: limit memory used by parsing regexps
2148199 - CVE-2022-39278 Istio: Denial of service attack via a specially crafted message
2148661 - CVE-2022-3962 kiali: error message spoofing in kiali UI
2156729 - CVE-2021-4238 goutils: RandomAlphaNumeric and CryptoRandomAlphaNumeric are not as random as they should be
- JIRA issues fixed (https://issues.jboss.org/):
OSSM-1977 - Support for Istio Gateway API in Kiali
OSSM-2083 - Update maistra/istio 2.3 to Istio 1.14.5
OSSM-2147 - Unexpected validation message on Gateway object
OSSM-2169 - Member controller doesn't retry on conflict
OSSM-2170 - Member namespaces aren't cleaned up when a cluster-scoped SMMR is deleted
OSSM-2179 - Wasm plugins only support OCI images with 1 layer
OSSM-2184 - Istiod isn't allowed to delete analysis distribution report configmap
OSSM-2188 - Member namespaces not cleaned up when SMCP is deleted
OSSM-2189 - If multiple SMCPs exist in a namespace, the controller reconciles them all
OSSM-2190 - The memberroll controller reconciles SMMRs with invalid name
OSSM-2232 - The member controller reconciles ServiceMeshMember with invalid name
OSSM-2241 - Remove v2.0 from Create ServiceMeshControlPlane Form
OSSM-2251 - CVE-2022-3962 openshift-istio-kiali-container: kiali: content spoofing [ossm-2.3]
OSSM-2308 - add root CA certificates to kiali container
OSSM-2315 - be able to customize openshift auth timeouts
OSSM-2324 - Gateway injection does not work when pods are created by cluster admins
OSSM-2335 - Potential hang using Traces scatterplot chart
OSSM-2338 - Federation deployment does not need router mode sni-dnat
OSSM-2344 - Restarting istiod causes Kiali to flood CRI-O with port-forward requests
OSSM-2375 - Istiod should log member namespaces on every update
OSSM-2376 - ServiceMesh federation stops working after the restart of istiod pod
OSSM-535 - Support validationMessages in SMCP
OSSM-827 - ServiceMeshMembers point to wrong SMCP name
- Description:
Red Hat Advanced Cluster Management for Kubernetes 2.6.3 images
Red Hat Advanced Cluster Management for Kubernetes provides the capabilities to address common challenges that administrators and site reliability engineers face as they work across a range of public and private cloud environments. Clusters and applications are all visible and managed from a single console, with security policy built in. Bugs fixed (https://bugzilla.redhat.com/):
2129679 - clusters belong to global clusterset is not selected by placement when rescheduling
2134609 - CVE-2022-3517 nodejs-minimatch: ReDoS via the braceExpand function
2139085 - RHACM 2.6.3 images
2149181 - CVE-2022-41912 crewjam/saml: Authentication bypass when processing SAML responses containing multiple Assertion elements
The following advisory data is extracted from:
https://access.redhat.com/security/data/csaf/v2/advisories/2024/rhsa-2024_0254.json
Red Hat officially shut down their mailing list notifications on October 10, 2023. Because of this, Packet Storm has recreated the data below as a reference point to raise awareness. Note that because revision updates cannot easily be tracked without crawling Red Hat's archive, these advisories are single notifications; we strongly suggest visiting the Red Hat links provided to ensure you have the latest information if the subject matter pertains to your environment.
Description:
The rsync utility enables users to copy and synchronize files locally or across a network. Synchronization with rsync is fast because rsync sends only the differences between files over the network instead of sending whole files. The rsync utility is also used as a mirroring tool.
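The "only sends the differences" claim can be illustrated with a toy sketch of block-wise synchronization. This is a deliberate simplification, not rsync's actual protocol (real rsync uses rolling weak checksums plus strong hashes so blocks can match at any offset); the block size and sample data here are illustrative.

```python
# Toy illustration of difference-only transfer: hash fixed-size blocks on both
# sides and ship only the blocks whose hashes differ.
import hashlib

BLOCK = 4  # illustrative block size; real rsync uses much larger blocks


def block_hashes(data: bytes):
    """Strong hash of each fixed-size block of `data`."""
    return [hashlib.sha256(data[i:i + BLOCK]).hexdigest()
            for i in range(0, len(data), BLOCK)]


def delta(old: bytes, new: bytes):
    """Return (index, block) pairs the receiver is missing or has stale."""
    old_h = block_hashes(old)
    out = []
    for i, h in enumerate(block_hashes(new)):
        if i >= len(old_h) or old_h[i] != h:
            out.append((i, new[i * BLOCK:(i + 1) * BLOCK]))
    return out


old = b"aaaabbbbcccc"
new = b"aaaaXXXXcccc"
changes = delta(old, new)
print(changes)  # only the middle block needs to be sent: [(1, b'XXXX')]
```

Only the changed 4-byte block crosses the wire; the unchanged first and last blocks are reused from the receiver's copy, which is why repeat synchronizations of large, mostly-unchanged trees are fast.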
{ "@context": { "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#", "affected_products": { "@id": "https://www.variotdbs.pl/ref/affected_products" }, "configurations": { "@id": "https://www.variotdbs.pl/ref/configurations" }, "credits": { "@id": "https://www.variotdbs.pl/ref/credits" }, "cvss": { "@id": "https://www.variotdbs.pl/ref/cvss/" }, "description": { "@id": "https://www.variotdbs.pl/ref/description/" }, "exploit_availability": { "@id": "https://www.variotdbs.pl/ref/exploit_availability/" }, "iot": { "@id": "https://www.variotdbs.pl/ref/iot/" }, "iot_taxonomy": { "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/" }, "patch": { "@id": "https://www.variotdbs.pl/ref/patch/" }, "problemtype_data": { "@id": "https://www.variotdbs.pl/ref/problemtype_data/" }, "references": { "@id": "https://www.variotdbs.pl/ref/references/" }, "sources": { "@id": "https://www.variotdbs.pl/ref/sources/" }, "sources_release_date": { "@id": "https://www.variotdbs.pl/ref/sources_release_date/" }, "sources_update_date": { "@id": "https://www.variotdbs.pl/ref/sources_update_date/" }, "threat_type": { "@id": "https://www.variotdbs.pl/ref/threat_type/" }, "title": { "@id": "https://www.variotdbs.pl/ref/title/" }, "type": { "@id": "https://www.variotdbs.pl/ref/type/" } }, "@id": "https://www.variotdbs.pl/vuln/VAR-202208-0404", "affected_products": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/affected_products#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "model": "macos", "scope": "lt", "trust": 1.0, "vendor": "apple", "version": "12.6.1" }, { "model": "network security", "scope": "lt", "trust": 1.0, "vendor": "stormshield", "version": "4.3.16" }, { "model": "ipados", "scope": "lt", "trust": 1.0, "vendor": "apple", 
"version": "15.7.1" }, { "model": "h700s", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "linux", "scope": "eq", "trust": 1.0, "vendor": "debian", "version": "10.0" }, { "model": "watchos", "scope": "lt", "trust": 1.0, "vendor": "apple", "version": "9.1" }, { "model": "hci compute node", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "management services for element software", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "macos", "scope": "gte", "trust": 1.0, "vendor": "apple", "version": "12.0.0" }, { "model": "fedora", "scope": "eq", "trust": 1.0, "vendor": "fedoraproject", "version": "37" }, { "model": "macos", "scope": "lt", "trust": 1.0, "vendor": "apple", "version": "11.7.1" }, { "model": "network security", "scope": "gte", "trust": 1.0, "vendor": "stormshield", "version": "3.7.31" }, { "model": "oncommand workflow automation", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "storagegrid", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "network security", "scope": "lt", "trust": 1.0, "vendor": "stormshield", "version": "3.11.22" }, { "model": "network security", "scope": "gte", "trust": 1.0, "vendor": "stormshield", "version": "4.3.0" }, { "model": "active iq unified manager", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "network security", "scope": "gte", "trust": 1.0, "vendor": "stormshield", "version": "3.11.0" }, { "model": "h300s", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "ontap select deploy administration utility", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "iphone os", "scope": "lt", "trust": 1.0, "vendor": "apple", "version": "15.7.1" }, { "model": "network security", "scope": "gte", "trust": 1.0, "vendor": "stormshield", "version": "4.6.0" }, { "model": "iphone os", "scope": "gte", 
"trust": 1.0, "vendor": "apple", "version": "16.0" }, { "model": "iphone os", "scope": "lt", "trust": 1.0, "vendor": "apple", "version": "16.1" }, { "model": "h500s", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "fedora", "scope": "eq", "trust": 1.0, "vendor": "fedoraproject", "version": "36" }, { "model": "network security", "scope": "lt", "trust": 1.0, "vendor": "stormshield", "version": "4.6.3" }, { "model": "hci", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "fedora", "scope": "eq", "trust": 1.0, "vendor": "fedoraproject", "version": "35" }, { "model": "network security", "scope": "lt", "trust": 1.0, "vendor": "stormshield", "version": "3.7.34" }, { "model": "macos", "scope": "gte", "trust": 1.0, "vendor": "apple", "version": "11.0" }, { "model": "zlib", "scope": "lte", "trust": 1.0, "vendor": "zlib", "version": "1.2.12" } ], "sources": [ { "db": "NVD", "id": "CVE-2022-37434" } ] }, "configurations": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/configurations#", "children": { "@container": "@list" }, "cpe_match": { "@container": "@list" }, "data": { "@container": "@list" }, "nodes": { "@container": "@list" } }, "data": [ { "CVE_data_version": "4.0", "nodes": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:zlib:zlib:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "1.2.12", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:35:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:36:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:37:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:debian:debian_linux:10.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { 
"cpe23Uri": "cpe:2.3:a:netapp:oncommand_workflow_automation:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:storagegrid:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:ontap_select_deploy_administration_utility:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:hci:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:active_iq_unified_manager:-:*:*:*:*:vmware_vsphere:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:h:netapp:hci_compute_node:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:active_iq_unified_manager:-:*:*:*:*:windows:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:management_services_for_element_software:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h300s_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:h300s:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h500s_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:h500s:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h700s_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:h700s:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], 
"cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h700s_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:h700s:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:apple:macos:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "11.7.1", "versionStartIncluding": "11.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:apple:iphone_os:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "16.1", "versionStartIncluding": "16.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:apple:watchos:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "9.1", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:apple:macos:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "12.6.1", "versionStartIncluding": "12.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:apple:iphone_os:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "15.7.1", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:apple:ipados:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "15.7.1", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:stormshield:stormshield_network_security:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "4.6.3", "versionStartIncluding": "4.6.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:stormshield:stormshield_network_security:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "4.3.16", "versionStartIncluding": "4.3.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:stormshield:stormshield_network_security:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "3.11.22", "versionStartIncluding": "3.11.0", "vulnerable": true }, { "cpe23Uri": 
"cpe:2.3:a:stormshield:stormshield_network_security:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "3.7.34", "versionStartIncluding": "3.7.31", "vulnerable": true } ], "operator": "OR" } ] } ], "sources": [ { "db": "NVD", "id": "CVE-2022-37434" } ] }, "credits": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/credits#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Red Hat", "sources": [ { "db": "PACKETSTORM", "id": "173605" }, { "db": "PACKETSTORM", "id": "173107" }, { "db": "PACKETSTORM", "id": "170083" }, { "db": "PACKETSTORM", "id": "170179" }, { "db": "PACKETSTORM", "id": "170898" }, { "db": "PACKETSTORM", "id": "170741" }, { "db": "PACKETSTORM", "id": "170210" }, { "db": "PACKETSTORM", "id": "170759" }, { "db": "PACKETSTORM", "id": "170806" }, { "db": "PACKETSTORM", "id": "170242" }, { "db": "PACKETSTORM", "id": "176559" } ], "trust": 1.1 }, "cve": "CVE-2022-37434", "cvss": { "@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2" }, "cvssV3": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, "severity": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": "https://www.variotdbs.pl/ref/cvss/severity" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "cvssV2": [], "cvssV3": [ { "attackComplexity": "LOW", "attackVector": "NETWORK", "author": "NVD", "availabilityImpact": "HIGH", "baseScore": 9.8, "baseSeverity": "CRITICAL", "confidentialityImpact": "HIGH", "exploitabilityScore": 3.9, "impactScore": 5.9, "integrityImpact": "HIGH", "privilegesRequired": "NONE", "scope": "UNCHANGED", "trust": 1.0, 
"userInteraction": "NONE", "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H", "version": "3.1" } ], "severity": [ { "author": "NVD", "id": "CVE-2022-37434", "trust": 1.0, "value": "CRITICAL" } ] } ], "sources": [ { "db": "NVD", "id": "CVE-2022-37434" } ] }, "description": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "zlib through 1.2.12 has a heap-based buffer over-read or buffer overflow in inflate in inflate.c via a large gzip header extra field. NOTE: only applications that call inflateGetHeader are affected. Some common applications bundle the affected zlib source code but may be unable to call inflateGetHeader (e.g., see the nodejs/node reference). \nSee the following Release Notes documentation, which will be updated\nshortly for this release, for details about these changes:\n\nhttps://docs.openshift.com/container-platform/4.11/release_notes/ocp-4-11-release-notes.html\n\nSecurity Fix(es):\n\n* github.com/Masterminds/vcs: Command Injection via argument injection\n(CVE-2022-21235)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. To check for available updates, use the OpenShift CLI (oc)\nor web console. Instructions for upgrading a cluster are available at\nhttps://docs.openshift.com/container-platform/4.11/updating/updating-cluster-cli.html\n\n3. 
Solution:\n\nFor OpenShift Container Platform 4.11 see the following documentation,\nwhich will be updated shortly for this release, for important instructions\non how to upgrade your cluster and fully apply this asynchronous errata\nupdate:\n\nhttps://docs.openshift.com/container-platform/4.11/release_notes/ocp-4-11-release-notes.html\n\nYou may download the oc tool and use it to inspect release image metadata\nfor x86_64, s390x, ppc64le, and aarch64 architectures. The image digests\nmay be found at\nhttps://quay.io/repository/openshift-release-dev/ocp-release?tab=tags. \n\nThe sha values for the release are\n\n(For x86_64 architecture)\nThe image digest is\nsha256:c6771b12bd873c0e3e5fbc7afa600d92079de6534dcb52f09cb1d22ee49608a9\n\n(For s390x architecture)\nThe image digest is\nsha256:622b5361f95d1d512ea84f363ac06155cbb9ee28e85ccaae1acd80b98b660fa8\n\n(For ppc64le architecture)\nThe image digest is\nsha256:50c131cf85dfb00f258af350a46b85eff8fb8084d3e1617520cd69b59caeaff7\n\n(For aarch64 architecture)\nThe image digest is\nsha256:9e575c4ece9caaf31acbef246ccad71959cd5bf634e7cb284b0849ddfa205ad7\n\nAll OpenShift Container Platform 4.11 users are advised to upgrade to these\nupdated packages and images when they are available in the appropriate\nrelease channel. To check for available updates, use the OpenShift CLI (oc)\nor web console. Instructions for upgrading a cluster are available at\nhttps://docs.openshift.com/container-platform/4.11/updating/updating-cluster-cli.html\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n2215317 - CVE-2022-21235 github.com/Masterminds/vcs: Command Injection via argument injection\n\n5. 
JIRA issues fixed (https://issues.redhat.com/):\n\nOCPBUGS-15446 - (release-4.11) gather \"gateway-mode-config\" config map from \"openshift-network-operator\" namespace\nOCPBUGS-15532 - visiting Configurations page returns error Cannot read properties of undefined (reading \u0027apiGroup\u0027)\nOCPBUGS-15645 - Can\u0027t use git lfs in BuildConfig git source with strategy Docker\nOCPBUGS-15739 - Environment cannot find Python\nOCPBUGS-15758 - [release-4.11] Bump Jenkins and Jenkins Agent Base image versions\nOCPBUGS-15942 - 9% of OKD tests failing on error: tag latest failed: Internal error occurred: registry.centos.org/dotnet/dotnet-31-centos7:latest: Get \"https://registry.centos.org/v2/\": dial tcp: lookup registry.centos.org on 172.30.0.10:53: no such host\nOCPBUGS-15966 - [4.12] MetalLB contains incorrect data Correct and incorrect MetalLB resources coexist should have correct statuses\n\n6. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n=====================================================================\n Red Hat Security Advisory\n\nSynopsis: Important: Red Hat OpenShift Data Foundation 4.13.0 security and bug fix update\nAdvisory ID: RHSA-2023:3742-02\nProduct: Red Hat OpenShift Data Foundation\nAdvisory URL: https://access.redhat.com/errata/RHSA-2023:3742\nIssue date: 2023-06-21\nCVE Names: CVE-2015-20107 CVE-2018-25032 CVE-2020-10735 \n CVE-2020-16250 CVE-2020-16251 CVE-2020-17049 \n CVE-2021-3765 CVE-2021-3807 CVE-2021-4231 \n CVE-2021-4235 CVE-2021-4238 CVE-2021-28861 \n CVE-2021-43519 CVE-2021-43998 CVE-2021-44531 \n CVE-2021-44532 CVE-2021-44533 CVE-2021-44964 \n CVE-2021-46828 CVE-2021-46848 CVE-2022-0670 \n CVE-2022-1271 CVE-2022-1304 CVE-2022-1348 \n CVE-2022-1586 CVE-2022-1587 CVE-2022-2309 \n CVE-2022-2509 CVE-2022-2795 CVE-2022-2879 \n CVE-2022-2880 CVE-2022-3094 CVE-2022-3358 \n CVE-2022-3515 CVE-2022-3517 CVE-2022-3715 \n CVE-2022-3736 CVE-2022-3821 CVE-2022-3924 \n CVE-2022-4415 CVE-2022-21824 CVE-2022-23540 \n CVE-2022-23541 
CVE-2022-24903 CVE-2022-26280 \n CVE-2022-27664 CVE-2022-28805 CVE-2022-29154 \n CVE-2022-30635 CVE-2022-31129 CVE-2022-32189 \n CVE-2022-32190 CVE-2022-33099 CVE-2022-34903 \n CVE-2022-35737 CVE-2022-36227 CVE-2022-37434 \n CVE-2022-38149 CVE-2022-38900 CVE-2022-40023 \n CVE-2022-40303 CVE-2022-40304 CVE-2022-40897 \n CVE-2022-41316 CVE-2022-41715 CVE-2022-41717 \n CVE-2022-41723 CVE-2022-41724 CVE-2022-41725 \n CVE-2022-42010 CVE-2022-42011 CVE-2022-42012 \n CVE-2022-42898 CVE-2022-42919 CVE-2022-43680 \n CVE-2022-45061 CVE-2022-45873 CVE-2022-46175 \n CVE-2022-47024 CVE-2022-47629 CVE-2022-48303 \n CVE-2022-48337 CVE-2022-48338 CVE-2022-48339 \n CVE-2023-0361 CVE-2023-0620 CVE-2023-0665 \n CVE-2023-2491 CVE-2023-22809 CVE-2023-24329 \n CVE-2023-24999 CVE-2023-25000 CVE-2023-25136 \n=====================================================================\n\n1. Summary:\n\nUpdated images that include numerous enhancements, security, and bug fixes\nare now available in Red Hat Container Registry for Red Hat OpenShift Data\nFoundation 4.13.0 on Red Hat Enterprise Linux 9. \n\nRed Hat Product Security has rated this update as having a security impact\nof Important. A Common Vulnerability Scoring System (CVSS) base score,\nwhich gives a detailed severity rating, is available for each vulnerability\nfrom the CVE link(s) in the References section. \n\n2. Description:\n\nRed Hat OpenShift Data Foundation is software-defined storage integrated\nwith and optimized for the Red Hat OpenShift Container Platform. Red Hat\nOpenShift Data Foundation is a highly scalable, production-grade persistent\nstorage for stateful applications running in the Red Hat OpenShift\nContainer Platform. In addition to persistent storage, Red Hat OpenShift\nData Foundation provisions a multicloud data management service with an S3\ncompatible API. 
\n\nSecurity Fix(es):\n\n* goutils: RandomAlphaNumeric and CryptoRandomAlphaNumeric are not as\nrandom as they should be (CVE-2021-4238)\n\n* decode-uri-component: improper input validation resulting in DoS\n(CVE-2022-38900)\n\n* vault: Hashicorp Vault AWS IAM Integration Authentication Bypass\n(CVE-2020-16250)\n\n* vault: GCP Auth Method Allows Authentication Bypass (CVE-2020-16251)\n\n* nodejs-ansi-regex: Regular expression denial of service (ReDoS) matching\nANSI escape codes (CVE-2021-3807)\n\n* go-yaml: Denial of Service in go-yaml (CVE-2021-4235)\n\n* vault: incorrect policy enforcement (CVE-2021-43998)\n\n* nodejs: Improper handling of URI Subject Alternative Names\n(CVE-2021-44531)\n\n* nodejs: Certificate Verification Bypass via String Injection\n(CVE-2021-44532)\n\n* nodejs: Incorrect handling of certificate subject and issuer fields\n(CVE-2021-44533)\n\n* golang: archive/tar: unbounded memory consumption when reading headers\n(CVE-2022-2879)\n\n* golang: net/http/httputil: ReverseProxy should not forward unparseable\nquery parameters (CVE-2022-2880)\n\n* nodejs-minimatch: ReDoS via the braceExpand function (CVE-2022-3517)\n\n* jsonwebtoken: Insecure default algorithm in jwt.verify() could lead to\nsignature validation bypass (CVE-2022-23540)\n\n* jsonwebtoken: Insecure implementation of key retrieval function could\nlead to Forgeable Public/Private Tokens from RSA to HMAC (CVE-2022-23541)\n\n* golang: net/http: handle server errors after sending GOAWAY\n(CVE-2022-27664)\n\n* golang: encoding/gob: stack exhaustion in Decoder.Decode (CVE-2022-30635)\n\n* golang: net/url: JoinPath does not strip relative path components in all\ncircumstances (CVE-2022-32190)\n\n* consul: Consul Template May Expose Vault Secrets When Processing Invalid\nInput (CVE-2022-38149)\n\n* vault: insufficient certificate revocation list checking (CVE-2022-41316)\n\n* golang: regexp/syntax: limit memory used by parsing regexps\n(CVE-2022-41715)\n\n* golang: net/http: excessive memory 
growth in a Go server accepting HTTP/2\nrequests (CVE-2022-41717)\n\n* net/http, golang.org/x/net/http2: avoid quadratic complexity in HPACK\ndecoding (CVE-2022-41723)\n\n* golang: crypto/tls: large handshake records may cause panics\n(CVE-2022-41724)\n\n* golang: net/http, mime/multipart: denial of service from excessive\nresource consumption (CVE-2022-41725)\n\n* json5: Prototype Pollution in JSON5 via Parse Method (CVE-2022-46175)\n\n* vault: Vault\u2019s Microsoft SQL Database Storage Backend Vulnerable to SQL\nInjection Via Configuration File (CVE-2023-0620)\n\n* hashicorp/vault: Vault\u2019s PKI Issuer Endpoint Did Not Correctly Authorize\nAccess to Issuer Metadata (CVE-2023-0665)\n\n* Hashicorp/vault: Vault Fails to Verify if Approle SecretID Belongs to\nRole During a Destroy Operation (CVE-2023-24999)\n\n* hashicorp/vault: Cache-Timing Attacks During Seal and Unseal Operations\n(CVE-2023-25000)\n\n* validator: Inefficient Regular Expression Complexity in Validator.js\n(CVE-2021-3765)\n\n* nodejs: Prototype pollution via console.table properties (CVE-2022-21824)\n\n* golang: math/big: decoding big.Float and big.Rat types can panic if the\nencoded message is too short, potentially allowing a denial of service\n(CVE-2022-32189)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\n3. Solution:\n\nThese updated images include numerous enhancements and bug fixes. Space\nprecludes documenting all of these changes in this advisory. Users are\ndirected to the Red Hat OpenShift Data Foundation Release Notes for\ninformation on the most significant of these changes:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_openshift_data_foundation/4.13/html/4.13_release_notes/index\n\nAll Red Hat OpenShift Data Foundation users are advised to upgrade to these\nupdated images that provide numerous bug fixes and enhancements. 
\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n1786696 - UI-\u003eDashboards-\u003eOverview-\u003eAlerts shows MON components are at different versions, though they are NOT\n1855339 - Wrong version of ocs-storagecluster\n1943137 - [Tracker for BZ #1945618] rbd: Storage is not reclaimed after persistentvolumeclaim and job that utilized it are deleted\n1944687 - [RFE] KMS server connection lost alert\n1989088 - [4.8][Multus] UX experience issues and enhancements\n2005040 - Uninstallation of ODF StorageSystem via OCP Console fails, gets stuck in Terminating state\n2005830 - [DR] DRPolicy resource should not be editable after creation\n2007557 - CVE-2021-3807 nodejs-ansi-regex: Regular expression denial of service (ReDoS) matching ANSI escape codes\n2028193 - CVE-2021-43998 vault: incorrect policy enforcement\n2040839 - CVE-2021-44531 nodejs: Improper handling of URI Subject Alternative Names\n2040846 - CVE-2021-44532 nodejs: Certificate Verification Bypass via String Injection\n2040856 - CVE-2021-44533 nodejs: Incorrect handling of certificate subject and issuer fields\n2040862 - CVE-2022-21824 nodejs: Prototype pollution via console.table properties\n2042914 - [Tracker for BZ #2013109] [UI] Refreshing web console from the pop-up is taking to Install Operator page. 
\n2052252 - CVE-2021-44531 CVE-2021-44532 CVE-2021-44533 CVE-2022-21824 [CVE] nodejs: various flaws [openshift-data-foundation-4]\n2101497 - ceph_mon_metadata metrics are not collected properly\n2101916 - must-gather is not collecting ceph logs or coredumps\n2102304 - [GSS] Remove the entry of removed node from Storagecluster under Node Topology\n2104148 - route ocs-storagecluster-cephobjectstore misconfigured to use http and https on same http route in haproxy.config\n2107388 - CVE-2022-30635 golang: encoding/gob: stack exhaustion in Decoder.Decode\n2113814 - CVE-2022-32189 golang: math/big: decoding big.Float and big.Rat types can panic if the encoded message is too short, potentially allowing a denial of service\n2115020 - [RDR] Sync schedule is not removed from mirrorpeer yaml after DR Policy is deleted\n2115616 - [GSS] failing to change ownership of the NFS based PVC for PostgreSQL pod by using kube_pv_chown utility\n2119551 - CVE-2022-38149 consul: Consul Template May Expose Vault Secrets When Processing Invalid Input\n2120098 - [RDR] Even before an action gets fully completed, PeerReady and Available are reported as True in the DRPC yaml\n2120944 - Large Omap objects found in pool \u0027ocs-storagecluster-cephfilesystem-metadata\u0027\n2124668 - CVE-2022-32190 golang: net/url: JoinPath does not strip relative path components in all circumstances\n2124669 - CVE-2022-27664 golang: net/http: handle server errors after sending GOAWAY\n2126299 - CVE-2021-3765 validator: Inefficient Regular Expression Complexity in Validator.js\n2132867 - CVE-2022-2879 golang: archive/tar: unbounded memory consumption when reading headers\n2132868 - CVE-2022-2880 golang: net/http/httputil: ReverseProxy should not forward unparseable query parameters\n2132872 - CVE-2022-41715 golang: regexp/syntax: limit memory used by parsing regexps\n2134609 - CVE-2022-3517 nodejs-minimatch: ReDoS via the braceExpand function\n2135339 - CVE-2022-41316 vault: insufficient certificate revocation 
list checking\n2139037 - [cee/sd]Unable to access s3 via RGW route ocs-storagecluster-cephobjectstore\n2141095 - [RDR] Storage System page on ACM Hub is visible even when data observability is not enabled\n2142651 - RFE: OSDs need ability to bind to a service IP instead of the pod IP to support RBD mirroring in OCP clusters\n2142894 - Credentials are ignored when creating a Backing/Namespace store after prompted to enter a name for the resource\n2142941 - RGW cloud Transition. HEAD/GET requests to MCG are failing with 403 error\n2143944 - [GSS] unknown parameter name \"FORCE_OSD_REMOVAL\"\n2144256 - [RDR] [UI] DR Application applied to a single DRPolicy starts showing connected to multiple policies due to console flickering\n2151903 - [MCG] Azure bs/ns creation fails with target bucket does not exists\n2152143 - [Noobaa Clone] Secrets are used in env variables\n2154250 - NooBaa Bucket Quota alerts are not working\n2155507 - RBD reclaimspace job fails when the PVC is not mounted\n2155743 - ODF Dashboard fails to load\n2156067 - [RDR] [UI] When Peer Ready isn\u0027t True, UI doesn\u0027t reset the error message even when no subscription group is selected\n2156069 - [UI] Instances of OCS can be seen on BlockPool action modals\n2156263 - CVE-2022-46175 json5: Prototype Pollution in JSON5 via Parse Method\n2156519 - 4.13: odf-csi-addons-operator failed with OwnNamespace InstallModeType not supported\n2156727 - CVE-2021-4235 go-yaml: Denial of Service in go-yaml\n2156729 - CVE-2021-4238 goutils: RandomAlphaNumeric and CryptoRandomAlphaNumeric are not as random as they should be\n2157876 - [OCP Tracker] [UI] When OCP and ODF are upgraded, refresh web console pop-up doesn\u0027t appear after ODF upgrade resulting in dashboard crash\n2158922 - Namespace store fails to get created via the ODF UI\n2159676 - rbd-mirror logs are rotated very frequently, increase the default maxlogsize for rbd-mirror\n2161274 - CVE-2022-41717 golang: net/http: excessive memory growth in a Go 
server accepting HTTP/2 requests\n2161879 - logging issue when deleting webhook resources\n2161937 - collect kernel and journal logs from all worker nodes\n2162257 - [RDR][CEPHFS] sync/replication is getting stopped for some pvc\n2164617 - Unable to expand ocs-storagecluster-ceph-rbd PVCs provisioned in Filesystem mode\n2165495 - Placement scheduler is using too much resources\n2165504 - Sizer sharing link is broken\n2165929 - [RFE] ODF bluewash introduction in 4.12.x\n2165938 - ocs-operator CSV is missing disconnected env annotation. \n2165984 - [RDR] Replication stopped for images is represented with incorrect color\n2166222 - CSV is missing disconnected env annotation and relatedImages spec\n2166234 - Application user unable to invoke Failover and Relocate actions\n2166869 - Match the version of consoleplugin to odf operator\n2167299 - [RFE] ODF bluewash introduction in 4.12.x\n2167308 - [mcg-clone] Security and VA issues with ODF operator\n2167337 - CVE-2020-16250 vault: Hashicorp Vault AWS IAM Integration Authentication Bypass\n2167340 - CVE-2020-16251 vault: GCP Auth Method Allows Authentication Bypass\n2167946 - CSV is missing disconnected env annotation and relatedImages spec\n2168113 - [Ceph Tracker BZ #2141110] [cee/sd][Bluestore] Newly deployed bluestore OSD\u0027s showing high fragmentation score\n2168635 - fix redirect link to operator details page (OCS dashboard)\n2168840 - [Fusion-aaS][ODF 4.13]Within \u0027prometheus-ceph-rules\u0027 the namespace for \u0027rook-ceph-mgr\u0027 jobs should be configurable. 
\n2168849 - Must-gather doesn\u0027t collect coredump logs crucial for OSD crash events\n2169375 - CVE-2022-23541 jsonwebtoken: Insecure implementation of key retrieval function could lead to Forgeable Public/Private Tokens from RSA to HMAC\n2169378 - CVE-2022-23540 jsonwebtoken: Insecure default algorithm in jwt.verify() could lead to signature validation bypass\n2169779 - [vSphere]: rook-ceph-mon-* pvc are in pending state\n2170644 - CVE-2022-38900 decode-uri-component: improper input validation resulting in DoS\n2170673 - [RDR] Different replication states of PVC images aren\u0027t correctly distinguished and represented on UI\n2172089 - [Tracker for Ceph BZ 2174461] rook-ceph-nfs pod is stuck at status \u0027CreateContainerError\u0027 after enabling NFS in ODF 4.13\n2172365 - [csi-addons] odf-csi-addons-operator oomkilled with fresh installation 4.12\n2172521 - No OSD pods are created for 4.13 LSO deployment\n2173161 - ODF-console can not start when you disable IPv6 on Node with kernel parameter. 
\n2173528 - Creation of OCS operator tag automatically for verified commits\n2173534 - When on StorageSystem details click on History back btn it shows blank body\n2173926 - [RFE] Include changes in MCG for new Ceph RGW transition headers\n2175612 - noobaa-core-0 crashing and storagecluster not getting to ready state during ODF deployment with FIPS enabled in 4.13cluster\n2175685 - RGW OBC creation via the UI is blocked by \"Address form errors to proceed\" error\n2175714 - UI fix- capitalization\n2175867 - Rook sets cephfs kernel mount options even when mon is using v1 port\n2176080 - odf must-gather should collect output of oc get hpa -n openshift-storage\n2176456 - [RDR] ramen-hub-operator and ramen-dr-cluster-operator is going into CLBO post deployment\n2176739 - [UI] CSI Addons operator icon is broken\n2176776 - Enable save options only when the protected apps has labels for manage DRPolicy\n2176798 - [IBM Z ] Multi Cluster Orchestrator operator is not available in the Operator Hub\n2176809 - [IBM Z ] DR operator is not available in the Operator Hub\n2177134 - Next button if disabled for storage system deployment flow for IBM Ceph Storage security and network step when there is no OCS installed already\n2177221 - Enable DR dashboard only when ACM observability is enabled\n2177325 - Noobaa-db pod is taking longer time to start up in ODF 4.13\n2177695 - DR dashboard showing incorrect RPO data\n2177844 - CVE-2023-24999 Hashicorp/vault: Vault Fails to Verify if Approle SecretID Belongs to Role During a Destroy Operation\n2178033 - node topology warnings tab doesn\u0027t show pod warnings\n2178358 - CVE-2022-41723 net/http, golang.org/x/net/http2: avoid quadratic complexity in HPACK decoding\n2178488 - CVE-2022-41725 golang: net/http, mime/multipart: denial of service from excessive resource consumption\n2178492 - CVE-2022-41724 golang: crypto/tls: large handshake records may cause panics\n2178588 - No rack names on ODF Topology\n2178619 - odf-operator failing to 
resolve its sub-dependencies leaving the ocs-consumer/provider addon in a failed and halted state\n2178682 - [GSS] Add the valid AWS GovCloud regions in OCS UI. \n2179133 - [UI] A blank page appears while selecting Storage Pool for creating Encrypted Storage Class\n2179337 - Invalid storage system href link on the ODF multicluster dashboard\n2179403 - (4.13) Mons are failing to start when msgr2 is required with RHCS 6.1\n2179846 - [IBM Z] In RHCS external mode Cephobjectstore creation fails as it reports that the \"object store name cannot be longer than 38 characters\"\n2179860 - [MCG] Bucket replication with deletion sync isn\u0027t complete\n2179976 - [ODF 4.13] Missing the status-reporter binary causing pods \"report-status-to-provider\" remain in CreateContainerError on ODF to ODF cluster on ROSA\n2179981 - ODF Topology search bar mistakes to find searched node/pod\n2179997 - Topology. Exit full screen does not appear in Full screen mode\n2180211 - StorageCluster stuck in progressing state for Thales KMS deployment\n2180397 - Last sync time is missing on application set\u0027s disaster recovery status popover\n2180440 - odf-monitoring-tool. YAML file misjudged as corrupted\n2180921 - Deployment with external cluster in ODF 4.13 with unable to use cephfs as backing store for image_registry\n2181112 - [RDR] [UI] Hide disable DR functionality as it would be un-tested in 4.13\n2181133 - CI: backport E2E job improvements\n2181446 - [KMS][UI] PVC provisioning failed in case of vault kubernetes authentication is configured. 
\n2181535 - [GSS] Object storage in degraded state\n2181551 - Build: move to \u0027dependencies\u0027 the ones required for running a build\n2181832 - Create OBC via UI, placeholder on StorageClass dropped\n2181949 - [ODF Tracker] [RFE] Catch MDS damage to the dentry\u0027s first snapid\n2182041 - OCS-Operator expects NooBaa CRDs to be present on the cluster when installed directly without ODF Operator\n2182296 - [Fusion-aaS][ODF 4.13]must-gather does not collect relevant logs when storage cluster is not in openshift-storage namespace\n2182375 - [MDR] Not able to fence DR clusters\n2182644 - [IBM Z] MDR policy creation fails unless the ocs-operator pod is restarted on the managed clusters\n2182664 - Topology view should hide the sidebar when changing levels\n2182703 - [RDR] After upgrading from 4.12.2 to 4.13.0 version.odf.openshift.io cr is not getting updated with latest ODF version\n2182972 - CVE-2023-25000 hashicorp/vault: Cache-Timing Attacks During Seal and Unseal Operations\n2182981 - CVE-2023-0665 hashicorp/vault: Vault\u2019s PKI Issuer Endpoint Did Not Correctly Authorize Access to Issuer Metadata\n2183155 - failed to mount the cephfs subvolume as subvolumegroup name is not sent in the GetStorageConfig RPC call\n2183196 - [Fusion-aaS] Collect Must-gather logs from the managed-fusion agent namespace\n2183266 - [Fusion aaS Rook ODF 4.13]] Rook-ceph-operator pod should allow OBC CRDs to be optional instead of causing a crash when not present\n2183457 - [RDR] when running any ceph cmd we see error 2023-03-31T08:25:31.844+0000 7f8deaffd640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [2,1]\n2183478 - [MDR][UI] Cannot relocate subscription based apps, Appset based apps are possible to relocate\n2183520 - [Fusion-aaS] csi-cephfs-plugin pods are not created after installing ocs-client-operator\n2184068 - [Fusion-aaS] Failed to mount CephFS volumes while creating pods\n2184605 - [ODF 4.13][Fusion-aaS] OpenShift Data 
Foundation Client operator is listed in OperatorHub and installable from UI\n2184663 - CVE-2023-0620 vault: Vault\u2019s Microsoft SQL Database Storage Backend Vulnerable to SQL Injection Via Configuration File\n2184769 - [Fusion-aaS][ODF 4.13]Remove storageclassclaim cr and create new cr storageclass request cr\n2184773 - multicluster-orchestrator should not reset spec.network.multiClusterService.Enabled field added by user\n2184892 - Don\u0027t pass encryption options to ceph cluster in odf external mode to provider/consumer cluster\n2184984 - Topology Sidebar alerts panel: alerts accordion does not toggle when clicking on alert severity text\n2185164 - [KMS][VAULT] PVC provisioning is failing when the Vault (HCP) Kubernetes authentication is set. \n2185188 - Fix storagecluster watch request for OCSInitialization\n2185757 - add NFS dashboard\n2185871 - [MDR][ACM-Tracker] Deleting an Appset based application does not delete its placement\n2186171 - [GSS] \"disableLoadBalancerService: true\" config is reconciled after modifying the number of NooBaa endpoints\n2186225 - [RDR] when running any ceph cmd we see error 2023-03-31T08:25:31.844+0000 7f8deaffd640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [2,1]\n2186475 - handle different network connection spec \u0026 Pass appropriate options for all the cases of Network Spec\n2186752 - [translations] add translations for 4.13\n2187251 - sync ocs and odf with the latest rook\n2187296 - [MCG] Can\u0027t opt out of deletions sync once log-based replication with deletions sync is set\n2187736 - [RDR] Replication history graph is showing incorrect value\n2187952 - When cluster controller is cancelled frequently, multiple simultaneous controllers cause issues since need to wait for shutdown before continuing new controller\n2187969 - [ODFMS-Migration ] [OCS Client Operator] csi-rbdplugin stuck in ImagePullBackOff on consumer clusters after Migration\n2187986 - [MDR] 
ramen-dr-cluster-operator pod is in CLBO after assigning dr policy to an appset based app\n2188053 - ocs-metrics-exporter cannot list/watch StorageCluster, StorageClass, CephBlockPool and other resources\n2188238 - [RDR] Avoid using the terminologies \"SLA\" in DR dashboard\n2188303 - [RDR] Maintenance mode is not enabled after initiating failover action\n2188427 - [External mode upgrade]: Upgrade from 4.12 -\u003e 4.13 external mode is failing because rook-ceph-operator is not reaching clean state\n2188666 - wrong label in new storageclassrequest cr\n2189483 - After upgrade noobaa-db-pg-0 pod using old image in one of container\n2189929 - [RDR/MDR] [UI] Dashboard font size are very uneven\n2189982 - [RDR] ocs_rbd_client_blocklisted datapoints and the corresponding alert is not getting generated\n2189984 - [KMS][VAULT] Storage cluster remains in \u0027Progressing\u0027 state during deployment with storage class encryption, despite all pods being up and running. \n2190129 - OCS Provider Server logs are incorrect\n2190241 - nfs metric details are unavailable and server health is displaying as \"Degraded\" under Network file system tab in UI\n2192088 - [IBM P] rbd_default_map_options value not set to ms_mode=secure in in-transit encryption enabled ODF cluster\n2192670 - Details tab for nodes inside Topology throws \"Something went wrong\" on IBM Power platform\n2192824 - [4.13] Fix Multisite in external cluster\n2192875 - Enable ceph-exporter in rook\n2193114 - MCG replication is failing due to OC binary incompatible on Power platform\n2193220 - [Stretch cluster] CephCluster is updated frequently due to changing ordering of zones\n2196176 - MULTUS UI, There is no option to change the multus configuration after we configure the params\n2196236 - [RDR] With ACM 2.8 User is not able to apply Drpolicy to subscription workload\n2196298 - [RDR] DRPolicy doesn\u0027t show connected application when subscription based workloads are deployed via CLI\n2203795 - ODF Monitoring is 
missing some of the ceph_* metric values\n2208029 - nfs server health is always displaying as \"Degraded\" under Network file system tab in UI. \n2208079 - rbd mirror daemon is commonly not upgraded\n2208269 - [RHCS Tracker] After add capacity the rebalance does not complete, and we see 2 PGs in active+clean+scrubbing and 1 active+clean+scrubbing+deep\n2208558 - [MDR] ramen-dr-cluster-operator pod crashes during failover\n2208962 - [UI] ODF Topology. Degraded cluster don\u0027t show red canvas on cluster level\n2209364 - ODF dashboard crashes when OCP and ODF are upgraded\n2209643 - Multus, Cephobjectstore stuck on Progressing state because \" failed to create or retrieve rgw admin ops user\"\n2209695 - When collecting Must-gather logs shows /usr/bin/gather_ceph_resources: line 341: jq: command not found\n2210964 - [UI][MDR] After hub recovery in overview tab of data policies Application set apps count is not showing\n2211334 - The replication history graph is very unclear\n2211343 - [MCG-Only]: upgrade failed from 4.12 to 4.13 due to missing CSI_ENABLE_READ_AFFINITY in ConfigMap openshift-storage/ocs-operator-config\n2211704 - Multipart uploads fail to a Azure namespace bucket when user MD is sent as part of the upload\n\n5. 
References:\n\nhttps://access.redhat.com/security/cve/CVE-2015-20107\nhttps://access.redhat.com/security/cve/CVE-2018-25032\nhttps://access.redhat.com/security/cve/CVE-2020-10735\nhttps://access.redhat.com/security/cve/CVE-2020-16250\nhttps://access.redhat.com/security/cve/CVE-2020-16251\nhttps://access.redhat.com/security/cve/CVE-2020-17049\nhttps://access.redhat.com/security/cve/CVE-2021-3765\nhttps://access.redhat.com/security/cve/CVE-2021-3807\nhttps://access.redhat.com/security/cve/CVE-2021-4231\nhttps://access.redhat.com/security/cve/CVE-2021-4235\nhttps://access.redhat.com/security/cve/CVE-2021-4238\nhttps://access.redhat.com/security/cve/CVE-2021-28861\nhttps://access.redhat.com/security/cve/CVE-2021-43519\nhttps://access.redhat.com/security/cve/CVE-2021-43998\nhttps://access.redhat.com/security/cve/CVE-2021-44531\nhttps://access.redhat.com/security/cve/CVE-2021-44532\nhttps://access.redhat.com/security/cve/CVE-2021-44533\nhttps://access.redhat.com/security/cve/CVE-2021-44964\nhttps://access.redhat.com/security/cve/CVE-2021-46828\nhttps://access.redhat.com/security/cve/CVE-2021-46848\nhttps://access.redhat.com/security/cve/CVE-2022-0670\nhttps://access.redhat.com/security/cve/CVE-2022-1271\nhttps://access.redhat.com/security/cve/CVE-2022-1304\nhttps://access.redhat.com/security/cve/CVE-2022-1348\nhttps://access.redhat.com/security/cve/CVE-2022-1586\nhttps://access.redhat.com/security/cve/CVE-2022-1587\nhttps://access.redhat.com/security/cve/CVE-2022-2309\nhttps://access.redhat.com/security/cve/CVE-2022-2509\nhttps://access.redhat.com/security/cve/CVE-2022-2795\nhttps://access.redhat.com/security/cve/CVE-2022-2879\nhttps://access.redhat.com/security/cve/CVE-2022-2880\nhttps://access.redhat.com/security/cve/CVE-2022-3094\nhttps://access.redhat.com/security/cve/CVE-2022-3358\nhttps://access.redhat.com/security/cve/CVE-2022-3515\nhttps://access.redhat.com/security/cve/CVE-2022-3517\nhttps://access.redhat.com/security/cve/CVE-2022-3715\nhttps://access.redhat.com/
security/cve/CVE-2022-3736
https://access.redhat.com/security/cve/CVE-2022-3821
https://access.redhat.com/security/cve/CVE-2022-3924
https://access.redhat.com/security/cve/CVE-2022-4415
https://access.redhat.com/security/cve/CVE-2022-21824
https://access.redhat.com/security/cve/CVE-2022-23540
https://access.redhat.com/security/cve/CVE-2022-23541
https://access.redhat.com/security/cve/CVE-2022-24903
https://access.redhat.com/security/cve/CVE-2022-26280
https://access.redhat.com/security/cve/CVE-2022-27664
https://access.redhat.com/security/cve/CVE-2022-28805
https://access.redhat.com/security/cve/CVE-2022-29154
https://access.redhat.com/security/cve/CVE-2022-30635
https://access.redhat.com/security/cve/CVE-2022-31129
https://access.redhat.com/security/cve/CVE-2022-32189
https://access.redhat.com/security/cve/CVE-2022-32190
https://access.redhat.com/security/cve/CVE-2022-33099
https://access.redhat.com/security/cve/CVE-2022-34903
https://access.redhat.com/security/cve/CVE-2022-35737
https://access.redhat.com/security/cve/CVE-2022-36227
https://access.redhat.com/security/cve/CVE-2022-37434
https://access.redhat.com/security/cve/CVE-2022-38149
https://access.redhat.com/security/cve/CVE-2022-38900
https://access.redhat.com/security/cve/CVE-2022-40023
https://access.redhat.com/security/cve/CVE-2022-40303
https://access.redhat.com/security/cve/CVE-2022-40304
https://access.redhat.com/security/cve/CVE-2022-40897
https://access.redhat.com/security/cve/CVE-2022-41316
https://access.redhat.com/security/cve/CVE-2022-41715
https://access.redhat.com/security/cve/CVE-2022-41717
https://access.redhat.com/security/cve/CVE-2022-41723
https://access.redhat.com/security/cve/CVE-2022-41724
https://access.redhat.com/security/cve/CVE-2022-41725
https://access.redhat.com/security/cve/CVE-2022-42010
https://access.redhat.com/security/cve/CVE-2022-42011
https://access.redhat.com/security/cve/CVE-2022-42012
https://access.redhat.com/security/cve/CVE-2022-42898
https://access.redhat.com/security/cve/CVE-2022-42919
https://access.redhat.com/security/cve/CVE-2022-43680
https://access.redhat.com/security/cve/CVE-2022-45061
https://access.redhat.com/security/cve/CVE-2022-45873
https://access.redhat.com/security/cve/CVE-2022-46175
https://access.redhat.com/security/cve/CVE-2022-47024
https://access.redhat.com/security/cve/CVE-2022-47629
https://access.redhat.com/security/cve/CVE-2022-48303
https://access.redhat.com/security/cve/CVE-2022-48337
https://access.redhat.com/security/cve/CVE-2022-48338
https://access.redhat.com/security/cve/CVE-2022-48339
https://access.redhat.com/security/cve/CVE-2023-0361
https://access.redhat.com/security/cve/CVE-2023-0620
https://access.redhat.com/security/cve/CVE-2023-0665
https://access.redhat.com/security/cve/CVE-2023-2491
https://access.redhat.com/security/cve/CVE-2023-22809
https://access.redhat.com/security/cve/CVE-2023-24329
https://access.redhat.com/security/cve/CVE-2023-24999
https://access.redhat.com/security/cve/CVE-2023-25000
https://access.redhat.com/security/cve/CVE-2023-25136
https://access.redhat.com/security/updates/classification/#important
https://access.redhat.com/documentation/en-us/red_hat_openshift_data_foundation/4.13/html/4.13_release_notes/index

6. Contact:

The Red Hat security contact is <secalert@redhat.com>. More contact
details at https://access.redhat.com/security/team/contact/

Copyright 2023 Red Hat, Inc.
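Among the Python fixes listed above, CVE-2023-24329 is an access-control bypass in urllib.parse: on unpatched interpreters, a URL prefixed with a blank character is parsed with an empty scheme, so scheme-based blocklists built on the parser's output can be evaded. The sketch below is illustrative only (the `scheme_is_blocked` helper and the `file://` payload are assumptions, not code from the advisory); patched interpreters strip the leading blanks before parsing.

```python
from urllib.parse import urlsplit

def scheme_is_blocked(url: str) -> bool:
    # Naive blocklist of the kind CVE-2023-24329 bypasses: it trusts
    # whatever scheme urlsplit() reports.
    return urlsplit(url).scheme in {"file", "ftp"}

# A well-formed URL parses as expected and is caught by the blocklist.
assert urlsplit("file:///etc/passwd").scheme == "file"
assert scheme_is_blocked("file:///etc/passwd")

# On interpreters without the fix, a leading blank character made
# urlsplit() report an empty scheme, so the same URL slipped past the
# blocklist even though many URL fetchers still honored the scheme.
evaded = not scheme_is_blocked(" file:///etc/passwd")
print("blocklist bypassed:", evaded)
```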
Bug Fix(es):

* Cloning a Block DV to VM with Filesystem with not big enough size comes
to endless loop - using pvc api (BZ#2033191)

* Restart of VM Pod causes SSH keys to be regenerated within VM
(BZ#2087177)

* Import gzipped raw file causes image to be downloaded and uncompressed to
TMPDIR (BZ#2089391)

* [4.11] VM Snapshot Restore hangs indefinitely when backed by a
snapshotclass (BZ#2098225)

* Fedora version in DataImportCrons is not 'latest' (BZ#2102694)

* [4.11] Cloned VM's snapshot restore fails if the source VM disk is
deleted (BZ#2109407)

* CNV introduces a compliance check fail in "ocp4-moderate" profile -
routes-protected-by-tls (BZ#2110562)

* Nightly build: v4.11.0-578: index format was changed in 4.11 to
file-based instead of sqlite-based (BZ#2112643)

* Unable to start windows VMs on PSI setups (BZ#2115371)

* [4.11.1]virt-launcher cannot be started on OCP 4.12 due to PodSecurity
restricted:v1.24 (BZ#2128997)

* Mark Windows 11 as TechPreview (BZ#2129013)

* 4.11.1 rpms (BZ#2139453)

This advisory contains the following OpenShift Virtualization 4.11.1
images.

RHEL-8-CNV-4.11

virt-cdi-operator-container-v4.11.1-5
virt-cdi-uploadserver-container-v4.11.1-5
virt-cdi-apiserver-container-v4.11.1-5
virt-cdi-importer-container-v4.11.1-5
virt-cdi-controller-container-v4.11.1-5
virt-cdi-cloner-container-v4.11.1-5
virt-cdi-uploadproxy-container-v4.11.1-5
checkup-framework-container-v4.11.1-3
kubevirt-tekton-tasks-wait-for-vmi-status-container-v4.11.1-7
kubevirt-tekton-tasks-create-datavolume-container-v4.11.1-7
kubevirt-template-validator-container-v4.11.1-4
virt-handler-container-v4.11.1-5
hostpath-provisioner-operator-container-v4.11.1-4
virt-api-container-v4.11.1-5
vm-network-latency-checkup-container-v4.11.1-3
cluster-network-addons-operator-container-v4.11.1-5
virtio-win-container-v4.11.1-4
virt-launcher-container-v4.11.1-5
ovs-cni-marker-container-v4.11.1-5
hyperconverged-cluster-webhook-container-v4.11.1-7
virt-controller-container-v4.11.1-5
virt-artifacts-server-container-v4.11.1-5
kubevirt-tekton-tasks-modify-vm-template-container-v4.11.1-7
kubevirt-tekton-tasks-disk-virt-customize-container-v4.11.1-7
libguestfs-tools-container-v4.11.1-5
hostpath-provisioner-container-v4.11.1-4
kubevirt-tekton-tasks-disk-virt-sysprep-container-v4.11.1-7
kubevirt-tekton-tasks-copy-template-container-v4.11.1-7
cnv-containernetworking-plugins-container-v4.11.1-5
bridge-marker-container-v4.11.1-5
virt-operator-container-v4.11.1-5
hostpath-csi-driver-container-v4.11.1-4
kubevirt-tekton-tasks-create-vm-from-template-container-v4.11.1-7
kubemacpool-container-v4.11.1-5
hyperconverged-cluster-operator-container-v4.11.1-7
kubevirt-ssp-operator-container-v4.11.1-4
ovs-cni-plugin-container-v4.11.1-5
kubevirt-tekton-tasks-cleanup-vm-container-v4.11.1-7
kubevirt-tekton-tasks-operator-container-v4.11.1-2
cnv-must-gather-container-v4.11.1-8
kubevirt-console-plugin-container-v4.11.1-9
hco-bundle-registry-container-v4.11.1-49

3. Bugs fixed (https://bugzilla.redhat.com/):

2064698 - CVE-2020-36518 jackson-databind: denial of service via a large depth of nested objects
2135244 - CVE-2022-42003 jackson-databind: deep wrapper array nesting wrt UNWRAP_SINGLE_VALUE_ARRAYS
2135247 - CVE-2022-42004 jackson-databind: use of deeply nested arrays

5. JIRA issues fixed (https://issues.jboss.org/):

LOG-3293 - log-file-metric-exporter container has not limits exhausting the resources of the node

6. Description:

Submariner enables direct networking between pods and services on different
Kubernetes clusters that are either on-premises or in the cloud.

For more information about Submariner, see the Submariner open source
community website at: https://submariner.io/.

Security fixes:

* CVE-2022-27664 golang: net/http: handle server errors after sending
GOAWAY
* CVE-2022-2880 golang: net/http/httputil: ReverseProxy should not forward
unparseable query parameters
* CVE-2022-41715 golang: regexp/syntax: limit memory used by parsing
regexps
* CVE-2022-41717 golang: net/http: An attacker can cause excessive memory
growth in a Go server accepting HTTP/2 requests

Bugs addressed:

* subctl diagnose firewall metrics does not work on merged kubeconfig (BZ# 2013711)
* [Submariner] - Fails to increase gateway amount after deployment (BZ# 2097381)
* Submariner gateway node does not get deleted with subctl cloud cleanup command (BZ# 2108634)
* submariner GW pods are unable to resolve the DNS of the Broker K8s API URL (BZ# 2119362)
* Submariner gateway node does not get deployed after applying ManagedClusterAddOn on Openstack (BZ# 2124219)
* unable to run subctl benchmark latency, pods fail with ImagePullBackOff (BZ# 2130326)
* [IBM Z] - Submariner addon uninstallation doesn't work from ACM console (BZ# 2136442)
* Tags on AWS security group for gateway node break cloud-controller LoadBalancer (BZ# 2139477)
* RHACM - Submariner: UI support for
OpenStack #19297 (ACM-1242)
* Submariner OVN support (ACM-1358)
* Submariner Azure Console support (ACM-1388)
* ManagedClusterSet consumers migrate to v1beta2 (ACM-1614)
* Submariner on disconnected ACM #22000 (ACM-1678)
* Submariner gateway: Error creating AWS security group if already exists (ACM-2055)
* Submariner gateway security group in AWS not deleted when uninstalling submariner (ACM-2057)
* The submariner-metrics-proxy pod pulls an image with wrong naming convention (ACM-2058)
* The submariner-metrics-proxy pod is not part of the Agent readiness check (ACM-2067)
* Subctl 0.14.0 prints version "vsubctl" (ACM-2132)
* managedclusters "local-cluster" not found and missing Submariner Broker CRD (ACM-2145)
* Add support of ARO to Submariner deployment (ACM-2150)
* The e2e tests execution fails for "Basic TCP connectivity" tests (ACM-2204)
* Gateway error shown "diagnose all" tests (ACM-2206)
* Submariner does not support cluster "kube-proxy ipvs mode" (ACM-2211)
* Vsphere cluster shows Pod Security admission controller warnings (ACM-2256)
* Cannot use submariner with OSP and self signed certs (ACM-2274)
* Subctl diagnose tests spawn nettest image with wrong tag naming convention (ACM-2387)
* Subctl 0.14.1 prints version "devel" (ACM-2482)

3. Bugs fixed (https://bugzilla.redhat.com/):

2013711 - subctl diagnose firewall metrics does not work on merged kubeconfig
2097381 - [Submariner] - Fails to increase gateway amount after deployment
2108634 - Submariner gateway node does not get deleted with subctl cloud cleanup command
2119362 - submariner GW pods are unable to resolve the DNS of the Broker K8s API URL
2124219 - Submariner gateway node does not get deployed after applying ManagedClusterAddOn on Openstack
2124669 - CVE-2022-27664 golang: net/http: handle server errors after sending GOAWAY
2130326 - unable to run subctl benchmark latency, pods fail with ImagePullBackOff
2132868 - CVE-2022-2880 golang: net/http/httputil: ReverseProxy should not forward unparseable query parameters
2132872 - CVE-2022-41715 golang: regexp/syntax: limit memory used by parsing regexps
2136442 - [IBM Z] - Submariner addon uninstallation doesn't work from ACM console
2139477 - Tags on AWS security group for gateway node break cloud-controller LoadBalancer
2161274 - CVE-2022-41717 golang: net/http: An attacker can cause excessive memory growth in a Go server accepting HTTP/2 requests

5. JIRA issues fixed (https://issues.jboss.org/):

ACM-1614 - ManagedClusterSet consumers migrate to v1beta2 (Submariner)
ACM-2055 - Submariner gateway: Error creating AWS security group if already exists
ACM-2057 - [Submariner] - submariner gateway security group in aws not deleted when uninstalling submariner
ACM-2058 - [Submariner] - The submariner-metrics-proxy pod pulls an image with wrong naming convention
ACM-2067 - [Submariner] - The submariner-metrics-proxy pod is not part of the Agent readiness check
ACM-2132 - Subctl 0.14.0 prints version "vsubctl"
ACM-2145 - managedclusters "local-cluster" not found and missing Submariner Broker CRD
ACM-2150 - Add support of ARO to Submariner deployment
ACM-2204 - [Submariner] - e2e tests execution fails for "Basic TCP connectivity" tests
ACM-2206 - [Submariner] - Gateway error shown "diagnose all" tests
ACM-2211 - [Submariner] - Submariner does not support cluster "kube-proxy ipvs mode"
ACM-2256 - [Submariner] - Vsphere cluster shows Pod Security admission controller warnings
ACM-2274 - Cannot use submariner with OSP and self signed certs
ACM-2387 - [Submariner] - subctl diagnose tests spawn nettest image with wrong tag naming convention
ACM-2482 - Subctl 0.14.1 prints version "devel"

6.
This advisory contains the following
OpenShift Virtualization 4.12.0 images:

Security Fix(es):

* golang: net/http: limit growth of header canonicalization cache
(CVE-2021-44716)

* kubeVirt: Arbitrary file read on the host from KubeVirt VMs
(CVE-2022-1798)

* golang: out-of-bounds read in golang.org/x/text/language leads to DoS
(CVE-2021-38561)

* golang: syscall: don't close fd 0 on ForkExec error (CVE-2021-44717)

* golang: net/http: improper sanitization of Transfer-Encoding header
(CVE-2022-1705)

* golang: go/parser: stack exhaustion in all Parse* functions
(CVE-2022-1962)

* golang: math/big: uncontrolled memory consumption due to an unhandled
overflow via Rat.SetString (CVE-2022-23772)

* golang: cmd/go: misinterpretation of branch names can lead to incorrect
access control (CVE-2022-23773)

* golang: crypto/elliptic: IsOnCurve returns true for invalid field
elements (CVE-2022-23806)

* golang: encoding/xml: stack exhaustion in Decoder.Skip (CVE-2022-28131)

* golang: syscall: faccessat checks wrong group (CVE-2022-29526)

* golang: io/fs: stack exhaustion in Glob (CVE-2022-30630)

* golang: compress/gzip: stack exhaustion in Reader.Read (CVE-2022-30631)

* golang: path/filepath: stack exhaustion in Glob (CVE-2022-30632)

* golang: encoding/xml: stack exhaustion in Unmarshal (CVE-2022-30633)

* golang: encoding/gob: stack exhaustion in Decoder.Decode (CVE-2022-30635)

* golang: net/http/httputil: NewSingleHostReverseProxy - omit
X-Forwarded-For not working (CVE-2022-32148)

* golang: crypto/tls: session tickets lack random ticket_age_add
(CVE-2022-30629)

For more details about the security issue(s), including the impact, a CVSS
score, acknowledgments, and other related information, refer to the CVE
page(s) listed in the References section.
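Several of the Go CVEs above (CVE-2022-1962, CVE-2022-28131, CVE-2022-30630 through CVE-2022-30635) share one failure class: a recursive parser driven to stack exhaustion by deeply nested attacker-controlled input. The Go code itself is not reproduced here; as a rough, language-agnostic illustration of the class, Python's recursive JSON decoder trips its recursion guard on the same kind of payload, which is essentially the behavior the Go fixes added via explicit depth limits.

```python
import json

# Hostile input for a recursive-descent parser: 100,000 nested arrays.
depth = 100_000
payload = "[" * depth + "]" * depth

# CPython's json decoder recurses once per nesting level; its recursion
# guard converts the stack-exhaustion attempt into a catchable error
# instead of crashing the process.
try:
    json.loads(payload)
    result = "parsed"
except RecursionError:
    result = "rejected: nesting depth exceeded recursion limit"
print(result)
```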
\n\nRHEL-8-CNV-4.12\n\n=============\nbridge-marker-container-v4.12.0-24\ncluster-network-addons-operator-container-v4.12.0-24\ncnv-containernetworking-plugins-container-v4.12.0-24\ncnv-must-gather-container-v4.12.0-58\nhco-bundle-registry-container-v4.12.0-769\nhostpath-csi-driver-container-v4.12.0-30\nhostpath-provisioner-container-v4.12.0-30\nhostpath-provisioner-operator-container-v4.12.0-31\nhyperconverged-cluster-operator-container-v4.12.0-96\nhyperconverged-cluster-webhook-container-v4.12.0-96\nkubemacpool-container-v4.12.0-24\nkubevirt-console-plugin-container-v4.12.0-182\nkubevirt-ssp-operator-container-v4.12.0-64\nkubevirt-tekton-tasks-cleanup-vm-container-v4.12.0-55\nkubevirt-tekton-tasks-copy-template-container-v4.12.0-55\nkubevirt-tekton-tasks-create-datavolume-container-v4.12.0-55\nkubevirt-tekton-tasks-create-vm-from-template-container-v4.12.0-55\nkubevirt-tekton-tasks-disk-virt-customize-container-v4.12.0-55\nkubevirt-tekton-tasks-disk-virt-sysprep-container-v4.12.0-55\nkubevirt-tekton-tasks-modify-vm-template-container-v4.12.0-55\nkubevirt-tekton-tasks-operator-container-v4.12.0-40\nkubevirt-tekton-tasks-wait-for-vmi-status-container-v4.12.0-55\nkubevirt-template-validator-container-v4.12.0-32\nlibguestfs-tools-container-v4.12.0-255\novs-cni-marker-container-v4.12.0-24\novs-cni-plugin-container-v4.12.0-24\nvirt-api-container-v4.12.0-255\nvirt-artifacts-server-container-v4.12.0-255\nvirt-cdi-apiserver-container-v4.12.0-72\nvirt-cdi-cloner-container-v4.12.0-72\nvirt-cdi-controller-container-v4.12.0-72\nvirt-cdi-importer-container-v4.12.0-72\nvirt-cdi-operator-container-v4.12.0-72\nvirt-cdi-uploadproxy-container-v4.12.0-71\nvirt-cdi-uploadserver-container-v4.12.0-72\nvirt-controller-container-v4.12.0-255\nvirt-exportproxy-container-v4.12.0-255\nvirt-exportserver-container-v4.12.0-255\nvirt-handler-container-v4.12.0-255\nvirt-launcher-container-v4.12.0-255\nvirt-operator-container-v4.12.0-255\nvirtio-win-container-v4.12.0-10\nvm-network-latency-checkup-
container-v4.12.0-89\n\n3. Solution:\n\nBefore applying this update, you must apply all previously released errata\nrelevant to your system. \n\nTo apply this update, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n1719190 - Unable to cancel live-migration if virt-launcher pod in pending state\n2023393 - [CNV] [UI]Additional information needed for cloning when default storageclass in not defined in target datavolume\n2030801 - CVE-2021-44716 golang: net/http: limit growth of header canonicalization cache\n2030806 - CVE-2021-44717 golang: syscall: don\u0027t close fd 0 on ForkExec error\n2040377 - Unable to delete failed VMIM after VM deleted\n2046298 - mdevs not configured with drivers installed, if mdev config added to HCO CR before drivers are installed\n2052556 - Metric \"kubevirt_num_virt_handlers_by_node_running_virt_launcher\" reporting incorrect value\n2053429 - CVE-2022-23806 golang: crypto/elliptic: IsOnCurve returns true for invalid field elements\n2053532 - CVE-2022-23772 golang: math/big: uncontrolled memory consumption due to an unhandled overflow via Rat.SetString\n2053541 - CVE-2022-23773 golang: cmd/go: misinterpretation of branch names can lead to incorrect access control\n2060499 - [RFE] Cannot add additional service (or other objects) to VM template\n2069098 - Large scale |VMs migration is slow due to low migration parallelism\n2070366 - VM Snapshot Restore hangs indefinitely when backed by a snapshotclass\n2071491 - Storage Throughput metrics are incorrect in Overview\n2072797 - Metrics in Virtualization -\u003e Overview period is not clear or configurable\n2072821 - Top Consumers of Storage Traffic in Kubevirt Dashboard giving unexpected numbers\n2079916 - KubeVirt CR seems to be in DeploymentInProgress state and not recovering\n2084085 - CVE-2022-29526 golang: syscall: faccessat checks wrong group\n2086285 - [dark mode] VirtualMachine - in the Utilization card the percentages and the 
graphs not visible enough in dark mode\n2086551 - Min CPU feature found in labels\n2087724 - Default template show no boot source even there are auto-upload boot sources\n2088129 - [SSP] webhook does not comply with restricted security context\n2088464 - [CDI] cdi-deployment does not comply with restricted security context\n2089391 - Import gzipped raw file causes image to be downloaded and uncompressed to TMPDIR\n2089744 - HCO should label its control plane namespace to admit pods at privileged security level\n2089751 - 4.12.0 containers\n2089804 - 4.12.0 rpms\n2091856 - ?Edit BootSource? action should have more explicit information when disabled\n2092793 - CVE-2022-30629 golang: crypto/tls: session tickets lack random ticket_age_add\n2092796 - [RFE] CPU|Memory display in the template card is not consistent with the display in the template drawer\n2093771 - The disk source should be PVC if the template has no auto-update boot source\n2093996 - kubectl get vmi API should always return primary interface if exist\n2094202 - Cloud-init username field should have hint\n2096285 - KubeVirt CR API documentation is missing docs for many fields\n2096780 - [RFE] Add ssh-key and sysprep to template scripts tab\n2097436 - Online disk expansion ignores filesystem overhead change\n2097586 - AccessMode should stay on ReadWriteOnce while editing a disk with storage class HPP\n2099556 - [RFE] Add option to enable RDP service for windows vm\n2099573 - [RFE] Improve template\u0027s message about not editable\n2099923 - [RFE] Merge \"SSH access\" and \"SSH command\" into one\n2100290 - Error is not dismissed on catalog review page\n2100436 - VM list filtering ignores VMs in error-states\n2100442 - [RFE] allow enabling and disabling SSH service while VM is shut down\n2100495 - CVE-2021-38561 golang: out-of-bounds read in golang.org/x/text/language leads to DoS\n2100629 - Update nested support KBASE article\n2100679 - The number of hardware devices is not correct in vm overview 
tab\n2100682 - All hardware devices get deleted while just delete one\n2100684 - Workload profile are not editable during creation and after creation\n2101144 - VM filter has two \"Other\" checkboxes which are triggered together\n2101164 - [dark mode] Number of alerts in Alerts card not visible enough in dark mode\n2101167 - Edit buttons clickable area is too large. \n2101333 - [e2e] elements on Template Scheduling tab are missing proper data-test-id\n2101335 - Clone action enabled in VM list kebab button for a VM in CrashLoopBackOff state\n2101390 - Easy to miss the \"tick\" when adding GPU device to vm via UI\n2101394 - [e2e] elements on VM Scripts tab are missing proper data-test-id\n2101423 - wrong user name on using ignition\n2101430 - Using CLOUD_USER_PASSWORD in Templates parameters breaks VM review page\n2101445 - \"Pending changes - Boot Order\"\n2101454 - Cannot add PVC boot source to template in \u0027Edit Boot Source Reference\u0027 view as a non-priv user\n2101499 - Cannot add NIC to VM template as non-priv user\n2101501 - NAME parameter in VM template has no effect. 
\n2101628 - non-priv user cannot load dataSource while edit template\u0027s rootdisk\n2101667 - VMI view is not aligned with vm and tempates\n2101681 - All templates are labeling \"source available\" in template list page\n2102074 - VM Creation time on VM Overview Details card lacks string\n2102125 - vm clone modal is displaying DV size instead of PVC size\n2102132 - align the utilization card of single VM overview with the design\n2102138 - Should the word \"new\" be removed from \"Create new VirtualMachine from catalog\"?\n2102256 - Add button moved to right\n2102448 - VM disk is deleted by uncheck \"Delete disks (1x)\" on delete modal\n2102475 - Template \u0027vm-template-example\u0027 should be filtered by \u0027Fedora\u0027 rather than \u0027Other\u0027\n2102561 - sysprep-info should link to downstream doc\n2102737 - Clone a VM should lead to vm overview tab\n2102740 - \"Save\" button on vm clone modal should be \"Clone\"\n2103806 - \"404: Not Found\" appears shortly by clicking the PVC link on vm disk tab\n2103807 - PVC is not named by VM name while creating vm quickly\n2103817 - Workload profile values in vm details should align with template\u0027s value\n2103844 - VM nic model is empty\n2104331 - VM list page scroll up automatically\n2104402 - VM create button is not enabled while adding multiple environment disks\n2104422 - Storage status report \"OpenShift Data Foundation is not available\" even the operator is installed\n2104424 - Enable descheduler or hide it on template\u0027s scheduling tab\n2104479 - [4.12] Cloned VM\u0027s snapshot restore fails if the source VM disk is deleted\n2104480 - Alerts in VM overview tab disappeared after a few seconds\n2104785 - \"Add disk\" and \"Disks\" are on the same line\n2104859 - [RFE] Add \"Copy SSH command\" to VM action list\n2105257 - Can\u0027t set log verbosity level for virt-operator pod\n2106175 - All pages are crashed after visit Virtualization -\u003e Overview\n2106963 - Cannot add configmap for windows 
VM\n2107279 - VM Template\u0027s bootable disk can be marked as bootable\n2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read\n2107371 - CVE-2022-30630 golang: io/fs: stack exhaustion in Glob\n2107374 - CVE-2022-1705 golang: net/http: improper sanitization of Transfer-Encoding header\n2107376 - CVE-2022-1962 golang: go/parser: stack exhaustion in all Parse* functions\n2107383 - CVE-2022-32148 golang: net/http/httputil: NewSingleHostReverseProxy - omit X-Forwarded-For not working\n2107386 - CVE-2022-30632 golang: path/filepath: stack exhaustion in Glob\n2107388 - CVE-2022-30635 golang: encoding/gob: stack exhaustion in Decoder.Decode\n2107390 - CVE-2022-28131 golang: encoding/xml: stack exhaustion in Decoder.Skip\n2107392 - CVE-2022-30633 golang: encoding/xml: stack exhaustion in Unmarshal\n2108339 - datasource does not provide timestamp when updated\n2108638 - When chosing a vm or template while in all-namespace, and returning to list, namespace is changed\n2109818 - Upstream metrics documentation is not detailed enough\n2109975 - DataVolume fails to import \"cirros-container-disk-demo\" image\n2110256 - Storage -\u003e PVC -\u003e upload data, does not support source reference\n2110562 - CNV introduces a compliance check fail in \"ocp4-moderate\" profile - routes-protected-by-tls\n2111240 - GiB changes to B in Template\u0027s Edit boot source reference modal\n2111292 - kubevirt plugin console is crashed after creating a vm with 2 nics\n2111328 - kubevirt plugin console crashed after visit vmi page\n2111378 - VM SSH command generated by UI points at api VIP\n2111744 - Cloned template should not label `app.kubernetes.io/name: common-templates`\n2111794 - the virtlogd process is taking too much RAM! 
(17468Ki \u003e 17Mi)\n2112900 - button style are different\n2114516 - Nothing happens after clicking on Fedora cloud image list link\n2114636 - The style of displayed items are not unified on VM tabs\n2114683 - VM overview tab is crashed just after the vm is created\n2115257 - Need to Change system-product-name to \"OpenShift Virtualization\" in CNV-4.12\n2115258 - The storageclass of VM disk is different from quick created and customize created after changed the default storageclass\n2115280 - [e2e] kubevirt-e2e-aws see two duplicated navigation items\n2115769 - Machine type is updated to rhel8.6.0 in KV CR but not in Templates\n2116225 - The filter keyword of the related operator \u0027Openshift Data Foundation\u0027 is \u0027OCS\u0027 rather than \u0027ODF\u0027\n2116644 - Importer pod is failing to start with error \"MountVolume.SetUp failed for volume \"cdi-proxy-cert-vol\" : configmap \"custom-ca\" not found\"\n2117549 - Cannot edit cloud-init data after add ssh key\n2117803 - Cannot edit ssh even vm is stopped\n2117813 - Improve descriptive text of VM details while VM is off\n2117872 - CVE-2022-1798 kubeVirt: Arbitrary file read on the host from KubeVirt VMs\n2118257 - outdated doc link tolerations modal\n2118823 - Deprecated API 1.25 call: virt-cdi-controller/v0.0.0 (linux/amd64) kubernetes/$Format\n2119069 - Unable to start windows VMs on PSI setups\n2119128 - virt-launcher cannot be started on OCP 4.12 due to PodSecurity restricted:v1.24\n2119309 - readinessProbe in VM stays on failed\n2119615 - Change the disk size causes the unit changed\n2120907 - Cannot filter disks by label\n2121320 - Negative values in migration metrics\n2122236 - Failing to delete HCO with SSP sticking around\n2122990 - VMExport should check APIGroup\n2124147 - \"ReadOnlyMany\" should not be added to supported values in memory dump\n2124307 - Ui crash/stuck on loading when trying to detach disk on a VM\n2124528 - On upgrade, when live-migration is failed due to an infra issue, 
virt-handler continuously and endlessly tries to migrate it\n2124555 - View documentation link on MigrationPolicies page des not work\n2124557 - MigrationPolicy description is not displayed on Details page\n2124558 - Non-privileged user can start MigrationPolicy creation\n2124565 - Deleted DataSource reappears in list\n2124572 - First annotation can not be added to DataSource\n2124582 - Filtering VMs by OS does not work\n2124594 - Docker URL validation is inconsistent over application\n2124597 - Wrong case in Create DataSource menu\n2126104 - virtctl image-upload hangs waiting for pod to be ready with missing access mode defined in the storage profile\n2126397 - many KubeVirtComponentExceedsRequestedMemory alerts in Firing state\n2127787 - Expose the PVC source of the dataSource on UI\n2127843 - UI crashed by selecting \"Live migration network\"\n2127931 - Change default time range on Virtualization -\u003e Overview -\u003e Monitoring dashboard to 30 minutes\n2127947 - cluster-network-addons-config tlsSecurityProfle takes a long time to update after setting APIServer\n2128002 - Error after VM template deletion\n2128107 - sriov-manage command fails to enable SRIOV Virtual functions on the Ampere GPU Cards\n2128872 - [4.11]Can\u0027t restore cloned VM\n2128948 - Cannot create DataSource from default YAML\n2128949 - Cannot create MigrationPolicy from example YAML\n2128997 - [4.11.1]virt-launcher cannot be started on OCP 4.12 due to PodSecurity restricted:v1.24\n2129013 - Mark Windows 11 as TechPreview\n2129234 - Service is not deleted along with the VM when the VM is created from a template with service\n2129301 - Cloud-init network data don\u0027t wipe out on uncheck checkbox \u0027Add network data\u0027\n2129870 - crypto-policy : Accepting TLS 1.3 connections by validating webhook\n2130509 - Auto image import in failed state with data sources pointing to external manually-created PVC/DV\n2130588 - crypto-policy : Common Ciphers support by apiserver and hco\n2130695 
- crypto-policy : Logging Improvement and publish the source of ciphers\n2130909 - Non-privileged user can start DataSource creation\n2131157 - KV data transfer rate chart in VM Metrics tab is not displayed\n2131165 - [dark mode] Additional statuses accordion on Virtualization Overview page not visible enough\n2131674 - Bump virtlogd memory requirement to 20Mi\n2132031 - Ensure Windows 2022 Templates are marked as TechPreview like it is done now for Windows 11\n2132682 - Default YAML entity name convention. \n2132721 - Delete dialogs\n2132744 - Description text is missing in Live Migrations section\n2132746 - Background is broken in Virtualization Monitoring page\n2132783 - VM can not be created from Template with edited boot source\n2132793 - Edited Template BSR is not saved\n2132932 - Typo in PVC size units menu\n2133540 - [pod security violation audit] Audit violation in \"cni-plugins\" container should be fixed\n2133541 - [pod security violation audit] Audit violation in \"bridge-marker\" container should be fixed\n2133542 - [pod security violation audit] Audit violation in \"manager\" container should be fixed\n2133543 - [pod security violation audit] Audit violation in \"kube-rbac-proxy\" container should be fixed\n2133655 - [pod security violation audit] Audit violation in \"cdi-operator\" container should be fixed\n2133656 - [4.12][pod security violation audit] Audit violation in \"hostpath-provisioner-operator\" container should be fixed\n2133659 - [pod security violation audit] Audit violation in \"cdi-controller\" container should be fixed\n2133660 - [pod security violation audit] Audit violation in \"cdi-source-update-poller\" container should be fixed\n2134123 - KubeVirtComponentExceedsRequestedMemory Alert for virt-handler pod\n2134672 - [e2e] add data-test-id for catalog -\u003e storage section\n2134825 - Authorization for expand-spec endpoint missing\n2135805 - Windows 2022 template is missing vTPM and UEFI params in spec\n2136051 - Name jumping 
when trying to create a VM with source from catalog\n2136425 - Windows 11 is detected as Windows 10\n2136534 - Not possible to specify a TTL on VMExports\n2137123 - VMExport: export pod is not PSA complaint\n2137241 - Checkbox about delete vm disks is not loaded while deleting VM\n2137243 - registery input add docker prefix twice\n2137349 - \"Manage source\" action infinitely loading on DataImportCron details page\n2137591 - Inconsistent dialog headings/titles\n2137731 - Link of VM status in overview is not working\n2137733 - No link for VMs in error status in \"VirtualMachine statuses\" card\n2137736 - The column name \"MigrationPolicy name\" can just be \"Name\"\n2137896 - crypto-policy: HCO should pick TLSProfile from apiserver if not provided explicitly\n2138112 - Unsupported S3 endpoint option in Add disk modal\n2138119 - \"Customize VirtualMachine\" flow is not user-friendly because settings are split into 2 modals\n2138199 - Win11 and Win22 templates are not filtered properly by Template provider\n2138653 - Saving Template prameters reloads the page\n2138657 - Setting DATA_SOURCE_* Template parameters makes VM creation fail\n2138664 - VM that was created with SSH key fails to start\n2139257 - Cannot add disk via \"Using an existing PVC\"\n2139260 - Clone button is disabled while VM is running\n2139293 - Non-admin user cannot load VM list page\n2139296 - Non-admin cannot load MigrationPolicies page\n2139299 - No auto-generated VM name while creating VM by non-admin user\n2139306 - Non-admin cannot create VM via customize mode\n2139479 - virtualization overview crashes for non-priv user\n2139574 - VM name gets \"emptyname\" if click the create button quickly\n2139651 - non-priv user can click create when have no permissions\n2139687 - catalog shows template list for non-priv users\n2139738 - [4.12]Can\u0027t restore cloned VM\n2139820 - non-priv user cant reach vm details\n2140117 - Provide upgrade path from 4.11.1-\u003e4.12.0\n2140521 - Click the breadcrumb 
list about \"VirtualMachines\" goes to undefined project\n2140534 - [View only] it should give a permission error when user clicking the VNC play/connect button as a view only user\n2140627 - Not able to select storageClass if there is no default storageclass defined\n2140730 - Links on Virtualization Overview page lead to wrong namespace for non-priv user\n2140808 - Hyperv feature set to \"enabled: false\" prevents scheduling\n2140977 - Alerts number is not correct on Virtualization overview\n2140982 - The base template of cloned template is \"Not available\"\n2140998 - Incorrect information shows in overview page per namespace\n2141089 - Unable to upload boot images. \n2141302 - Unhealthy states alerts and state metrics are missing\n2141399 - Unable to set TLS Security profile for CDI using HCO jsonpatch annotations\n2141494 - \"Start in pause mode\" option is not available while creating the VM\n2141654 - warning log appearing on VMs: found no SR-IOV networks\n2141711 - Node column selector is redundant for non-priv user\n2142468 - VM action \"Stop\" should not be disabled when VM in pause state\n2142470 - Delete a VM or template from all projects leads to 404 error\n2142511 - Enhance alerts card in overview\n2142647 - Error after MigrationPolicy deletion\n2142891 - VM latency checkup: Failed to create the checkup\u0027s Job\n2142929 - Permission denied when try get instancestypes\n2143268 - Topolvm storageProfile missing accessModes and volumeMode\n2143498 - Could not load template while creating VM from catalog\n2143964 - Could not load template while creating VM from catalog\n2144580 - \"?\" icon is too big in VM Template Disk tab\n2144828 - \"?\" icon is too big in VM Template Disk tab\n2144839 - Alerts number is not correct on Virtualization overview\n2153849 - After upgrade to 4.11.1-\u003e4.12.0 hco.spec.workloadUpdateStrategy value is getting overwritten\n2155757 - Incorrect upstream-version label \"v1.6.0-unstable-410-g09ea881c\" is tagged to 4.12 
hyperconverged-cluster-operator-container and hyperconverged-cluster-webhook-container\n\n5. Description:\n\nThe rh-sso-7/sso76-openshift-rhel8 container image and\nrh-sso-7/sso7-rhel8-operator operator has been updated for RHEL-8 based\nMiddleware Containers to address the following security issues. Users of these images\nare also encouraged to rebuild all container images that depend on these\nimages. \n\nDockerfiles and scripts should be amended either to refer to this new image\nspecifically, or to the latest image generally. Bugs fixed (https://bugzilla.redhat.com/):\n\n2138971 - CVE-2022-3782 keycloak: path traversal via double URL encoding\n2141404 - CVE-2022-3916 keycloak: Session takeover with OIDC offline refreshtokens\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nCIAM-4412 - Build new OCP image for rh-sso-7/sso76-openshift-rhel8\nCIAM-4413 - Generate new operator bundle image for this patch\n\n6. Summary:\n\nAn update is now available for Migration Toolkit for Runtimes (v1.0.1). Bugs fixed (https://bugzilla.redhat.com/):\n\n2142707 - CVE-2022-42920 Apache-Commons-BCEL: arbitrary bytecode produced via out-of-bounds writing\n\n5. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n2113814 - CVE-2022-32189 golang: math/big: decoding big.Float and big.Rat types can panic if the encoded message is too short, potentially allowing a denial of service\n2124669 - CVE-2022-27664 golang: net/http: handle server errors after sending GOAWAY\n2132867 - CVE-2022-2879 golang: archive/tar: unbounded memory consumption when reading headers\n2132868 - CVE-2022-2880 golang: net/http/httputil: ReverseProxy should not forward unparseable query parameters\n2132872 - CVE-2022-41715 golang: regexp/syntax: limit memory used by parsing regexps\n2148199 - CVE-2022-39278 Istio: Denial of service attack via a specially crafted message\n2148661 - CVE-2022-3962 kiali: error message spoofing in kiali UI\n2156729 - CVE-2021-4238 goutils: RandomAlphaNumeric and CryptoRandomAlphaNumeric are not as random as they should be\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nOSSM-1977 - Support for Istio Gateway API in Kiali\nOSSM-2083 - Update maistra/istio 2.3 to Istio 1.14.5\nOSSM-2147 - Unexpected validation message on Gateway object\nOSSM-2169 - Member controller doesn\u0027t retry on conflict\nOSSM-2170 - Member namespaces aren\u0027t cleaned up when a cluster-scoped SMMR is deleted\nOSSM-2179 - Wasm plugins only support OCI images with 1 layer\nOSSM-2184 - Istiod isn\u0027t allowed to delete analysis distribution report configmap\nOSSM-2188 - Member namespaces not cleaned up when SMCP is deleted\nOSSM-2189 - If multiple SMCPs exist in a namespace, the controller reconciles them all\nOSSM-2190 - The memberroll controller reconciles SMMRs with invalid name\nOSSM-2232 - The member controller reconciles ServiceMeshMember with invalid name\nOSSM-2241 - Remove v2.0 from Create ServiceMeshControlPlane Form\nOSSM-2251 - CVE-2022-3962 openshift-istio-kiali-container: kiali: content spoofing [ossm-2.3]\nOSSM-2308 - add root CA certificates to kiali container\nOSSM-2315 - be able to customize openshift auth timeouts\nOSSM-2324 - 
Gateway injection does not work when pods are created by cluster admins\nOSSM-2335 - Potential hang using Traces scatterplot chart\nOSSM-2338 - Federation deployment does not need router mode sni-dnat\nOSSM-2344 - Restarting istiod causes Kiali to flood CRI-O with port-forward requests\nOSSM-2375 - Istiod should log member namespaces on every update\nOSSM-2376 - ServiceMesh federation stops working after the restart of istiod pod\nOSSM-535 - Support validationMessages in SMCP\nOSSM-827 - ServiceMeshMembers point to wrong SMCP name\n\n6. Description:\n\nRed Hat Advanced Cluster Management for Kubernetes 2.6.3 images\n\nRed Hat Advanced Cluster Management for Kubernetes provides the\ncapabilities to address common challenges that administrators and site\nreliability engineers face as they work across a range of public and\nprivate cloud environments. Clusters and applications are all visible and\nmanaged from a single console\u2014with security policy built in. Bugs fixed (https://bugzilla.redhat.com/):\n\n2129679 - clusters belong to global clusterset is not selected by placement when rescheduling\n2134609 - CVE-2022-3517 nodejs-minimatch: ReDoS via the braceExpand function\n2139085 - RHACM 2.6.3 images\n2149181 - CVE-2022-41912 crewjam/saml: Authentication bypass when processing SAML responses containing multiple Assertion elements\n\n5. \n\nThe following advisory data is extracted from:\n\nhttps://access.redhat.com/security/data/csaf/v2/advisories/2024/rhsa-2024_0254.json\n\nRed Hat officially shut down their mailing list notifications October 10, 2023. Due to this, Packet Storm has recreated the below data as a reference point to raise awareness. 
It must be noted that due to an inability to easily track revision updates without crawling Red Hat\u0027s archive, these advisories are single notifications and we strongly suggest that you visit the Red Hat provided links to ensure you have the latest information available if the subject matter listed pertains to your environment. \n\n\n\n\nDescription:\n\nThe rsync utility enables the users to copy and synchronize files locally or across a network. Synchronization with rsync is fast because rsync only sends the differences in files over the network instead of sending whole files. The rsync utility is also used as a mirroring tool", "sources": [ { "db": "NVD", "id": "CVE-2022-37434" }, { "db": "VULHUB", "id": "VHN-428208" }, { "db": "PACKETSTORM", "id": "173605" }, { "db": "PACKETSTORM", "id": "173107" }, { "db": "PACKETSTORM", "id": "170083" }, { "db": "PACKETSTORM", "id": "170179" }, { "db": "PACKETSTORM", "id": "170898" }, { "db": "PACKETSTORM", "id": "170741" }, { "db": "PACKETSTORM", "id": "170210" }, { "db": "PACKETSTORM", "id": "170759" }, { "db": "PACKETSTORM", "id": "170806" }, { "db": "PACKETSTORM", "id": "170242" }, { "db": "PACKETSTORM", "id": "176559" } ], "trust": 1.98 }, "exploit_availability": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/exploit_availability#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "reference": "https://www.scap.org.cn/vuln/vhn-428208", "trust": 0.1, "type": "unknown" } ], "sources": [ { "db": "VULHUB", "id": "VHN-428208" } ] }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "NVD", "id": "CVE-2022-37434", "trust": 2.2 }, { "db": "OPENWALL", "id": "OSS-SECURITY/2022/08/05/2", "trust": 1.1 }, { "db": 
"OPENWALL", "id": "OSS-SECURITY/2022/08/09/1", "trust": 1.1 }, { "db": "PACKETSTORM", "id": "169707", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "170027", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "169503", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "171271", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "169726", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "169624", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "168107", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "169566", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "169906", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "169783", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "169557", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "168113", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "169577", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "168765", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "169595", "trust": 0.1 }, { "db": "VULHUB", "id": "VHN-428208", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "173605", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "173107", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "170083", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "170179", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "170898", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "170741", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "170210", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "170759", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "170806", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "170242", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "176559", "trust": 0.1 } ], "sources": [ { "db": "VULHUB", "id": "VHN-428208" }, { "db": "PACKETSTORM", "id": "173605" }, { "db": "PACKETSTORM", "id": "173107" }, { "db": "PACKETSTORM", "id": "170083" }, { "db": "PACKETSTORM", "id": "170179" }, { "db": "PACKETSTORM", "id": "170898" }, { "db": "PACKETSTORM", "id": "170741" }, { "db": "PACKETSTORM", "id": "170210" }, { "db": "PACKETSTORM", "id": "170759" }, { "db": "PACKETSTORM", "id": "170806" }, { "db": "PACKETSTORM", "id": 
"170242" }, { "db": "PACKETSTORM", "id": "176559" }, { "db": "NVD", "id": "CVE-2022-37434" } ] }, "id": "VAR-202208-0404", "iot": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": true, "sources": [ { "db": "VULHUB", "id": "VHN-428208" } ], "trust": 0.01 }, "last_update_date": "2024-07-23T21:15:51.322000Z", "problemtype_data": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "problemtype": "CWE-787", "trust": 1.1 } ], "sources": [ { "db": "VULHUB", "id": "VHN-428208" }, { "db": "NVD", "id": "CVE-2022-37434" } ] }, "references": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/references#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 1.1, "url": "http://seclists.org/fulldisclosure/2022/oct/37" }, { "trust": 1.1, "url": "http://seclists.org/fulldisclosure/2022/oct/38" }, { "trust": 1.1, "url": "http://seclists.org/fulldisclosure/2022/oct/41" }, { "trust": 1.1, "url": "http://seclists.org/fulldisclosure/2022/oct/42" }, { "trust": 1.1, "url": "https://www.debian.org/security/2022/dsa-5218" }, { "trust": 1.1, "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/pavpqncg3xrlclnsqrm3kan5zfmvxvty/" }, { "trust": 1.1, "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/nmboj77a7t7pqcarmduk75te6llesz3o/" }, { "trust": 1.1, "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/yrqai7h4m4rqz2iwzueexecbe5d56bh2/" }, { "trust": 1.1, "url": 
"https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/x5u7otkzshy2i3zfjsr2shfhw72rkgdk/" }, { "trust": 1.1, "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/jwn4ve3jqr4o2sous5txnlanrpmhwv4i/" }, { "trust": 1.1, "url": "https://lists.debian.org/debian-lts-announce/2022/09/msg00012.html" }, { "trust": 1.1, "url": "http://www.openwall.com/lists/oss-security/2022/08/05/2" }, { "trust": 1.1, "url": "http://www.openwall.com/lists/oss-security/2022/08/09/1" }, { "trust": 1.1, "url": "https://github.com/curl/curl/issues/9271" }, { "trust": 1.1, "url": "https://github.com/ivd38/zlib_overflow" }, { "trust": 1.1, "url": "https://github.com/madler/zlib/blob/21767c654d31d2dccdde4330529775c6c5fd5389/zlib.h#l1062-l1063" }, { "trust": 1.1, "url": "https://github.com/madler/zlib/commit/eff308af425b67093bab25f80f1ae950166bece1" }, { "trust": 1.1, "url": "https://github.com/nodejs/node/blob/75b68c6e4db515f76df73af476eccf382bbcb00a/deps/zlib/inflate.c#l762-l764" }, { "trust": 1.1, "url": "https://security.netapp.com/advisory/ntap-20220901-0005/" }, { "trust": 1.1, "url": "https://support.apple.com/kb/ht213488" }, { "trust": 1.1, "url": "https://support.apple.com/kb/ht213489" }, { "trust": 1.1, "url": "https://support.apple.com/kb/ht213490" }, { "trust": 1.1, "url": "https://support.apple.com/kb/ht213491" }, { "trust": 1.1, "url": "https://support.apple.com/kb/ht213493" }, { "trust": 1.1, "url": "https://support.apple.com/kb/ht213494" }, { "trust": 1.0, "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce" }, { "trust": 1.0, "url": "https://bugzilla.redhat.com/):" }, { "trust": 1.0, "url": "https://access.redhat.com/security/team/contact/" }, { "trust": 1.0, "url": "https://access.redhat.com/security/cve/cve-2022-37434" }, { "trust": 1.0, "url": "https://security.netapp.com/advisory/ntap-20230427-0007/" }, { "trust": 0.9, "url": 
"https://access.redhat.com/security/cve/cve-2022-42898" }, { "trust": 0.9, "url": "https://access.redhat.com/security/cve/cve-2022-1304" }, { "trust": 0.8, "url": "https://access.redhat.com/security/cve/cve-2016-3709" }, { "trust": 0.8, "url": "https://access.redhat.com/security/cve/cve-2022-26700" }, { "trust": 0.8, "url": "https://access.redhat.com/security/cve/cve-2022-26716" }, { "trust": 0.8, "url": "https://access.redhat.com/security/cve/cve-2022-26710" }, { "trust": 0.8, "url": "https://access.redhat.com/security/cve/cve-2022-22629" }, { "trust": 0.8, "url": "https://access.redhat.com/security/cve/cve-2022-26719" }, { "trust": 0.8, "url": "https://access.redhat.com/security/cve/cve-2022-26717" }, { "trust": 0.8, "url": "https://access.redhat.com/security/cve/cve-2022-22662" }, { "trust": 0.8, "url": "https://nvd.nist.gov/vuln/detail/cve-2016-3709" }, { "trust": 0.8, "url": "https://access.redhat.com/security/cve/cve-2022-22624" }, { "trust": 0.8, "url": "https://access.redhat.com/security/cve/cve-2022-26709" }, { "trust": 0.8, "url": "https://access.redhat.com/security/cve/cve-2022-22628" }, { "trust": 0.8, "url": "https://access.redhat.com/security/cve/cve-2022-30293" }, { "trust": 0.7, "url": "https://access.redhat.com/security/updates/classification/#important" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2020-35525" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2020-35527" }, { "trust": 0.6, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35525" }, { "trust": 0.6, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35527" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2022-2509" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2022-3515" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2022-27404" }, { "trust": 0.6, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1304" }, { "trust": 0.6, "url": 
"https://access.redhat.com/security/cve/cve-2022-27406" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2022-27405" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2022-34903" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2022-1586" }, { "trust": 0.5, "url": "https://access.redhat.com/articles/11258" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2022-42012" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2022-42010" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2022-42011" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2022-1897" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2022-1785" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2022-1927" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2022-40674" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2022-35737" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-46848" }, { "trust": 0.4, "url": "https://access.redhat.com/security/updates/classification/#moderate" }, { "trust": 0.4, "url": "https://issues.jboss.org/):" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-30635" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2015-20107" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-41715" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-2880" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-43680" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-27664" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2015-20107" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-2068" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-25309" }, { 
"trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-30698" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-30699" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-2097" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-25310" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-25308" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-1292" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-0924" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0562" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-0908" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0865" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0561" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-0562" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-22844" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-0865" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-0909" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-0561" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-0891" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-1355" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22628" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22624" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-46848" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-47629" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-1271" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-38177" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2023-0361" }, { 
"trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-38178" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2023-24329" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-3517" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-4238" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-2879" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-3821" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-29154" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-40303" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-40304" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-32189" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-41717" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4238" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-0308" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-32208" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-0308" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-30629" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-0256" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-38561" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1292" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-0256" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0391" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1586" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-24795" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-32206" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-38561" }, { 
"trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0934" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-0391" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-0934" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36516" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-24448" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0168" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0617" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-2639" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-1055" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-26373" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-20368" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-1048" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3640" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-0617" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-0854" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-29581" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-1016" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-2078" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-2938" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-21499" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-36946" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-36558" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-1852" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0854" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-0168" }, { 
"trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-28390" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36558" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30002" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-27950" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-2586" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-23960" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3640" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-30002" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-1184" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-25255" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-36516" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-28893" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22629" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26700" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26710" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22662" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26709" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-3787" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-30632" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-28131" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-30633" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-1705" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-30630" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-1962" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-32148" }, 
{ "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-30631" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0891" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0908" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0909" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-36085" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-20231" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2023-0215" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20838" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-31566" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2023-1281" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3634" }, { "trust": 0.1, "url": "https://registry.centos.org/v2/\":" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-31566" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2023:4053" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23177" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-36084" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-36086" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17595" }, { "trust": 0.1, "url": "https://issues.redhat.com/):" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20232" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14155" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-20838" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-18218" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.11/release_notes/ocp-4-11-release-notes.html" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3580" }, { "trust": 0.1, 
"url": "https://access.redhat.com/security/cve/cve-2023-32233" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-17595" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-4304" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-18218" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23177" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-24370" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17594" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20231" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-36084" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-24407" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-21235" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-36087" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.11/updating/updating-cluster-cli.html" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-20232" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-14155" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-17594" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-40528" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhba-2023:4052" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-29824" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-4450" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-24370" }, { "trust": 0.1, "url": "https://quay.io/repository/openshift-release-dev/ocp-release?tab=tags." 
}, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3580" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-23540" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-16250" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-41316" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4231" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-2795" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-16250" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0670" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-48303" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-36227" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-45873" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3765" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2023-2491" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-43998" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-40897" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-41724" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-21824" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-44531" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-41725" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-25032" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-38149" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-28805" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2023-25136" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-26280" }, { "trust": 0.1, "url": 
"https://access.redhat.com/documentation/en-us/red_hat_openshift_data_foundation/4.13/html/4.13_release_notes/index" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-48337" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-43519" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1587" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-4415" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-45061" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-28861" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2023-0620" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3807" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2023:3742" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-43519" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2023-24999" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2023-25000" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25032" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2023-22809" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4235" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-4235" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-31129" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-40023" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-47024" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-16251" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-28861" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-3924" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-44533" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2022-46175" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-44532" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-10735" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-3358" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-44964" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-3736" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-17049" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-3715" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-24903" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-43998" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-38900" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-32190" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2023-0665" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1348" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-48338" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-42919" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-16251" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-33099" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-48339" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-46828" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-2309" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3765" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-23541" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-41723" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-17049" 
}, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-10735" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-4231" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3807" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-3094" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-28327" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1785" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-24921" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-24675" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1897" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:8750" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:8889" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-21618" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-21628" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-39399" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.9/logging/cluster-logging-upgrading.html" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-42003" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-21624" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-21626" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.9/logging/cluster-logging-release-notes.html" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-36518" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36518" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-21619" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-42004" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2601" }, { 
"trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-3775" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-2601" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.7/html/add-ons/submariner#deploying-submariner-console" }, { "trust": 0.1, "url": "https://submariner.io/." }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-41974" }, { "trust": 0.1, "url": "https://submariner.io/getting-started/" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2509" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2023:0631" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2023:0408" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-23772" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-44716" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-29526" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-23773" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-44716" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-44717" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-23806" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1798" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-44717" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-27404" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26719" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26717" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-3782" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-3916" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26716" }, { "trust": 0.1, "url": 
"https://nvd.nist.gov/vuln/detail/cve-2022-27405" }, { "trust": 0.1, "url": "https://catalog.redhat.com/software/containers/registry/registry.access.redhat.com/repository/rh-sso-7/sso76-openshift-rhel8" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:8964" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1471" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-42920" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0924" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2023:0470" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1355" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1471" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-39278" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-21713" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2023:0542" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-21713" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-21673" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23648" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-21673" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23648" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-21703" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-21698" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1962" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-21698" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1705" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-21703" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-21702" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-3962" 
}, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-21702" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.6/html/release_notes/" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.6/html-single/install/index#installing" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-41912" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:9040" }, { "trust": 0.1, "url": "https://access.redhat.com/security/data/csaf/v2/advisories/2024/rhsa-2024_0254.json" }, { "trust": 0.1, "url": "https://bugzilla.redhat.com/show_bug.cgi?id=2116639" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2024:0254" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-37434" } ], "sources": [ { "db": "VULHUB", "id": "VHN-428208" }, { "db": "PACKETSTORM", "id": "173605" }, { "db": "PACKETSTORM", "id": "173107" }, { "db": "PACKETSTORM", "id": "170083" }, { "db": "PACKETSTORM", "id": "170179" }, { "db": "PACKETSTORM", "id": "170898" }, { "db": "PACKETSTORM", "id": "170741" }, { "db": "PACKETSTORM", "id": "170210" }, { "db": "PACKETSTORM", "id": "170759" }, { "db": "PACKETSTORM", "id": "170806" }, { "db": "PACKETSTORM", "id": "170242" }, { "db": "PACKETSTORM", "id": "176559" }, { "db": "NVD", "id": "CVE-2022-37434" } ] }, "sources": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "VULHUB", "id": "VHN-428208" }, { "db": "PACKETSTORM", "id": "173605" }, { "db": "PACKETSTORM", "id": "173107" }, { "db": "PACKETSTORM", "id": "170083" }, { "db": "PACKETSTORM", "id": "170179" }, { "db": "PACKETSTORM", "id": "170898" }, { "db": "PACKETSTORM", "id": "170741" }, { "db": "PACKETSTORM", "id": "170210" }, { "db": "PACKETSTORM", "id": "170759" }, { "db": "PACKETSTORM", "id": "170806" }, { "db": 
"PACKETSTORM", "id": "170242" }, { "db": "PACKETSTORM", "id": "176559" }, { "db": "NVD", "id": "CVE-2022-37434" } ] }, "sources_release_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2022-08-05T00:00:00", "db": "VULHUB", "id": "VHN-428208" }, { "date": "2023-07-19T15:37:11", "db": "PACKETSTORM", "id": "173605" }, { "date": "2023-06-23T14:56:34", "db": "PACKETSTORM", "id": "173107" }, { "date": "2022-12-02T15:57:08", "db": "PACKETSTORM", "id": "170083" }, { "date": "2022-12-09T14:52:40", "db": "PACKETSTORM", "id": "170179" }, { "date": "2023-02-08T16:00:47", "db": "PACKETSTORM", "id": "170898" }, { "date": "2023-01-26T15:29:09", "db": "PACKETSTORM", "id": "170741" }, { "date": "2022-12-13T17:16:20", "db": "PACKETSTORM", "id": "170210" }, { "date": "2023-01-27T15:03:38", "db": "PACKETSTORM", "id": "170759" }, { "date": "2023-01-31T17:11:04", "db": "PACKETSTORM", "id": "170806" }, { "date": "2022-12-15T15:34:35", "db": "PACKETSTORM", "id": "170242" }, { "date": "2024-01-16T13:46:07", "db": "PACKETSTORM", "id": "176559" }, { "date": "2022-08-05T07:15:07.240000", "db": "NVD", "id": "CVE-2022-37434" } ] }, "sources_update_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2023-01-09T00:00:00", "db": "VULHUB", "id": "VHN-428208" }, { "date": "2023-07-19T00:56:46.373000", "db": "NVD", "id": "CVE-2022-37434" } ] }, "threat_type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/threat_type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "remote", "sources": [ { "db": "PACKETSTORM", "id": "173107" } ], "trust": 0.1 }, "title": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/title#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Red Hat 
Security Advisory 2023-4053-01", "sources": [ { "db": "PACKETSTORM", "id": "173605" } ], "trust": 0.1 }, "type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "code execution", "sources": [ { "db": "PACKETSTORM", "id": "173605" } ], "trust": 0.1 } }
var-202006-0222
Vulnerability from variot
libpcre in PCRE before 8.44 allows an integer overflow via a large number after a (?C substring. PCRE is an open-source regular-expression library, written in C by software developer Philip Hazel. An input validation error vulnerability exists in libpcre in versions prior to PCRE 8.44. An attacker could exploit this vulnerability to execute arbitrary code or crash an application on the system by supplying a pattern containing an overly large callout number. -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
APPLE-SA-2021-02-01-1 macOS Big Sur 11.2, Security Update 2021-001 Catalina, Security Update 2021-001 Mojave
macOS Big Sur 11.2, Security Update 2021-001 Catalina, Security Update 2021-001 Mojave addresses the following issues. Information about the security content is also available at https://support.apple.com/HT212147.
Analytics Available for: macOS Big Sur 11.0.1, macOS Catalina 10.15.7, and macOS Mojave 10.14.6 Impact: A remote attacker may be able to cause a denial of service Description: This issue was addressed with improved checks. CVE-2021-1761: Cees Elzinga
APFS Available for: macOS Big Sur 11.0.1 Impact: A local user may be able to read arbitrary files Description: The issue was addressed with improved permissions logic. CVE-2021-1797: Thomas Tempelmann
CFNetwork Cache Available for: macOS Catalina 10.15.7 and macOS Mojave 10.14.6 Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: An integer overflow was addressed with improved input validation. CVE-2020-27945: Zhuo Liang of Qihoo 360 Vulcan Team
CoreAnimation Available for: macOS Big Sur 11.0.1 Impact: A malicious application could execute arbitrary code leading to compromise of user information Description: A memory corruption issue was addressed with improved state management. CVE-2021-1760: @S0rryMybad of 360 Vulcan Team
CoreAudio Available for: macOS Big Sur 11.0.1 Impact: Processing maliciously crafted web content may lead to code execution Description: An out-of-bounds write was addressed with improved input validation. CVE-2021-1747: JunDong Xie of Ant Security Light-Year Lab
CoreGraphics Available for: macOS Big Sur 11.0.1, macOS Catalina 10.15.7, and macOS Mojave 10.14.6 Impact: Processing a maliciously crafted font file may lead to arbitrary code execution Description: An out-of-bounds write issue was addressed with improved bounds checking. CVE-2021-1776: Ivan Fratric of Google Project Zero
CoreMedia Available for: macOS Big Sur 11.0.1 Impact: Processing a maliciously crafted image may lead to arbitrary code execution Description: An out-of-bounds read was addressed with improved input validation. CVE-2021-1759: Hou JingYi (@hjy79425575) of Qihoo 360 CERT
CoreText Available for: macOS Big Sur 11.0.1, macOS Catalina 10.15.7, and macOS Mojave 10.14.6 Impact: Processing a maliciously crafted text file may lead to arbitrary code execution Description: A stack overflow was addressed with improved input validation. CVE-2021-1772: Mickey Jin of Trend Micro working with Trend Micro’s Zero Day Initiative
CoreText Available for: macOS Big Sur 11.0.1, macOS Catalina 10.15.7, and macOS Mojave 10.14.6 Impact: A remote attacker may be able to cause arbitrary code execution Description: An out-of-bounds read was addressed with improved bounds checking. CVE-2021-1792: Mickey Jin & Junzhi Lu of Trend Micro working with Trend Micro’s Zero Day Initiative
Crash Reporter Available for: macOS Catalina 10.15.7 Impact: A remote attacker may be able to cause a denial of service Description: This issue was addressed with improved checks. CVE-2021-1761: Cees Elzinga
Crash Reporter Available for: macOS Big Sur 11.0.1, macOS Catalina 10.15.7, and macOS Mojave 10.14.6 Impact: A local attacker may be able to elevate their privileges Description: Multiple issues were addressed with improved logic. CVE-2021-1787: James Hutchins
Crash Reporter Available for: macOS Big Sur 11.0.1, macOS Catalina 10.15.7, and macOS Mojave 10.14.6 Impact: A local user may be able to create or modify system files Description: A logic issue was addressed with improved state management. CVE-2021-1786: Csaba Fitzl (@theevilbit) of Offensive Security
Directory Utility Available for: macOS Catalina 10.15.7 Impact: A malicious application may be able to access private information Description: A logic issue was addressed with improved state management. CVE-2020-27937: Wojciech Reguła (@_r3ggi) of SecuRing
Endpoint Security Available for: macOS Catalina 10.15.7 Impact: A local attacker may be able to elevate their privileges Description: A logic issue was addressed with improved state management. CVE-2021-1802: Zhongcheng Li (@CK01) from WPS Security Response Center
FairPlay Available for: macOS Big Sur 11.0.1 Impact: A malicious application may be able to disclose kernel memory Description: An out-of-bounds read issue existed that led to the disclosure of kernel memory. This was addressed with improved input validation. CVE-2021-1791: Junzhi Lu (@pwn0rz), Qi Sun & Mickey Jin of Trend Micro working with Trend Micro’s Zero Day Initiative
FontParser Available for: macOS Catalina 10.15.7 Impact: Processing a maliciously crafted font may lead to arbitrary code execution Description: An out-of-bounds read was addressed with improved input validation. CVE-2021-1790: Peter Nguyen Vu Hoang of STAR Labs
FontParser Available for: macOS Mojave 10.14.6 Impact: Processing a maliciously crafted font may lead to arbitrary code execution Description: This issue was addressed by removing the vulnerable code. CVE-2021-1775: Mickey Jin and Qi Sun of Trend Micro
FontParser Available for: macOS Mojave 10.14.6 Impact: A remote attacker may be able to leak memory Description: An out-of-bounds read was addressed with improved bounds checking. CVE-2020-29608: Xingwei Lin of Ant Security Light-Year Lab
FontParser Available for: macOS Big Sur 11.0.1 and macOS Catalina 10.15.7 Impact: A remote attacker may be able to cause arbitrary code execution Description: An out-of-bounds read was addressed with improved bounds checking. CVE-2021-1758: Peter Nguyen of STAR Labs
ImageIO Available for: macOS Big Sur 11.0.1 Impact: Processing a maliciously crafted image may lead to arbitrary code execution Description: An access issue was addressed with improved memory management. CVE-2021-1783: Xingwei Lin of Ant Security Light-Year Lab
ImageIO Available for: macOS Big Sur 11.0.1 Impact: Processing a maliciously crafted image may lead to arbitrary code execution Description: An out-of-bounds read was addressed with improved bounds checking. CVE-2021-1741: Xingwei Lin of Ant Security Light-Year Lab CVE-2021-1743: Mickey Jin & Junzhi Lu of Trend Micro working with Trend Micro’s Zero Day Initiative, Xingwei Lin of Ant Security Light- Year Lab
ImageIO Available for: macOS Big Sur 11.0.1 Impact: Processing a maliciously crafted image may lead to a denial of service Description: A logic issue was addressed with improved state management. CVE-2021-1773: Xingwei Lin of Ant Security Light-Year Lab
ImageIO Available for: macOS Big Sur 11.0.1 Impact: Processing a maliciously crafted image may lead to a denial of service Description: An out-of-bounds read issue existed. This issue was addressed with improved bounds checking. CVE-2021-1778: Xingwei Lin of Ant Security Light-Year Lab
ImageIO Available for: macOS Big Sur 11.0.1 and macOS Catalina 10.15.7 Impact: Processing a maliciously crafted image may lead to arbitrary code execution Description: An out-of-bounds read was addressed with improved input validation. CVE-2021-1736: Xingwei Lin of Ant Security Light-Year Lab CVE-2021-1785: Xingwei Lin of Ant Security Light-Year Lab
ImageIO Available for: macOS Big Sur 11.0.1, macOS Catalina 10.15.7, and macOS Mojave 10.14.6 Impact: Processing a maliciously crafted image may lead to a denial of service Description: This issue was addressed with improved checks. CVE-2021-1766: Danny Rosseau of Carve Systems
ImageIO Available for: macOS Big Sur 11.0.1 and macOS Catalina 10.15.7 Impact: A remote attacker may be able to cause unexpected application termination or arbitrary code execution Description: A logic issue was addressed with improved state management. CVE-2021-1818: Xingwei Lin from Ant-Financial Light-Year Security Lab
ImageIO Available for: macOS Big Sur 11.0.1 and macOS Catalina 10.15.7 Impact: Processing a maliciously crafted image may lead to arbitrary code execution Description: This issue was addressed with improved checks. CVE-2021-1742: Xingwei Lin of Ant Security Light-Year Lab CVE-2021-1746: Mickey Jin & Qi Sun of Trend Micro, Xingwei Lin of Ant Security Light-Year Lab CVE-2021-1754: Xingwei Lin of Ant Security Light-Year Lab CVE-2021-1774: Xingwei Lin of Ant Security Light-Year Lab CVE-2021-1777: Xingwei Lin of Ant Security Light-Year Lab CVE-2021-1793: Xingwei Lin of Ant Security Light-Year Lab
ImageIO Available for: macOS Big Sur 11.0.1 and macOS Catalina 10.15.7 Impact: Processing a maliciously crafted image may lead to arbitrary code execution Description: An out-of-bounds write was addressed with improved input validation. CVE-2021-1737: Xingwei Lin of Ant Security Light-Year Lab CVE-2021-1738: Lei Sun CVE-2021-1744: Xingwei Lin of Ant Security Light-Year Lab
IOKit Available for: macOS Big Sur 11.0.1 Impact: An application may be able to execute arbitrary code with system privileges Description: A logic error in kext loading was addressed with improved state handling. CVE-2021-1779: Csaba Fitzl (@theevilbit) of Offensive Security
IOSkywalkFamily Available for: macOS Big Sur 11.0.1 Impact: A local attacker may be able to elevate their privileges Description: An out-of-bounds read was addressed with improved bounds checking. CVE-2021-1757: Pan ZhenPeng (@Peterpan0927) of Alibaba Security, Proteas
Kernel Available for: macOS Catalina 10.15.7 and macOS Mojave 10.14.6 Impact: An application may be able to execute arbitrary code with kernel privileges Description: A logic issue existed resulting in memory corruption. This was addressed with improved state management. CVE-2020-27904: Zuozhi Fan (@pattern_F_) of Ant Group Tianqiong Security Lab
Kernel Available for: macOS Big Sur 11.0.1 Impact: A remote attacker may be able to cause a denial of service Description: A use after free issue was addressed with improved memory management. CVE-2021-1764: @m00nbsd
Kernel Available for: macOS Big Sur 11.0.1, macOS Catalina 10.15.7, and macOS Mojave 10.14.6 Impact: A malicious application may be able to elevate privileges. Apple is aware of a report that this issue may have been actively exploited. Description: A race condition was addressed with improved locking. CVE-2021-1782: an anonymous researcher
Kernel Available for: macOS Big Sur 11.0.1, macOS Catalina 10.15.7, and macOS Mojave 10.14.6 Impact: An application may be able to execute arbitrary code with kernel privileges Description: Multiple issues were addressed with improved logic. CVE-2021-1750: @0xalsr
Login Window Available for: macOS Big Sur 11.0.1, macOS Catalina 10.15.7, and macOS Mojave 10.14.6 Impact: An attacker in a privileged network position may be able to bypass authentication policy Description: An authentication issue was addressed with improved state management. CVE-2020-29633: Jewel Lambert of Original Spin, LLC.
Messages Available for: macOS Big Sur 11.0.1, macOS Catalina 10.15.7, and macOS Mojave 10.14.6 Impact: A user that is removed from an iMessage group could rejoin the group Description: This issue was addressed with improved checks. CVE-2021-1771: Shreyas Ranganatha (@strawsnoceans)
Model I/O Available for: macOS Big Sur 11.0.1 Impact: Processing a maliciously crafted USD file may lead to unexpected application termination or arbitrary code execution Description: An out-of-bounds write was addressed with improved input validation. CVE-2021-1762: Mickey Jin of Trend Micro
Model I/O Available for: macOS Catalina 10.15.7 Impact: Processing a maliciously crafted file may lead to heap corruption Description: This issue was addressed with improved checks. CVE-2020-29614: ZhiWei Sun (@5n1p3r0010) from Topsec Alpha Lab
Model I/O Available for: macOS Big Sur 11.0.1, macOS Catalina 10.15.7, and macOS Mojave 10.14.6 Impact: Processing a maliciously crafted USD file may lead to unexpected application termination or arbitrary code execution Description: A buffer overflow was addressed with improved bounds checking. CVE-2021-1763: Mickey Jin of Trend Micro working with Trend Micro’s Zero Day Initiative
Model I/O Available for: macOS Big Sur 11.0.1, macOS Catalina 10.15.7, and macOS Mojave 10.14.6 Impact: Processing a maliciously crafted image may lead to heap corruption Description: This issue was addressed with improved checks. CVE-2021-1767: Mickey Jin & Junzhi Lu of Trend Micro working with Trend Micro’s Zero Day Initiative
Model I/O Available for: macOS Big Sur 11.0.1, macOS Catalina 10.15.7, and macOS Mojave 10.14.6 Impact: Processing a maliciously crafted USD file may lead to unexpected application termination or arbitrary code execution Description: An out-of-bounds read was addressed with improved input validation. CVE-2021-1745: Mickey Jin & Junzhi Lu of Trend Micro working with Trend Micro’s Zero Day Initiative
Model I/O Available for: macOS Big Sur 11.0.1, macOS Catalina 10.15.7, and macOS Mojave 10.14.6 Impact: Processing a maliciously crafted image may lead to arbitrary code execution Description: An out-of-bounds read was addressed with improved bounds checking. CVE-2021-1753: Mickey Jin of Trend Micro working with Trend Micro’s Zero Day Initiative
Model I/O Available for: macOS Big Sur 11.0.1, macOS Catalina 10.15.7, and macOS Mojave 10.14.6 Impact: Processing a maliciously crafted USD file may lead to unexpected application termination or arbitrary code execution Description: An out-of-bounds read was addressed with improved bounds checking. CVE-2021-1768: Mickey Jin & Junzhi Lu of Trend Micro working with Trend Micro’s Zero Day Initiative
NetFSFramework Available for: macOS Big Sur 11.0.1, macOS Catalina 10.15.7, and macOS Mojave 10.14.6 Impact: Mounting a maliciously crafted Samba network share may lead to arbitrary code execution Description: A logic issue was addressed with improved state management. CVE-2021-1751: Mikko Kenttälä (@Turmio_) of SensorFu
OpenLDAP Available for: macOS Big Sur 11.0.1, macOS Catalina 10.15.7, and macOS Mojave 10.14.6 Impact: A remote attacker may be able to cause a denial of service Description: This issue was addressed with improved checks. CVE-2020-25709
Power Management Available for: macOS Mojave 10.14.6, macOS Catalina 10.15.7 Impact: A malicious application may be able to elevate privileges Description: A logic issue was addressed with improved state management. CVE-2020-27938: Tim Michaud (@TimGMichaud) of Leviathan
Screen Sharing Available for: macOS Big Sur 11.0.1 Impact: Multiple issues in pcre Description: Multiple issues were addressed by updating to version 8.44. CVE-2019-20838 CVE-2020-14155
SQLite Available for: macOS Catalina 10.15.7 Impact: Multiple issues in SQLite Description: Multiple issues were addressed by updating SQLite to version 3.32.3. CVE-2020-15358
Swift Available for: macOS Big Sur 11.0.1 Impact: A malicious attacker with arbitrary read and write capability may be able to bypass Pointer Authentication Description: A logic issue was addressed with improved validation. CVE-2021-1769: CodeColorist of Ant-Financial Light-Year Labs
WebKit Available for: macOS Big Sur 11.0.1 Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: A use after free issue was addressed with improved memory management. CVE-2021-1788: Francisco Alonso (@revskills)
WebKit Available for: macOS Big Sur 11.0.1 Impact: Maliciously crafted web content may violate iframe sandboxing policy Description: This issue was addressed with improved iframe sandbox enforcement. CVE-2021-1765: Eliya Stein of Confiant CVE-2021-1801: Eliya Stein of Confiant
WebKit Available for: macOS Big Sur 11.0.1 Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: A type confusion issue was addressed with improved state handling. CVE-2021-1789: @S0rryMybad of 360 Vulcan Team
WebKit Available for: macOS Big Sur 11.0.1 Impact: A remote attacker may be able to cause arbitrary code execution. Apple is aware of a report that this issue may have been actively exploited. Description: A logic issue was addressed with improved restrictions. CVE-2021-1871: an anonymous researcher CVE-2021-1870: an anonymous researcher
WebRTC Available for: macOS Big Sur 11.0.1 Impact: A malicious website may be able to access restricted ports on arbitrary servers Description: A port redirection issue was addressed with additional port validation. CVE-2021-1799: Gregory Vishnepolsky & Ben Seri of Armis Security, and Samy Kamkar
Additional recognition
Kernel We would like to acknowledge Junzhi Lu (@pwn0rz), Mickey Jin & Jesse Change of Trend Micro for their assistance.
libpthread We would like to acknowledge CodeColorist of Ant-Financial Light-Year Labs for their assistance.
Login Window We would like to acknowledge Jose Moises Romero-Villanueva of CrySolve for their assistance.
Mail Drafts We would like to acknowledge Jon Bottarini of HackerOne for their assistance.
Screen Sharing Server We would like to acknowledge @gorelics for their assistance.
WebRTC We would like to acknowledge Philipp Hancke for their assistance.
This message is signed with Apple's Product Security PGP key, and details are available at: https://www.apple.com/support/security/pgp/
Summary:
The Migration Toolkit for Containers (MTC) 1.5.2 is now available. Description:
The Migration Toolkit for Containers (MTC) enables you to migrate Kubernetes resources, persistent volume data, and internal container images between OpenShift Container Platform clusters, using the MTC web console or the Kubernetes API.
Security Fix(es):
- nodejs-immer: prototype pollution may lead to DoS or remote code execution (CVE-2021-3757)

- mig-controller: incorrect namespaces handling may lead to unauthorized usage of the Migration Toolkit for Containers (MTC) (CVE-2021-3948)
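The immer issue (CVE-2021-3757) belongs to the JavaScript prototype-pollution class of bug, where merging untrusted input can silently overwrite shared state. The following is a hedged Python analog (all names are hypothetical, and this is not MTC or immer code): a naive recursive merge applied to a shallow copy ends up mutating nested objects that are shared with the program's defaults.

```python
# Hypothetical illustration of the pollution hazard behind bugs like
# CVE-2021-3757: a recursive merge that descends into nested dicts
# without copying them first will mutate objects shared via a shallow copy.

DEFAULTS = {"admin": False, "theme": {"color": "blue"}}

def naive_merge(dst, src):
    """Recursively merge src into dst, mutating nested dicts in place."""
    for key, value in src.items():
        if isinstance(value, dict) and isinstance(dst.get(key), dict):
            naive_merge(dst[key], value)  # descends into a SHARED dict
        else:
            dst[key] = value
    return dst

# Each per-user config starts from the shared defaults...
user_cfg = dict(DEFAULTS)  # shallow copy: nested dicts are still shared
naive_merge(user_cfg, {"theme": {"color": "red"}})  # attacker-controlled input

# ...but the merge reached through the shallow copy and polluted the
# shared nested dict, so every future user now inherits the change:
print(DEFAULTS["theme"]["color"])  # "red"
```

The fix in such code is to deep-copy (or freeze) shared structures before merging untrusted input into them, which is conceptually what the patched libraries enforce.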
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section. Bugs fixed (https://bugzilla.redhat.com/):
2000734 - CVE-2021-3757 nodejs-immer: prototype pollution may lead to DoS or remote code execution 2005438 - Combining Rsync and Stunnel in a single pod can degrade performance (1.5 backport) 2006842 - MigCluster CR remains in "unready" state and source registry is inaccessible after temporary shutdown of source cluster 2007429 - "oc describe" and "oc log" commands on "Migration resources" tree cannot be copied after failed migration 2022017 - CVE-2021-3948 mig-controller: incorrect namespaces handling may lead to not authorized usage of Migration Toolkit for Containers (MTC)
- Description:
This release adds the new Apache HTTP Server 2.4.37 Service Pack 10 packages that are part of the JBoss Core Services offering. Refer to the Release Notes for information on the most significant bug fixes and enhancements included in this release.
Security Fix(es):
- httpd: Single zero byte stack overflow in mod_auth_digest (CVE-2020-35452)
- httpd: mod_session NULL pointer dereference in parser (CVE-2021-26690)
- httpd: Heap overflow in mod_session (CVE-2021-26691)
- httpd: mod_proxy_wstunnel tunneling of non Upgraded connection (CVE-2019-17567)
- httpd: MergeSlashes regression (CVE-2021-30641)
- httpd: mod_proxy NULL pointer dereference (CVE-2020-13950)
- jbcs-httpd24-openssl: openssl: NULL pointer dereference in X509_issuer_and_serial_hash() (CVE-2021-23841)
- openssl: Read buffer overruns processing ASN.1 strings (CVE-2021-3712)
- openssl: integer overflow in CipherUpdate (CVE-2021-23840)
- pcre: buffer over-read in JIT when UTF is disabled (CVE-2019-20838)
- pcre: integer overflow in libpcre (CVE-2020-14155)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
Solution:
Before applying this update, make sure all previously released errata relevant to your system have been applied.
Bugs fixed (https://bugzilla.redhat.com/):
1848436 - CVE-2020-14155 pcre: Integer overflow when parsing callout numeric arguments
1848444 - CVE-2019-20838 pcre: Buffer over-read in JIT when UTF is disabled and \X or \R has fixed quantifier greater than 1
1930310 - CVE-2021-23841 openssl: NULL pointer dereference in X509_issuer_and_serial_hash()
1930324 - CVE-2021-23840 openssl: integer overflow in CipherUpdate
1966724 - CVE-2020-35452 httpd: Single zero byte stack overflow in mod_auth_digest
1966729 - CVE-2021-26690 httpd: mod_session: NULL pointer dereference when parsing Cookie header
1966732 - CVE-2021-26691 httpd: mod_session: Heap overflow via a crafted SessionHeader value
1966738 - CVE-2020-13950 httpd: mod_proxy NULL pointer dereference
1966740 - CVE-2019-17567 httpd: mod_proxy_wstunnel tunneling of non Upgraded connection
1966743 - CVE-2021-30641 httpd: Unexpected URL matching with 'MergeSlashes OFF'
1995634 - CVE-2021-3712 openssl: Read buffer overruns processing ASN.1 strings
-
Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
-
Bugs fixed (https://bugzilla.redhat.com/):
1992006 - CVE-2021-29923 golang: net: incorrect parsing of extraneous zero characters at the beginning of an IP address octet
1995656 - CVE-2021-36221 golang: net/http/httputil: panic due to racy read of persistConn after handler panic
- JIRA issues fixed (https://issues.jboss.org/):
TRACING-2235 - Release RHOSDT 2.1
- ==========================================================================
Ubuntu Security Notice USN-5425-1
May 17, 2022
pcre3 vulnerabilities
A security issue affects these releases of Ubuntu and its derivatives:
- Ubuntu 22.04 LTS
- Ubuntu 21.10
- Ubuntu 20.04 LTS
- Ubuntu 18.04 LTS
- Ubuntu 16.04 ESM
- Ubuntu 14.04 ESM
Summary:
Several security issues were fixed in PCRE.
Software Description:
- pcre3: Perl 5 Compatible Regular Expression Library
Details:
Yunho Kim discovered that PCRE incorrectly handled memory when handling certain regular expressions. An attacker could possibly use this issue to cause applications using PCRE to expose sensitive information. This issue only affects Ubuntu 18.04 LTS, Ubuntu 20.04 LTS, Ubuntu 21.10 and Ubuntu 22.04 LTS. (CVE-2019-20838)
It was discovered that PCRE incorrectly handled memory when handling certain regular expressions. An attacker could possibly use this issue to cause applications using PCRE to have unexpected behavior. This issue only affects Ubuntu 14.04 ESM, Ubuntu 16.04 ESM, Ubuntu 18.04 LTS and Ubuntu 20.04 LTS. (CVE-2020-14155)
Update instructions:
The problem can be corrected by updating your system to the following package versions:
Ubuntu 22.04 LTS: libpcre3 2:8.39-13ubuntu0.22.04.1
Ubuntu 21.10: libpcre3 2:8.39-13ubuntu0.21.10.1
Ubuntu 20.04 LTS: libpcre3 2:8.39-12ubuntu0.1
Ubuntu 18.04 LTS: libpcre3 2:8.39-9ubuntu0.1
Ubuntu 16.04 ESM: libpcre3 2:8.38-3.1ubuntu0.1~esm1
Ubuntu 14.04 ESM: libpcre3 1:8.31-2ubuntu2.3+esm1
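Assuming the stock apt tooling, the update instructions above can be sketched as follows (the service names in the restart step are examples, not part of the advisory):

```shell
# Refresh package metadata and pull in the patched libpcre3 build
sudo apt-get update
sudo apt-get install --only-upgrade libpcre3

# Confirm the installed version matches the table in this notice
dpkg -s libpcre3 | grep '^Version:'

# Restart long-running services linked against PCRE
# (apache2/nginx are examples; restart whatever uses PCRE on your host)
sudo systemctl restart apache2 nginx
```

Any process that loaded the old libpcre3 keeps it mapped until restarted, which is why the restart step matters.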
After a standard system update you need to restart applications using PCRE, such as the Apache HTTP server and Nginx, to make all the necessary changes.
Summary:
Red Hat OpenShift Container Platform release 4.11.0 is now available with updates to packages and images that fix several bugs and add enhancements.
This release includes a security update for Red Hat OpenShift Container Platform 4.11.
Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Description:
Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.
This advisory contains the container images for Red Hat OpenShift Container Platform 4.11.0. See the following advisory for the RPM packages for this release:
https://access.redhat.com/errata/RHSA-2022:5068
Space precludes documenting all of the container images in this advisory. See the following Release Notes documentation, which will be updated shortly for this release, for details about these changes:
https://docs.openshift.com/container-platform/4.11/release_notes/ocp-4-11-release-notes.html
Security Fix(es):
- go-getter: command injection vulnerability (CVE-2022-26945)
- go-getter: unsafe download (issue 1 of 3) (CVE-2022-30321)
- go-getter: unsafe download (issue 2 of 3) (CVE-2022-30322)
- go-getter: unsafe download (issue 3 of 3) (CVE-2022-30323)
- nanoid: Information disclosure via valueOf() function (CVE-2021-23566)
- sanitize-url: XSS (CVE-2021-23648)
- minimist: prototype pollution (CVE-2021-44906)
- node-fetch: exposure of sensitive information to an unauthorized actor (CVE-2022-0235)
- prometheus/client_golang: Denial of service using InstrumentHandlerCounter (CVE-2022-21698)
- golang: crash in a golang.org/x/crypto/ssh server (CVE-2022-27191)
- go-getter: writes SSH credentials into logfile, exposing sensitive credentials to local uses (CVE-2022-29810)
- opencontainers: OCI manifest and index parsing confusion (CVE-2021-41190)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
You may download the oc tool and use it to inspect release image metadata as follows:
(For x86_64 architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.11.0-x86_64
The image digest is sha256:300bce8246cf880e792e106607925de0a404484637627edf5f517375517d54a4
(For aarch64 architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.11.0-aarch64
The image digest is sha256:29fa8419da2afdb64b5475d2b43dad8cc9205e566db3968c5738e7a91cf96dfe
(For s390x architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.11.0-s390x
The image digest is sha256:015d6180238b4024d11dfef6751143619a0458eccfb589f2058ceb1a6359dd46
(For ppc64le architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.11.0-ppc64le
The image digest is sha256:5052f8d5597c6656ca9b6bfd3de521504c79917aa80feb915d3c8546241f86ca
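The digests listed above can also be used to address the release image immutably rather than via the floating 4.11.0 tag; a sketch for x86_64, assuming the oc client is installed:

```shell
# Inspect release metadata by immutable digest (x86_64 digest from this advisory)
oc adm release info \
  quay.io/openshift-release-dev/ocp-release@sha256:300bce8246cf880e792e106607925de0a404484637627edf5f517375517d54a4
```

Pinning by digest guarantees the inspected image is byte-for-byte the one this advisory describes, even if the tag is later re-pointed.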
All OpenShift Container Platform 4.11 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift Console or the CLI oc command. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.11/updating/updating-cluster-cli.html
- Solution:
For OpenShift Container Platform 4.11 see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this asynchronous errata update:
https://docs.openshift.com/container-platform/4.11/release_notes/ocp-4-11-release-notes.html
Details on how to access this content are available at https://docs.openshift.com/container-platform/4.11/updating/updating-cluster-cli.html
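For the CLI path mentioned above, a minimal sketch of checking for and requesting the update; which versions actually appear depends on your cluster's channel, so treat this as illustrative:

```shell
# Show the current version, update channel, and any updates available to this cluster
oc adm upgrade

# Request the update once 4.11.0 appears in the available list
oc adm upgrade --to=4.11.0
```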
- Bugs fixed (https://bugzilla.redhat.com/):
1817075 - MCC & MCO don't free leader leases during shut down -> 10 minutes of leader election timeouts
1822752 - cluster-version operator stops applying manifests when blocked by a precondition check
1823143 - oc adm release extract --command, --tools doesn't pull from localregistry when given a localregistry/image
1858418 - [OCPonRHV] OpenShift installer fails when Blank template is missing in oVirt/RHV
1859153 - [AWS] An IAM error occurred occasionally during the installation phase: Invalid IAM Instance Profile name
1896181 - [ovirt] install fails: due to terraform error "Cannot run VM. VM is being updated" on vm resource
1898265 - [OCP 4.5][AWS] Installation failed: error updating LB Target Group
1902307 - [vSphere] cloud labels management via cloud provider makes nodes not ready
1905850 - oc adm policy who-can failed to check the operatorcondition/status resource
1916279 - [OCPonRHV] Sometimes terraform installation fails on -failed to fetch Cluster(another terraform bug)
1917898 - [ovirt] install fails: due to terraform error "Tag not matched: expect but got " on vm resource
1918005 - [vsphere] If there are multiple port groups with the same name installation fails
1918417 - IPv6 errors after exiting crictl
1918690 - Should update the KCM resource-graph timely with the latest configure
1919980 - oVirt installer fails due to terraform error "Failed to wait for Templte(...) to become ok"
1921182 - InspectFailed: kubelet Failed to inspect image: rpc error: code = DeadlineExceeded desc = context deadline exceeded
1923536 - Image pullthrough does not pass 429 errors back to capable clients
1926975 - [aws-c2s] kube-apiserver crashloops due to missing cloud config
1928932 - deploy/route_crd.yaml in openshift/router uses deprecated v1beta1 CRD API
1932812 - Installer uses the terraform-provider in the Installer's directory if it exists
1934304 - MemoryPressure Top Pod Consumers seems to be 2x expected value
1943937 - CatalogSource incorrect parsing validation
1944264 - [ovn] CNO should gracefully terminate OVN databases
1944851 - List of ingress routes not cleaned up when routers no longer exist - take 2
1945329 - In k8s 1.21 bump conntrack 'should drop INVALID conntrack entries' tests are disabled
1948556 - Cannot read property 'apiGroup' of undefined error viewing operator CSV
1949827 - Kubelet bound to incorrect IPs, referring to incorrect NICs in 4.5.x
1957012 - Deleting the KubeDescheduler CR does not remove the corresponding deployment or configmap
1957668 - oc login does not show link to console
1958198 - authentication operator takes too long to pick up a configuration change
1958512 - No 1.25 shown in REMOVEDINRELEASE for apis audited with k8s.io/removed-release 1.25 and k8s.io/deprecated true
1961233 - Add CI test coverage for DNS availability during upgrades
1961844 - baremetal ClusterOperator installed by CVO does not have relatedObjects
1965468 - [OSP] Delete volume snapshots based on cluster ID in their metadata
1965934 - can not get new result with "Refresh off" if click "Run queries" again
1965969 - [aws] the public hosted zone id is not correct in the destroy log, while destroying a cluster which is using BYO private hosted zone.
1968253 - GCP CSI driver can provision volume with access mode ROX
1969794 - [OSP] Document how to use image registry PVC backend with custom availability zones
1975543 - [OLM] Remove stale cruft installed by CVO in earlier releases
1976111 - [tracker] multipathd.socket is missing start conditions
1976782 - Openshift registry starts to segfault after S3 storage configuration
1977100 - Pod failed to start with message "set CPU load balancing: readdirent /proc/sys/kernel/sched_domain/cpu66/domain0: no such file or directory"
1978303 - KAS pod logs show: [SHOULD NOT HAPPEN] ...failed to convert new object...CertificateSigningRequest) to smd typed: .status.conditions: duplicate entries for key [type=\"Approved\"]
1978798 - [Network Operator] Upgrade: The configuration to enable network policy ACL logging is missing on the cluster upgraded from 4.7->4.8
1979671 - Warning annotation for pods with cpu requests or limits on single-node OpenShift cluster without workload partitioning
1982737 - OLM does not warn on invalid CSV
1983056 - IP conflict while recreating Pod with fixed name
1984785 - LSO CSV does not contain disconnected annotation
1989610 - Unsupported data types should not be rendered on operand details page
1990125 - co/image-registry is degrade because ImagePrunerDegraded: Job has reached the specified backoff limit
1990384 - 502 error on "Observe -> Alerting" UI after disabled local alertmanager
1992553 - all the alert rules' annotations "summary" and "description" should comply with the OpenShift alerting guidelines
1994117 - Some hardcodes are detected at the code level in orphaned code
1994820 - machine controller doesn't send vCPU quota failed messages to cluster install logs
1995953 - Ingresscontroller change the replicas to scaleup first time will be rolling update for all the ingress pods
1996544 - AWS region ap-northeast-3 is missing in installer prompt
1996638 - Helm operator manager container restart when CR is creating&deleting
1997120 - test_recreate_pod_in_namespace fails - Timed out waiting for namespace
1997142 - OperatorHub: Filtering the OperatorHub catalog is extremely slow
1997704 - [osp][octavia lb] given loadBalancerIP is ignored when creating a LoadBalancer type svc
1999325 - FailedMount MountVolume.SetUp failed for volume "kube-api-access" : object "openshift-kube-scheduler"/"kube-root-ca.crt" not registered
1999529 - Must gather fails to gather logs for all the namespace if server doesn't have volumesnapshotclasses resource
1999891 - must-gather collects backup data even when Pods fails to be created
2000653 - Add hypershift namespace to exclude namespaces list in descheduler configmap
2002009 - IPI Baremetal, qemu-convert takes to long to save image into drive on slow/large disks
2002602 - Storageclass creation page goes blank when "Enable encryption" is clicked if there is a syntax error in the configmap
2002868 - Node exporter not able to scrape OVS metrics
2005321 - Web Terminal is not opened on Stage of DevSandbox when terminal instance is not created yet
2005694 - Removing proxy object takes up to 10 minutes for the changes to propagate to the MCO
2006067 - Objects are not valid as a React child
2006201 - ovirt-csi-driver-node pods are crashing intermittently
2007246 - Openshift Container Platform - Ingress Controller does not set allowPrivilegeEscalation in the router deployment
2007340 - Accessibility issues on topology - list view
2007611 - TLS issues with the internal registry and AWS S3 bucket
2007647 - oc adm release info --changes-from does not show changes in repos that squash-merge
2008486 - Double scroll bar shows up on dragging the task quick search to the bottom
2009345 - Overview page does not load from openshift console for some set of users after upgrading to 4.7.19
2009352 - Add image-registry usage metrics to telemeter
2009845 - Respect overrides changes during installation
2010361 - OpenShift Alerting Rules Style-Guide Compliance
2010364 - OpenShift Alerting Rules Style-Guide Compliance
2010393 - [sig-arch][Late] clients should not use APIs that are removed in upcoming releases [Suite:openshift/conformance/parallel]
2011525 - Rate-limit incoming BFD to prevent ovn-controller DoS
2011895 - Details about cloud errors are missing from PV/PVC errors
2012111 - LSO still try to find localvolumeset which is already deleted
2012969 - need to figure out why osupdatedstart to reboot is zero seconds
2013144 - Developer catalog category links could not be open in a new tab (sharing and open a deep link works fine)
2013461 - Import deployment from Git with s2i expose always port 8080 (Service and Pod template, not Route) if another Route port is selected by the user
2013734 - unable to label downloads route in openshift-console namespace
2013822 - ensure that the container-tools content comes from the RHAOS plashets
2014161 - PipelineRun logs are delayed and stuck on a high log volume
2014240 - Image registry uses ICSPs only when source exactly matches image
2014420 - Topology page is crashed
2014640 - Cannot change storage class of boot disk when cloning from template
2015023 - Operator objects are re-created even after deleting it
2015042 - Adding a template from the catalog creates a secret that is not owned by the TemplateInstance
2015356 - Different status shows on VM list page and details page
2015375 - PVC creation for ODF/IBM Flashsystem shows incorrect types
2015459 - [azure][openstack]When image registry configure an invalid proxy, registry pods are CrashLoopBackOff
2015800 - [IBM]Shouldn't change status.storage.bucket and status.storage.resourceKeyCRN when update sepc.stroage,ibmcos with invalid value
2016425 - Adoption controller generating invalid metadata.Labels for an already adopted Subscription resource
2016534 - externalIP does not work when egressIP is also present
2017001 - Topology context menu for Serverless components always open downwards
2018188 - VRRP ID conflict between keepalived-ipfailover and cluster VIPs
2018517 - [sig-arch] events should not repeat pathologically expand_less failures - s390x CI
2019532 - Logger object in LSO does not log source location accurately
2019564 - User settings resources (ConfigMap, Role, RB) should be deleted when a user is deleted
2020483 - Parameter $auto_interval_period is in Period drop-down list
2020622 - e2e-aws-upi and e2e-azure-upi jobs are not working
2021041 - [vsphere] Not found TagCategory when destroying ipi cluster
2021446 - openshift-ingress-canary is not reporting DEGRADED state, even though the canary route is not available and accessible
2022253 - Web terminal view is broken
2022507 - Pods stuck in OutOfpods state after running cluster-density
2022611 - Remove BlockPools(no use case) and Object(redundat with Overview) tab on the storagesystem page for NooBaa only and remove BlockPools tab for External mode deployment
2022745 - Cluster reader is not able to list NodeNetwork objects
2023295 - Must-gather tool gathering data from custom namespaces.
2023691 - ClusterIP internalTrafficPolicy does not work for ovn-kubernetes
2024427 - oc completion zsh doesn't auto complete
2024708 - The form for creating operational CRs is badly rendering filed names ("obsoleteCPUs" -> "Obsolete CP Us" )
2024821 - [Azure-File-CSI] need more clear info when requesting pvc with volumeMode Block
2024938 - CVE-2021-41190 opencontainers: OCI manifest and index parsing confusion
2025624 - Ingress router metrics endpoint serving old certificates after certificate rotation
2026356 - [IPI on Azure] The bootstrap machine type should be same as master
2026461 - Completed pods in Openshift cluster not releasing IP addresses and results in err: range is full unless manually deleted
2027603 - [UI] Dropdown doesn't close on it's own after arbiter zone selection on 'Capacity and nodes' page
2027613 - Users can't silence alerts from the dev console
2028493 - OVN-migration failed - ovnkube-node: error waiting for node readiness: timed out waiting for the condition
2028532 - noobaa-pg-db-0 pod stuck in Init:0/2
2028821 - Misspelled label in ODF management UI - MCG performance view
2029438 - Bootstrap node cannot resolve api-int because NetworkManager replaces resolv.conf
2029470 - Recover from suddenly appearing old operand revision WAS: kube-scheduler-operator test failure: Node's not achieving new revision
2029797 - Uncaught exception: ResizeObserver loop limit exceeded
2029835 - CSI migration for vSphere: Inline-volume tests failing
2030034 - prometheusrules.openshift.io: dial tcp: lookup prometheus-operator.openshift-monitoring.svc on 172.30.0.10:53: no such host
2030530 - VM created via customize wizard has single quotation marks surrounding its password
2030733 - wrong IP selected to connect to the nodes when ExternalCloudProvider enabled
2030776 - e2e-operator always uses quay master images during presubmit tests
2032559 - CNO allows migration to dual-stack in unsupported configurations
2032717 - Unable to download ignition after coreos-installer install --copy-network
2032924 - PVs are not being cleaned up after PVC deletion
2033482 - [vsphere] two variables in tf are undeclared and get warning message during installation
2033575 - monitoring targets are down after the cluster run for more than 1 day
2033711 - IBM VPC operator needs e2e csi tests for ibmcloud
2033862 - MachineSet is not scaling up due to an OpenStack error trying to create multiple ports with the same MAC address
2034147 - OpenShift VMware IPI Installation fails with Resource customization when corespersocket is unset and vCPU count is not a multiple of 4
2034296 - Kubelet and Crio fails to start during upgrde to 4.7.37
2034411 - [Egress Router] No NAT rules for ipv6 source and destination created in ip6tables-save
2034688 - Allow Prometheus/Thanos to return 401 or 403 when the request isn't authenticated
2034958 - [sig-network] Conntrack should be able to preserve UDP traffic when initial unready endpoints get ready
2035005 - MCD is not always removing in progress taint after a successful update
2035334 - [RFE] [OCPonRHV] Provision machines with preallocated disks
2035899 - Operator-sdk run bundle doesn't support arm64 env
2036202 - Bump podman to >= 3.3.0 so that setup of multiple credentials for a single registry which can be distinguished by their path will work
2036594 - [MAPO] Machine goes to failed state due to a momentary error of the cluster etcd
2036948 - SR-IOV Network Device Plugin should handle offloaded VF instead of supporting only PF
2037190 - dns operator status flaps between True/False/False and True/True/(False|True) after updating dnses.operator.openshift.io/default
2037447 - Ingress Operator is not closing TCP connections.
2037513 - I/O metrics from the Kubernetes/Compute Resources/Cluster Dashboard show as no datapoints found
2037542 - Pipeline Builder footer is not sticky and yaml tab doesn't use full height
2037610 - typo for the Terminated message from thanos-querier pod description info
2037620 - Upgrade playbook should quit directly when trying to upgrade RHEL-7 workers to 4.10
2037625 - AppliedClusterResourceQuotas can not be shown on project overview
2037626 - unable to fetch ignition file when scaleup rhel worker nodes on cluster enabled Tang disk encryption
2037628 - Add test id to kms flows for automation
2037721 - PodDisruptionBudgetAtLimit alert fired in SNO cluster
2037762 - Wrong ServiceMonitor definition is causing failure during Prometheus configuration reload and preventing changes from being applied
2037841 - [RFE] use /dev/ptp_hyperv on Azure/AzureStack
2038115 - Namespace and application bar is not sticky anymore
2038244 - Import from git ignore the given servername and could not validate On-Premises GitHub and BitBucket installations
2038405 - openshift-e2e-aws-workers-rhel-workflow in CI step registry broken
2038774 - IBM-Cloud OVN IPsec fails, IKE UDP ports and ESP protocol not in security group
2039135 - the error message is not clear when using "opm index prune" to prune a file-based index image
2039161 - Note about token for encrypted PVCs should be removed when only cluster wide encryption checkbox is selected
2039253 - ovnkube-node crashes on duplicate endpoints
2039256 - Domain validation fails when TLD contains a digit.
2039277 - Topology list view items are not highlighted on keyboard navigation
2039462 - Application tab in User Preferences dropdown menus are too wide.
2039477 - validation icon is missing from Import from git
2039589 - The toolbox command always ignores [command] the first time
2039647 - Some developer perspective links are not deep-linked causes developer to sometimes delete/modify resources in the wrong project
2040180 - Bug when adding a new table panel to a dashboard for OCP UI with only one value column
2040195 - Ignition fails to enable systemd units with backslash-escaped characters in their names
2040277 - ThanosRuleNoEvaluationFor10Intervals alert description is wrong
2040488 - OpenShift-Ansible BYOH Unit Tests are Broken
2040635 - CPU Utilisation is negative number for "Kubernetes / Compute Resources / Cluster" dashboard
2040654 - 'oc adm must-gather -- some_script' should exit with same non-zero code as the failed 'some_script' exits
2040779 - Nodeport svc not accessible when the backend pod is on a window node
2040933 - OCP 4.10 nightly build will fail to install if multiple NICs are defined on KVM nodes
2041133 - 'oc explain route.status.ingress.conditions' shows type 'Currently only Ready' but actually is 'Admitted'
2041454 - Garbage values accepted for --reference-policy in oc import-image without any error
2041616 - Ingress operator tries to manage DNS of additional ingresscontrollers that are not under clusters basedomain, which can't work
2041769 - Pipeline Metrics page not showing data for normal user
2041774 - Failing git detection should not recommend Devfiles as import strategy
2041814 - The KubeletConfigController wrongly process multiple confs for a pool
2041940 - Namespace pre-population not happening till a Pod is created
2042027 - Incorrect feedback for "oc label pods --all"
2042348 - Volume ID is missing in output message when expanding volume which is not mounted.
2042446 - CSIWithOldVSphereHWVersion alert recurring despite upgrade to vmx-15
2042501 - use lease for leader election
2042587 - ocm-operator: Improve reconciliation of CA ConfigMaps
2042652 - Unable to deploy hw-event-proxy operator
2042838 - The status of container is not consistent on Container details and pod details page
2042852 - Topology toolbars are unaligned to other toolbars
2042999 - A pod cannot reach kubernetes.default.svc.cluster.local cluster IP
2043035 - Wrong error code provided when request contains invalid argument
2043068 - available of text disappears in Utilization item if x is 0
2043080 - openshift-installer intermittent failure on AWS with Error: InvalidVpcID.NotFound: The vpc ID 'vpc-123456789' does not exist
2043094 - ovnkube-node not deleting stale conntrack entries when endpoints go away
2043118 - Host should transition through Preparing when HostFirmwareSettings changed
2043132 - Add a metric when vsphere csi storageclass creation fails
2043314 - oc debug node does not meet compliance requirement
2043336 - Creating multi SriovNetworkNodePolicy cause the worker always be draining
2043428 - Address Alibaba CSI driver operator review comments
2043533 - Update ironic, inspector, and ironic-python-agent to latest bugfix release
2043672 - [MAPO] root volumes not working
2044140 - When 'oc adm upgrade --to-image ...' rejects an update as not recommended, it should mention --allow-explicit-upgrade
2044207 - [KMS] The data in the text box does not get cleared on switching the authentication method
2044227 - Test Managed cluster should only include cluster daemonsets that have maxUnavailable update of 10 or 33 percent fails
2044412 - Topology list misses separator lines and hover effect let the list jump 1px
2044421 - Topology list does not allow selecting an application group anymore
2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor
2044803 - Unify button text style on VM tabs
2044824 - Failing test in periodics: [sig-network] Services should respect internalTrafficPolicy=Local Pod and Node, to Pod (hostNetwork: true) [Feature:ServiceInternalTrafficPolicy] [Skipped:Network/OVNKubernetes] [Suite:openshift/conformance/parallel] [Suite:k8s]
2045065 - Scheduled pod has nodeName changed
2045073 - Bump golang and build images for local-storage-operator
2045087 - Failed to apply sriov policy on intel nics
2045551 - Remove enabled FeatureGates from TechPreviewNoUpgrade
2045559 - API_VIP moved when kube-api container on another master node was stopped
2045577 - [ocp 4.9 | ovn-kubernetes] ovsdb_idl|WARN|transaction error: {"details":"cannot delete Datapath_Binding row 29e48972-xxxx because of 2 remaining reference(s)","error":"referential integrity violation
2045872 - SNO: cluster-policy-controller failed to start due to missing serving-cert/tls.crt
2045880 - CVE-2022-21698 prometheus/client_golang: Denial of service using InstrumentHandlerCounter
2046133 - [MAPO]IPI proxy installation failed
2046156 - Network policy: preview of affected pods for non-admin shows empty popup
2046157 - Still uses pod-security.admission.config.k8s.io/v1alpha1 in admission plugin config
2046191 - Opeartor pod is missing correct qosClass and priorityClass
2046277 - openshift-installer intermittent failure on AWS with "Error: Provider produced inconsistent result after apply" when creating the module.vpc.aws_subnet.private_subnet[0] resource
2046319 - oc debug cronjob command failed with error "unable to extract pod template from type v1.CronJob".
2046435 - Better Devfile Import Strategy support in the 'Import from Git' flow
2046496 - Awkward wrapping of project toolbar on mobile
2046497 - Re-enable TestMetricsEndpoint test case in console operator e2e tests
2046498 - "All Projects" and "all applications" use different casing on topology page
2046591 - Auto-update boot source is not available while create new template from it
2046594 - "Requested template could not be found" while creating VM from user-created template
2046598 - Auto-update boot source size unit is byte on customize wizard
2046601 - Cannot create VM from template
2046618 - Start last run action should contain current user name in the started-by annotation of the PLR
2046662 - Should upgrade the go version to be 1.17 for example go operator memcached-operator
2047197 - Sould upgrade the operator_sdk.util version to "0.4.0" for the "osdk_metric" module
2047257 - [CP MIGRATION] Node drain failure during control plane node migration
2047277 - Storage status is missing from status card of virtualization overview
2047308 - Remove metrics and events for master port offsets
2047310 - Running VMs per template card needs empty state when no VMs exist
2047320 - New route annotation to show another URL or hide topology URL decorator doesn't work for Knative Services
2047335 - 'oc get project' caused 'Observed a panic: cannot deep copy core.NamespacePhase' when AllRequestBodies is used
2047362 - Removing prometheus UI access breaks origin test
2047445 - ovs-configure mis-detecting the ipv6 status on IPv4 only cluster causing Deployment failure
2047670 - Installer should pre-check that the hosted zone is not associated with the VPC and throw the error message.
2047702 - Issue described on bug #2013528 reproduced: mapi_current_pending_csr is always set to 1 on OpenShift Container Platform 4.8
2047710 - [OVN] ovn-dbchecker CrashLoopBackOff and sbdb jsonrpc unix socket receive error
2047732 - [IBM]Volume is not deleted after destroy cluster
2047741 - openshift-installer intermittent failure on AWS with "Error: Provider produced inconsistent result after apply" when creating the module.masters.aws_network_interface.master[1] resource
2047790 - [sig-network][Feature:Router] The HAProxy router should override the route host for overridden domains with a custom value [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2047799 - release-openshift-ocp-installer-e2e-aws-upi-4.9
2047870 - Prevent redundant queries of BIOS settings in HostFirmwareController
2047895 - Fix architecture naming in oc adm release mirror for aarch64
2047911 - e2e: Mock CSI tests fail on IBM ROKS clusters
2047913 - [sig-network][Feature:Router] The HAProxy router should override the route host for overridden domains with a custom value [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2047925 - [FJ OCP4.10 Bug]: IRONIC_KERNEL_PARAMS does not contain coreos_kernel_params during iPXE boot
2047935 - [4.11] Bootimage bump tracker
2047998 - [alicloud] CCM deploys alibaba-cloud-controller-manager from quay.io/openshift/origin-
2048059 - Service Level Agreement (SLA) always show 'Unknown'
2048067 - [IPI on Alibabacloud] "Platform Provisioning Check" tells '"ap-southeast-6": enhanced NAT gateway is not supported', which seems false
2048186 - Image registry operator panics when finalizes config deletion
2048214 - Can not push images to image-registry when enabling KMS encryption in AlibabaCloud
2048219 - MetalLB: User should not be allowed add same bgp advertisement twice in BGP address pool
2048221 - Capitalization of titles in the VM details page is inconsistent.
2048222 - [AWS GovCloud] Cluster can not be installed on AWS GovCloud regions via terminal interactive UI.
2048276 - Cypress E2E tests fail due to a typo in test-cypress.sh
2048333 - prometheus-adapter becomes inaccessible during rollout
2048352 - [OVN] node does not recover after NetworkManager restart, NotReady and unreachable
2048442 - [KMS] UI does not have option to specify kube auth path and namespace for cluster wide encryption
2048451 - Custom serviceEndpoints in install-config are reported to be unreachable when environment uses a proxy
2048538 - Network policies are not implemented or updated by OVN-Kubernetes
2048541 - incorrect rbac check for install operator quick starts
2048563 - Leader election conventions for cluster topology
2048575 - IP reconciler cron job failing on single node
2048686 - Check MAC address provided on the install-config.yaml file
2048687 - All bare metal jobs are failing now due to End of Life of centos 8
2048793 - Many Conformance tests are failing in OCP 4.10 with Kuryr
2048803 - CRI-O seccomp profile out of date
2048824 - [IBMCloud] ibm-vpc-block-csi-node does not specify an update strategy, only resource requests, or priority class
2048841 - [ovn] Missing lr-policy-list and snat rules for egressip when new pods are added
2048955 - Alibaba Disk CSI Driver does not have CI
2049073 - AWS EFS CSI driver should use the trusted CA bundle when cluster proxy is configured
2049078 - Bond CNI: Failed to attach Bond NAD to pod
2049108 - openshift-installer intermittent failure on AWS with 'Error: Error waiting for NAT Gateway (nat-xxxxx) to become available'
2049117 - e2e-metal-ipi-serial-ovn-ipv6 is failing frequently
2049133 - oc adm catalog mirror throws 'missing signature key' error when using file://local/index
2049142 - Missing "app" label
2049169 - oVirt CSI driver should use the trusted CA bundle when cluster proxy is configured
2049234 - ImagePull fails with error "unable to pull manifest from example.com/busy.box:v5 invalid reference format"
2049410 - external-dns-operator creates provider section, even when not requested
2049483 - Sidepanel for Connectors/workloads in topology shows invalid tabs
2049613 - MTU migration on SDN IPv4 causes API alerts
2049671 - system:serviceaccount:openshift-cluster-csi-drivers:aws-ebs-csi-driver-operator trying to GET and DELETE /api/v1/namespaces/openshift-cluster-csi-drivers/configmaps/kube-cloud-config which does not exist
2049687 - superfluous apirequestcount entries in audit log
2049775 - cloud-provider-config change not applied when ExternalCloudProvider enabled
2049787 - (dummy bug) ovn-kubernetes ExternalTrafficPolicy still SNATs
2049832 - ContainerCreateError when trying to launch large (>500) numbers of pods across nodes
2049872 - cluster storage operator AWS credentialsrequest lacks KMS privileges
2049889 - oc new-app --search nodejs warns about access to sample content on quay.io
2050005 - Plugin module IDs can clash with console module IDs causing runtime errors
2050011 - Observe > Metrics page: Timespan text input and dropdown do not align
2050120 - Missing metrics in kube-state-metrics
2050146 - Installation on PSI fails with: 'openstack platform does not have the required standard-attr-tag network extension'
2050173 - [aws-ebs-csi-driver] Merge upstream changes since v1.2.0
2050180 - [aws-efs-csi-driver] Merge upstream changes since v1.3.2
2050300 - panic in cluster-storage-operator while updating status
2050332 - Malformed ClusterClaim lifetimes cause the clusterclaims-controller to silently fail to reconcile all clusterclaims
2050335 - azure-disk failed to mount with error special device does not exist
2050345 - alert data for burn budget needs to be updated to prevent regression
2050407 - revert "force cert rotation every couple days for development" in 4.11
2050409 - ip-reconcile job is failing consistently
2050452 - Update osType and hardware version used by RHCOS OVA to indicate it is a RHEL 8 guest
2050466 - machine config update with invalid container runtime config should be more robust
2050637 - Blog Link not re-directing to the intended website in the last modal in the Dev Console Onboarding Tour
2050698 - After upgrading the cluster the console still shows 0 of N, 0% progress for worker nodes
2050707 - up test for prometheus pod looks too far in the past
2050767 - Vsphere upi tries to access vsphere during manifests generation phase
2050853 - CVE-2021-23566 nanoid: Information disclosure via valueOf() function
2050882 - Crio appears to be coredumping in some scenarios
2050902 - not all resources created during import have common labels
2050946 - Cluster-version operator fails to notice TechPreviewNoUpgrade featureSet change after initialization-lookup error
2051320 - Need to build ose-aws-efs-csi-driver-operator-bundle-container image for 4.11
2051333 - [aws] records in public hosted zone and BYO private hosted zone were not deleted.
2051377 - Unable to switch vfio-pci to netdevice in policy
2051378 - Template wizard crashes when no templates exist
2051423 - migrate loadbalancers from amphora to ovn not working
2051457 - [RFE] PDB for cloud-controller-manager to avoid going too many replicas down
2051470 - prometheus: Add validations for relabel configs
2051558 - RoleBinding in project without subject is causing "Project access" page to fail
2051578 - Sort is broken for the Status and Version columns on the Cluster Settings > ClusterOperators page
2051583 - sriov must-gather image doesn't work
2051593 - Summary Interval Hardcoded in PTP Operator if Set in the Global Body Instead of Command Line
2051611 - Remove Check which enforces summary_interval must match logSyncInterval
2051642 - Remove "Tech-Preview" Label for the Web Terminal GA release
2051657 - Remove 'Tech preview' from minimal deployment Storage System creation
2051718 - MetaLLB: Validation Webhook: BGPPeer hold time is allowed to be set to less than 3s
2051722 - MetalLB: BGPPeer object does not have ability to set ebgpMultiHop
2051881 - [vSphere CSI driver Operator] RWX volumes counts metrics vsphere_rwx_volumes_total not valid
2051954 - Allow changing of policyAuditConfig ratelimit post-deployment
2051969 - Need to build local-storage-operator-metadata-container image for 4.11
2051985 - An APIRequestCount without dots in the name can cause a panic
2052016 - MetalLB: Webhook Validation: Two BGPPeers instances can have different router ID set.
2052034 - Can't start correct debug pod using pod definition yaml in OCP 4.8
2052055 - Whereabouts should implement client-go 1.22+
2052056 - Static pod installer should throttle creating new revisions
2052071 - local storage operator metrics target down after upgrade
2052095 - Infinite OAuth redirect loop post-upgrade to 4.10.0-rc.1
2052270 - FSyncControllerDegraded has "treshold" -> "threshold" typos
2052309 - [IBM Cloud] ibm-vpc-block-csi-controller does not specify an update strategy, priority class, or only resource requests
2052332 - Probe failures and pod restarts during 4.7 to 4.8 upgrade
2052393 - Failed to scale up RHEL machine against OVN cluster because the jq tool is required by configure-ovs.sh
2052398 - 4.9 to 4.10 upgrade fails for ovnkube-masters
2052415 - Pod density test causing problems when using kube-burner
2052513 - Failing webhooks will block an upgrade to 4.10 mid-way through the upgrade.
2052578 - Create new app from a private git repository using 'oc new app' with basic auth does not work.
2052595 - Remove dev preview badge from IBM FlashSystem deployment windows
2052618 - Node reboot causes duplicate persistent volumes
2052671 - Add Sprint 214 translations
2052674 - Remove extra spaces
2052700 - kube-controller-manger should use configmap lease
2052701 - kube-scheduler should use configmap lease
2052814 - go fmt fails in OSM after migration to go 1.17
2052840 - IMAGE_BUILDER=docker make test-e2e-operator-ocp runs with podman instead of docker
2052953 - Observe dashboard always opens for last viewed workload instead of the selected one
2052956 - Installing virtualization operator duplicates the first action on workloads in topology
2052975 - High cpu load on Juniper Qfx5120 Network switches after upgrade to Openshift 4.8.26
2052986 - Console crashes when Mid cycle hook in Recreate strategy(edit deployment/deploymentConfig) selects Lifecycle strategy as "Tags the current image as an image stream tag if the deployment succeeds"
2053006 - [ibm]Operator storage PROGRESSING and DEGRADED is true during fresh install for ocp4.11
2053104 - [vSphere CSI driver Operator] hw_version_total metric updates wrong value after upgrading nodes hardware version from vmx-13 to vmx-15
2053112 - nncp status is unknown when nnce is Progressing
2053118 - nncp Available condition reason should be exposed in oc get
2053168 - Ensure the core dynamic plugin SDK package has correct types and code
2053205 - ci-openshift-cluster-network-operator-master-e2e-agnostic-upgrade is failing most of the time
2053304 - Debug terminal no longer works in admin console
2053312 - requestheader IDP test doesn't wait for cleanup, causing high failure rates
2053334 - rhel worker scaleup playbook failed because missing some dependency of podman
2053343 - Cluster Autoscaler not scaling down nodes which seem to qualify for scale-down
2053491 - nmstate interprets interface names as float64 and subsequently crashes on state update
2053501 - Git import detection does not happen for private repositories
2053582 - inability to detect static lifecycle failure
2053596 - [IBM Cloud] Storage IOPS limitations and lack of IPI ETCD deployment options trigger leader election during cluster initialization
2053609 - LoadBalancer SCTP service leaves stale conntrack entry that causes issues if service is recreated
2053622 - PDB warning alert when CR replica count is set to zero
2053685 - Topology performance: Immutable .toJSON consumes a lot of CPU time when rendering a large topology graph (~100 nodes)
2053721 - When using RootDeviceHint rotational setting the host can fail to provision
2053922 - [OCP 4.8][OVN] pod interface: error while waiting on OVS.Interface.external-ids
2054095 - [release-4.11] Gather images.config.openshift.io cluster resource definition
2054197 - The ProjectHelmChartRepositrory schema has merged but has not been initialized in the cluster yet
2054200 - Custom created services in openshift-ingress removed even though the services are not of type LoadBalancer
2054238 - console-master-e2e-gcp-console is broken
2054254 - vSphere test failure: [Serial] [sig-auth][Feature:OAuthServer] [RequestHeaders] [IdP] test RequestHeaders IdP [Suite:openshift/conformance/serial]
2054285 - Services other than knative service also shows as KSVC in add subscription/trigger modal
2054319 - must-gather | gather_metallb_logs can't detect metallb pod
2054351 - Restart of ptp4l/phc2sys on change of PTPConfig happens more than once, socket error in event framework
2054385 - redhat-operator index image build failed with AMQ brew build - amq-interconnect-operator-metadata-container-1.10.13
2054564 - DPU network operator 4.10 branch need to sync with master
2054630 - cancel create silence from kebab menu of alerts page will navigate to the previous page
2054693 - Error deploying HorizontalPodAutoscaler with oc new-app command in OpenShift 4
2054701 - [MAPO] Events are not created for MAPO machines
2054705 - [tracker] nf_reinject calls nf_queue_entry_free on an already freed entry->state
2054735 - Bad link in CNV console
2054770 - IPI baremetal deployment metal3 pod crashes when using capital letters in hosts bootMACAddress
2054787 - SRO controller goes to CrashLoopBackOff status when the pull-secret does not have the correct permissions
2054950 - A large number is showing on disk size field
2055305 - Thanos Querier high CPU and memory usage till OOM
2055386 - MetalLB changes the shared external IP of a service upon updating the externalTrafficPolicy definition
2055433 - Unable to create br-ex as gateway is not found
2055470 - Ingresscontroller LB scope change behaviour differs for different values of aws-load-balancer-internal annotation
2055492 - The default YAML on vm wizard is not latest
2055601 - installer did not destroy .app DNS record in an IPI on ASH install
2055702 - Enable Serverless tests in CI
2055723 - CCM operator doesn't deploy resources after enabling TechPreviewNoUpgrade feature set.
2055729 - NodePerfCheck fires and stays active on momentary high latency
2055814 - Custom dynamic extension point causes runtime and compile time error
2055861 - cronjob collect-profiles failure leads node to reach OutOfPods status
2055980 - [dynamic SDK][internal] console plugin SDK does not support table actions
2056454 - Implement preallocated disks for oVirt in the cluster API provider
2056460 - Implement preallocated disks for oVirt in the OCP installer
2056496 - If image does not exist for builder image then upload jar form crashes
2056519 - unable to install IPI PRIVATE OpenShift cluster in Azure due to organization policies
2056607 - Running kubernetes-nmstate handler e2e tests stuck on OVN clusters
2056752 - Better to name the oc-mirror version info with more information, like oc version --client
2056802 - "enforcedLabelLimit|enforcedLabelNameLengthLimit|enforcedLabelValueLengthLimit" do not take effect
2056841 - [UI] [DR] Web console update is available pop-up is seen multiple times on the Hub cluster where the ODF operator is not installed, and unnecessarily pops up on the Managed cluster as well where the ODF operator is installed
2056893 - incorrect warning for --to-image in oc adm upgrade help
2056967 - MetalLB: speaker metrics is not updated when deleting a service
2057025 - Resource requests for the init-config-reloader container of prometheus-k8s- pods are too high
2057054 - SDK: k8s methods resolve into Response instead of the Resource
2057079 - [cluster-csi-snapshot-controller-operator] CI failure: events should not repeat pathologically
2057101 - oc commands working with images print an incorrect and inappropriate warning
2057160 - configure-ovs selects wrong interface on reboot
2057183 - OperatorHub: Missing "valid subscriptions" filter
2057251 - response code for Pod count graph changed from 422 to 200 periodically for about 30 minutes if pod is rescheduled
2057358 - [Secondary Scheduler] - cannot build bundle index image using the secondary scheduler operator bundle
2057387 - [Secondary Scheduler] - olm.skiprange, com.redhat.openshift.versions is incorrect and no minkubeversion
2057403 - CMO logs show forbidden: User "system:serviceaccount:openshift-monitoring:cluster-monitoring-operator" cannot get resource "replicasets" in API group "apps" in the namespace "openshift-monitoring"
2057495 - Alibaba Disk CSI driver does not provision small PVCs
2057558 - Marketplace operator polls too frequently for cluster operator status changes
2057633 - oc rsync reports misleading error when container is not found
2057642 - ClusterOperator status.conditions[].reason "etcd disk metrics exceeded..." should be a CamelCase slug
2057644 - FSyncControllerDegraded latches True, even after fsync latency recovers on all members
2057696 - Removing console still blocks OCP install from completing
2057762 - ingress operator should report Upgradeable False to remind user before upgrade to 4.10 when Non-SAN certs are used
2057832 - expr for record rule: "cluster:telemetry_selected_series:count" is improper
2057967 - KubeJobCompletion does not account for possible job states
2057990 - Add extra debug information to image signature workflow test
2057994 - SRIOV-CNI failed to load netconf: LoadConf(): failed to get VF information
2058030 - On OCP 4.10+ using OVNK8s on BM IPI, nodes register as localhost.localdomain
2058217 - [vsphere-problem-detector-operator] 'vsphere_rwx_volumes_total' metric name is confusing
2058225 - openshift_csi_share_ metrics are not found from telemeter server
2058282 - Websockets stop updating during cluster upgrades
2058291 - CI builds should have correct version of Kube without needing to push tags every time
2058368 - Openshift OVN-K got restarted multiple times with the errors "ovsdb-server/memory-trim-on-compaction on'' failed: exit status 1" and "ovndbchecker.go:118] unable to turn on memory trimming for SB DB, stderr", cluster unavailable
2058370 - e2e-aws-driver-toolkit CI job is failing
2058421 - 4.9.23-s390x-machine-os-content manifest invalid when mirroring content for disconnected install
2058424 - ConsolePlugin proxy always passes Authorization header even if authorize property is omitted or false
2058623 - Bootstrap server dropdown menu in Create Event Source- KafkaSource form is empty even if it's created
2058626 - Multiple Azure upstream kube fsgroupchangepolicy tests are permafailing expecting gid "1000" but getting "root"
2058671 - whereabouts IPAM CNI ip-reconciler cronjob specification requires hostnetwork, api-int lb usage & proper backoff
2058692 - [Secondary Scheduler] Creating secondaryscheduler instance fails with error "key failed with : secondaryschedulers.operator.openshift.io "secondary-scheduler" not found"
2059187 - [Secondary Scheduler] - key failed with : serviceaccounts "secondary-scheduler" is forbidden
2059212 - [tracker] Backport https://github.com/util-linux/util-linux/commit/eab90ef8d4f66394285e0cff1dfc0a27242c05aa
2059213 - ART cannot build installer images due to missing terraform binaries for some architectures
2059338 - A fully upgraded 4.10 cluster defaults to HW-13 hardware version even if HW-15 is default (and supported)
2059490 - The operator image in CSV file of the ART DPU network operator bundle is incorrect
2059567 - vMedia based IPI installation of OpenShift fails on Nokia servers due to issues with virtual media attachment and boot source override
2059586 - (release-4.11) Insights operator doesn't reconcile clusteroperator status condition messages
2059654 - Dynamic demo plugin proxy example out of date
2059674 - Demo plugin fails to build
2059716 - cloud-controller-manager flaps operator version during 4.9 -> 4.10 update
2059791 - [vSphere CSI driver Operator] didn't update 'vsphere_csi_driver_error' metric value when fixed the error manually
2059840 - [LSO]Could not gather logs for pod diskmaker-discovery and diskmaker-manager
2059943 - MetalLB: Move CI config files to metallb repo from dev-scripts repo
2060037 - Configure logging level of FRR containers
2060083 - CMO doesn't react to changes in clusteroperator console
2060091 - CMO produces invalid alertmanager statefulset if console cluster .status.consoleURL is unset
2060133 - [OVN RHEL upgrade] could not find IP addresses: failed to lookup link br-ex: Link not found
2060147 - RHEL8 Workers Need to Ensure libseccomp is up to date at install time
2060159 - LGW: External->Service of type ETP=Cluster doesn't go to the node
2060329 - Detect unsupported amount of workloads before rendering a lazy or crashing topology
2060334 - Azure VNET lookup fails when the NIC subnet is in a different resource group
2060361 - Unable to enumerate NICs because the 'primary' field is missing due to security restrictions
2060406 - Test 'operators should not create watch channels very often' fails
2060492 - Update PtpConfigSlave source-crs to use network_transport L2 instead of UDPv4
2060509 - Incorrect installation of ibmcloud vpc csi driver in IBM Cloud ROKS 4.10
2060532 - LSO e2e tests are run against default image and namespace
2060534 - openshift-apiserver pod in crashloop due to unable to reach kubernetes svc ip
2060549 - ErrorAddingLogicalPort: duplicate IP found in ECMP Pod route cache!
2060553 - service domain can't be resolved when networkpolicy is used in OCP 4.10-rc
2060583 - Remove Console internal-kubevirt plugin SDK package
2060605 - Broken access to public images: Unable to connect to the server: no basic auth credentials
2060617 - IBMCloud destroy DNS regex not strict enough
2060687 - Azure Ci: SubscriptionDoesNotSupportZone - does not support availability zones at location 'westus'
2060697 - [AWS] partitionNumber cannot work for specifying Partition number
2060714 - [DOCS] Change source_labels to sourceLabels in "Configuring remote write storage" section
2060837 - [oc-mirror] Catalog merging error when two or more bundles do not have a set Replace field
2060894 - Preceding/Trailing Whitespaces In Form Elements on the add page
2060924 - Console white-screens while using debug terminal
2060968 - Installation failing due to ironic-agent.service not starting properly
2060970 - Bump recommended FCOS to 35.20220213.3.0
2061002 - Conntrack entry is not removed for LoadBalancer IP
2061301 - Traffic Splitting Dialog is Confusing With Only One Revision
2061303 - Cachito request failure with vendor directory is out of sync with go.mod/go.sum
2061304 - workload info gatherer - don't serialize empty images map
2061333 - White screen for Pipeline builder page
2061447 - [GSS] local pv's are in terminating state
2061496 - etcd RecentBackup=Unknown ControllerStarted contains no message string
2061527 - [IBMCloud] infrastructure asset missing CloudProviderType
2061544 - AzureStack is hard-coded to use Standard_LRS for the disk type
2061549 - AzureStack install with internal publishing does not create api DNS record
2061611 - [upstream] The marker of KubeBuilder doesn't work if it is close to the code
2061732 - Cinder CSI crashes when API is not available
2061755 - Missing breadcrumb on the resource creation page
2061833 - A single worker can be assigned to multiple baremetal hosts
2061891 - [IPI on IBMCLOUD] missing 'br-sao' region in openshift installer
2061916 - mixed ingress and egress policies can result in half-isolated pods
2061918 - Topology Sidepanel style is broken
2061919 - Egress Ip entry stays on node's primary NIC post deletion from hostsubnet
2062007 - MCC bootstrap command lacks template flag
2062126 - IPfailover pod is crashing during creation showing keepalived_script doesn't exist
2062151 - Add RBAC for 'infrastructures' to operator bundle
2062355 - kubernetes-nmstate resources and logs not included in must-gathers
2062459 - Ingress pods scheduled on the same node
2062524 - [Kamelet Sink] Topology crashes on click of Event sink node if the resource is created source to Uri over ref
2062558 - Egress IP with openshift sdn is not functional on worker node.
2062568 - CVO does not trigger new upgrade again after fail to update to unavailable payload
2062645 - configure-ovs: don't restart networking if not necessary
2062713 - Special Resource Operator(SRO) - No sro_used_nodes metric
2062849 - hw event proxy is not binding on ipv6 local address
2062920 - Project selector is too tall with only a few projects
2062998 - AWS GovCloud regions are recognized as the unknown regions
2063047 - Configuring a full-path query log file in CMO breaks Prometheus with the latest version of the operator
2063115 - ose-aws-efs-csi-driver has invalid dependency in go.mod
2063164 - metal-ipi-ovn-ipv6 Job Permafailing and Blocking OpenShift 4.11 Payloads: insights operator is not available
2063183 - DefragDialTimeout is set to low for large scale OpenShift Container Platform - Cluster
2063194 - cluster-autoscaler-default will fail when automated etcd defrag is running on large scale OpenShift Container Platform 4 - Cluster
2063321 - [OVN]After reboot egress node, lr-policy-list was not correct, some duplicate records or missed internal IPs
2063324 - MCO template output directories created with wrong mode causing render failure in unprivileged container environments
2063375 - ptp operator upgrade from 4.9 to 4.10 stuck at pending due to service account requirements not met
2063414 - on OKD 4.10, when image-registry is enabled, the /etc/hosts entry is missing on some nodes
2063699 - Builds - Builds - Logs: i18n misses.
2063708 - Builds - Builds - Logs: translation correction needed.
2063720 - Metallb EBGP neighbor stuck in active until adding ebgp-multihop (directly connected neighbors)
2063732 - Workloads - StatefulSets : I18n misses
2063747 - When building a bundle, the push command fails because it passes a redundant "IMG=" on the CLI
2063753 - User Preferences - Language - Language selection : Page refresh required to change the UI into selected Language.
2063756 - User Preferences - Applications - Insecure traffic : i18n misses
2063795 - Remove go-ovirt-client go.mod replace directive
2063829 - During an IPI install with the 4.10.4 installer on vSphere, getting "Check": platform.vsphere.network: Invalid value: "VLAN_3912": unable to find network provided"
2063831 - etcd quorum pods landing on same node
2063897 - Community tasks not shown in pipeline builder page
2063905 - PrometheusOperatorWatchErrors alert may fire shortly in case of transient errors from the API server
2063938 - Using the hard coded rest-mapper in library-go
2063955 - cannot download operator catalogs due to missing images
2063957 - User Management - Users : While Impersonating user, UI is not switching into user's set language
2064024 - SNO OCP upgrade with DU workload stuck at waiting for kube-apiserver static pod
2064170 - [Azure] Missing punctuation in the installconfig.controlPlane.platform.azure.osDisk explain
2064239 - Virtualization Overview page turns into blank page
2064256 - The Knative traffic distribution doesn't update percentage in sidebar
2064553 - UI should prefer the virtio-win configmap over the v2v-vmware configmap for windows creation
2064596 - Fix the hubUrl docs link in pipeline quicksearch modal
2064607 - Pipeline builder makes too many (100+) API calls upfront
2064613 - [OCPonRHV]- after few days that cluster is alive we got error in storage operator
2064693 - [IPI][OSP] Openshift-install fails to find the shiftstack cloud defined in clouds.yaml in the current directory
2064702 - CVE-2022-27191 golang: crash in a golang.org/x/crypto/ssh server
2064705 - the alertmanagerconfig validation catches the wrong value for invalid field
2064744 - Errors trying to use the Debug Container feature
2064984 - Update error message for label limits
2065076 - Access monitoring Routes based on monitoring-shared-config creates wrong URL
2065160 - Possible leak of load balancer targets on AWS Machine API Provider
2065224 - Configuration for cloudFront in image-registry operator configuration is ignored & duration is corrupted
2065290 - CVE-2021-23648 sanitize-url: XSS
2065338 - VolumeSnapshot creation date sorting is broken
2065507 - oc adm upgrade should return ReleaseAccepted condition to show upgrade status.
2065510 - [AWS] failed to create cluster on ap-southeast-3
2065513 - Dev Perspective -> Project Dashboard shows Resource Quotas which are a bit misleading, and too many decimal places
2065547 - (release-4.11) Gather kube-controller-manager pod logs with garbage collector errors
2065552 - [AWS] Failed to install cluster on AWS ap-southeast-3 region due to image-registry panic error
2065577 - user with user-workload-monitoring-config-edit role can not create user-workload-monitoring-config configmap
2065597 - Cinder CSI is not configurable
2065682 - Remote write relabel config adds label __tmp_openshift_cluster_id to all metrics
2065689 - Internal Image registry with GCS backend does not redirect client
2065749 - Kubelet slowly leaking memory and pods eventually unable to start
2065785 - ip-reconciler job does not complete, halts node drain
2065804 - Console backend check for Web Terminal Operator incorrectly returns HTTP 204
2065806 - stop considering Mint mode as supported on Azure
2065840 - the cronjob object is created with a wrong api version batch/v1beta1 when created via the openshift console
2065893 - [4.11] Bootimage bump tracker
2066009 - CVE-2021-44906 minimist: prototype pollution
2066232 - e2e-aws-workers-rhel8 is failing on ansible check
2066418 - [4.11] Update channels information link is taking to a 404 error page
2066444 - The "ingress" clusteroperator's relatedObjects field has kind names instead of resource names
2066457 - Prometheus CI failure: 503 Service Unavailable
2066463 - [IBMCloud] failed to list DNS zones: Exactly one of ApiKey or RefreshToken must be specified
2066605 - coredns template block matches cluster API too loosely
2066615 - Downstream OSDK still uses upstream image for Hybrid type operator
2066619 - The GitCommit of the oc-mirror version is not correct
2066665 - [ibm-vpc-block] Unable to change default storage class
2066700 - [node-tuning-operator] - Minimize wildcard/privilege Usage in Cluster and Local Roles
2066754 - Cypress reports for core tests are not captured
2066782 - Attached disk keeps in loading status when add disk to a power off VM by non-privileged user
2066865 - Flaky test: In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
2066886 - openshift-apiserver pods never going NotReady
2066887 - Dependabot alert: Path traversal in github.com/valyala/fasthttp
2066889 - Dependabot alert: Path traversal in github.com/valyala/fasthttp
2066923 - No rule to make target 'docker-push' when building the SRO bundle
2066945 - SRO appends "arm64" instead of "aarch64" to the kernel name and it doesn't match the DTK
2067004 - CMO contains grafana image though grafana is removed
2067005 - Prometheus rule contains grafana though grafana is removed
2067062 - should update prometheus-operator resources version
2067064 - RoleBinding in Developer Console is dropping all subjects when editing
2067155 - Incorrect operator display name shown in pipelines quickstart in devconsole
2067180 - Missing i18n translations
2067298 - Console 4.10 operand form refresh
2067312 - PPT event source is lost when received by the consumer
2067384 - OCP 4.10 should be firing APIRemovedInNextEUSReleaseInUse for APIs removed in 1.25
2067456 - OCP 4.11 should be firing APIRemovedInNextEUSReleaseInUse and APIRemovedInNextReleaseInUse for APIs removed in 1.25
2067995 - Internal registries with a big number of images delay pod creation due to recursive SELinux file context relabeling
2068115 - resource tab extension fails to show up
2068148 - [4.11] /etc/redhat-release symlink is broken
2068180 - OCP UPI on AWS with STS enabled is breaking the Ingress operator
2068181 - Event source powered with kamelet type source doesn't show associated deployment in resources tab
2068490 - OLM descriptors integration test failing
2068538 - Crashloop back-off popover visual spacing defects
2068601 - Potential etcd inconsistent revision and data occurs
2068613 - ClusterRoleUpdated/ClusterRoleBindingUpdated Spamming Event Logs
2068908 - Manual blog link change needed
2069068 - reconciling Prometheus Operator Deployment failed while upgrading from 4.7.46 to 4.8.35
2069075 - [Alibaba 4.11.0-0.nightly] cluster storage component in Progressing state
2069181 - Disabling community tasks is not working
2069198 - Flaky CI test in e2e/pipeline-ci
2069307 - oc mirror hangs when processing the Red Hat 4.10 catalog
2069312 - extend rest mappings with 'job' definition
2069457 - Ingress operator has superfluous finalizer deletion logic for LoadBalancer-type services
2069577 - ConsolePlugin example proxy authorize is wrong
2069612 - Special Resource Operator (SRO) - Crash when nodeSelector does not match any nodes
2069632 - Not able to download previous container logs from console
2069643 - ConfigMaps leftovers while uninstalling SpecialResource with configmap
2069654 - Creating VMs with YAML on Openshift Virtualization UI is missing labels flavor, os and workload
2069685 - UI crashes on load if a pinned resource model does not exist
2069705 - prometheus target "serviceMonitor/openshift-metallb-system/monitor-metallb-controller/0" has a failure with "server returned HTTP status 502 Bad Gateway"
2069740 - On-prem loadbalancer ports conflict with kube node port range
2069760 - In developer perspective divider does not show up in navigation
2069904 - Sync upstream 1.18.1 downstream
2069914 - Application Launcher groupings are not case-sensitive
2069997 - [4.11] should add user containers in /etc/subuid and /etc/subgid to support run pods in user namespaces
2070000 - Add warning alerts for installing standalone k8s-nmstate
2070020 - InContext doesn't work for Event Sources
2070047 - Kuryr: Prometheus when installed on the cluster shouldn't report any alerts in firing state apart from Watchdog and AlertmanagerReceiversNotConfigured
2070160 - Copy-to-clipboard and <pre> elements cause display issues for ACM dynamic plugins
2070172 - SRO uses the chart's name as Helm release, not the SpecialResource's
2070181 - [MAPO] serverGroupName ignored
2070457 - Image vulnerability Popover overflows from the visible area
2070674 - [GCP] Routes get timed out and nonresponsive after creating 2K service routes
2070703 - some ipv6 network policy tests consistently failing
2070720 - [UI] Filter reset doesn't work on Pods/Secrets/etc pages and complete list disappears
2070731 - details switch label is not clickable on add page
2070791 - [GCP]Image registry are crash on cluster with GCP workload identity enabled
2070792 - service "openshift-marketplace/marketplace-operator-metrics" is not annotated with capability
2070805 - ClusterVersion: could not download the update
2070854 - cv.status.capabilities.enabledCapabilities doesn't show the day-2 enabled caps when there are errors on resources update
2070887 - Cv condition ImplicitlyEnabledCapabilities doesn't complain about the disabled capabilities which is previously enabled
2070888 - Cannot bind driver vfio-pci when apply sriovnodenetworkpolicy with type vfio-pci
2070929 - OVN-Kubernetes: EgressIP breaks access from a pod with EgressIP to other host networked pods on different nodes
2071019 - rebase vsphere csi driver 2.5
2071021 - vsphere driver has snapshot support missing
2071033 - conditionally relabel volumes given annotation not working - SELinux context match is wrong
2071139 - Ingress pods scheduled on the same node
2071364 - All image building tests are broken with " error: build error: attempting to convert BUILD_LOGLEVEL env var value "" to integer: strconv.Atoi: parsing "": invalid syntax
2071578 - Monitoring navigation should not be shown if monitoring is not available (CRC)
2071599 - RoleBidings are not getting updated for ClusterRole in OpenShift Web Console
2071614 - Updating EgressNetworkPolicy rejecting with error UnsupportedMediaType
2071617 - remove Kubevirt extensions in favour of dynamic plugin
2071650 - ovn-k ovn_db_cluster metrics are not exposed for SNO
2071691 - OCP Console global PatternFly overrides adds padding to breadcrumbs
2071700 - v1 events show "Generated from" message without the source/reporting component
2071715 - Shows 404 on Environment nav in Developer console
2071719 - OCP Console global PatternFly overrides link button whitespace
2071747 - Link to documentation from the overview page goes to a missing link
2071761 - Translation Keys Are Not Namespaced
2071799 - Multus CNI should exit cleanly on CNI DEL when the API server is unavailable
2071859 - ovn-kube pods spec.dnsPolicy should be Default
2071914 - cloud-network-config-controller 4.10.5: Error building cloud provider client, err: %vfailed to initialize Azure environment: autorest/azure: There is no cloud environment matching the name ""
2071998 - Cluster-version operator should share details of signature verification when it fails in 'Force: true' updates
2072106 - cluster-ingress-operator tests do not build on go 1.18
2072134 - Routes are not accessible within cluster from hostnet pods
2072139 - vsphere driver has permissions to create/update PV objects
2072154 - Secondary Scheduler operator panics
2072171 - Test "[sig-network][Feature:EgressFirewall] EgressFirewall should have no impact outside its namespace [Suite:openshift/conformance/parallel]" fails
2072195 - machine api doesn't issue client cert when AWS DNS suffix missing
2072215 - Whereabouts ip-reconciler should be opt-in and not required
2072389 - CVO exits upgrade immediately rather than waiting for etcd backup
2072439 - openshift-cloud-network-config-controller reports wrong range of IP addresses for Azure worker nodes
2072455 - make bundle overwrites supported-nic-ids_v1_configmap.yaml
2072570 - The namespace titles for operator-install-single-namespace test keep changing
2072710 - Perfscale - pods time out waiting for OVS port binding (ovn-installed)
2072766 - Cluster Network Operator stuck in CrashLoopBackOff when scheduled to same master
2072780 - OVN kube-master does not clear NetworkUnavailableCondition on GCP BYOH Windows node
2072793 - Drop "Used Filesystem" from "Virtualization -> Overview"
2072805 - Observe > Dashboards: $__range variables cause PromQL query errors
2072807 - Observe > Dashboards: Missing panel.styles attribute for table panels causes JS error
2072842 - (release-4.11) Gather namespace names with overlapping UID ranges
2072883 - sometimes monitoring dashboards charts can not be loaded successfully
2072891 - Update gcp-pd-csi-driver to 1.5.1;
2072911 - panic observed in kubedescheduler operator
2072924 - periodic-ci-openshift-release-master-ci-4.11-e2e-azure-techpreview-serial
2072957 - ContainerCreateError loop leads to several thousand empty logfiles in the file system
2072998 - update aws-efs-csi-driver to the latest version
2072999 - Navigate from logs of selected Tekton task instead of last one
2073021 - [vsphere] Failed to update OS on master nodes
2073112 - Prometheus (uwm) externalLabels not showing always in alerts.
2073113 - Warning is logged to the console: W0407 Defaulting of registry auth file to "${HOME}/.docker/config.json" is deprecated.
2073176 - removing data in form does not remove data from yaml editor
2073197 - Error in Spoke/SNO agent: Source image rejected: A signature was required, but no signature exists
2073329 - Pipelines-plugin- Having different title for Pipeline Runs tab, on Pipeline Details page it's "PipelineRuns" and on Repository Details page it's "Pipeline Runs".
2073373 - Update azure-disk-csi-driver to 1.16.0
2073378 - failed egressIP assignment - cloud-network-config-controller does not delete failed cloudprivateipconfig
2073398 - machine-api-provider-openstack does not clean up OSP ports after failed server provisioning
2073436 - Update azure-file-csi-driver to v1.14.0
2073437 - Topology performance: Firehose/useK8sWatchResources cache can return unexpected data format if isList differs on multiple calls
2073452 - [sig-network] pods should successfully create sandboxes by other - failed (add)
2073473 - [OVN SCALE][ovn-northd] Unnecessary SB record no-op changes added to SB transaction.
2073522 - Update ibm-vpc-block-csi-driver to v4.2.0
2073525 - Update vpc-node-label-updater to v4.1.2
2073901 - Installation failed due to etcd operator Err:DefragControllerDegraded: failed to dial endpoint https://10.0.0.7:2379 with maintenance client: context canceled
2073937 - Invalid retention time and invalid retention size should be validated at one place and have error log in one place for UMW
2073938 - APIRemovedInNextEUSReleaseInUse alert for runtimeclasses
2073945 - APIRemovedInNextEUSReleaseInUse alert for podsecuritypolicies
2073972 - Invalid retention time and invalid retention size should be validated at one place and have error log in one place for platform monitoring
2074009 - [OVN] ovn-northd doesn't clean Chassis_Private record after scale down to 0 a machineSet
2074031 - Admins should be able to tune garbage collector aggressiveness (GOGC) for kube-apiserver if necessary
2074062 - Node Tuning Operator(NTO) - Cloud provider profile rollback doesn't work well
2074084 - CMO metrics not visible in the OCP webconsole UI
2074100 - CRD filtering according to name broken
2074210 - asia-south2, australia-southeast2, and southamerica-west1 Missing from GCP regions
2074237 - oc new-app --image-stream flag behavior is unclear
2074243 - DefaultPlacement API allow empty enum value and remove default
2074447 - cluster-dashboard: CPU Utilisation iowait and steal
2074465 - PipelineRun fails in import from Git flow if "main" branch is default
2074471 - Cannot delete namespace with a LB type svc and Kuryr when ExternalCloudProvider is enabled
2074475 - [e2e][automation] kubevirt plugin cypress tests fail
2074483 - coreos-installer doesn't work on Dell machines
2074544 - e2e-metal-ipi-ovn-ipv6 failing due to recent CEO changes
2074585 - MCG standalone deployment page goes blank when the KMS option is enabled
2074606 - occm does not have permissions to annotate SVC objects
2074612 - Operator fails to install due to service name lookup failure
2074613 - nodeip-configuration container incorrectly attempts to relabel /etc/systemd/system
2074635 - Unable to start Web Terminal after deleting existing instance
2074659 - AWS installconfig ValidateForProvisioning always provides blank values to validate zone records
2074706 - Custom EC2 endpoint is not considered by AWS EBS CSI driver
2074710 - Transition to go-ovirt-client
2074756 - Namespace column provide wrong data in ClusterRole Details -> Rolebindings tab
2074767 - Metrics page show incorrect values due to metrics level config
2074807 - NodeFilesystemSpaceFillingUp alert fires even before kubelet GC kicks in
2074902 - oc debug node/nodename -- chroot /host somecommand should exit with non-zero when the sub-command failed
2075015 - etcd-guard connection refused event repeating pathologically (payload blocking)
2075024 - Metal upgrades permafailing on metal3 containers crash looping
2075050 - oc-mirror fails to calculate between two channels with different prefixes for the same version of OCP
2075091 - Symptom Detection.Undiagnosed panic detected in pod
2075117 - Developer catalog: Order dropdown (A-Z, Z-A) is miss-aligned (in a separate row)
2075149 - Trigger Translations When Extensions Are Updated
2075189 - Imports from dynamic-plugin-sdk lead to failed module resolution errors
2075459 - Set up cluster on aws with rootvolumn io2 failed due to no iops despite it being configured
2075475 - OVN-Kubernetes: egress router pod (redirect mode), access from pod on different worker-node (redirect) doesn't work
2075478 - Bump documentationBaseURL to 4.11
2075491 - nmstate operator cannot be upgraded on SNO
2075575 - Local Dev Env - Prometheus 404 Call errors spam the console
2075584 - improve clarity of build failure messages when using csi shared resources but tech preview is not enabled
2075592 - Regression - Top of the web terminal drawer is missing a stroke/dropshadow
2075621 - Cluster upgrade.[sig-mco] Machine config pools complete upgrade
2075647 - 'oc adm upgrade ...' POSTs ClusterVersion, clobbering any unrecognized spec properties
2075671 - Cluster Ingress Operator K8S API cache contains duplicate objects
2075778 - Fix failing TestGetRegistrySamples test
2075873 - Bump recommended FCOS to 35.20220327.3.0
2076193 - oc patch command for the liveness probe and readiness probe parameters of an OpenShift router deployment doesn't take effect
2076270 - [OCPonRHV] MachineSet scale down operation fails to delete the worker VMs
2076277 - [RFE] [OCPonRHV] Add storage domain ID value to Compute/ControlPlain section in the machine object
2076290 - PTP operator readme missing documentation on BC setup via PTP config
2076297 - Router process ignores shutdown signal while starting up
2076323 - OLM blocks all operator installs if an openshift-marketplace catalogsource is unavailable
2076355 - The KubeletConfigController wrongly process multiple confs for a pool after having kubeletconfig in bootstrap
2076393 - [VSphere] survey fails to list datacenters
2076521 - Nodes in the same zone are not updated in the right order
2076527 - Pipeline Builder: Make unnecessary tekton hub API calls when the user types 'too fast'
2076544 - Whitespace (padding) is missing after a PatternFly update, already in 4.10
2076553 - Project access view replace group ref with user ref when updating their Role
2076614 - Missing Events component from the SDK API
2076637 - Configure metrics for vsphere driver to be reported
2076646 - openshift-install destroy unable to delete PVC disks in GCP if cluster identifier is longer than 22 characters
2076793 - CVO exits upgrade immediately rather than waiting for etcd backup
2076831 - [ocp4.11]Mem/cpu high utilization by apiserver/etcd for cluster stayed 10 hours
2076877 - network operator tracker to switch to use flowcontrol.apiserver.k8s.io/v1beta2 instead v1beta1 to be deprecated in k8s 1.26
2076880 - OKD: add cluster domain to the uploaded vm configs so that 30-local-dns-prepender can use it
2076975 - Metric unset during static route conversion in configure-ovs.sh
2076984 - TestConfigurableRouteNoConsumingUserNoRBAC fails in CI
2077050 - OCP should default to pd-ssd disk type on GCP
2077150 - Breadcrumbs on a few screens don't have correct top margin spacing
2077160 - Update owners for openshift/cluster-etcd-operator
2077357 - [release-4.11] 200ms packet delay with OVN controller turn on
2077373 - Accessibility warning on developer perspective
2077386 - Import page shows untranslated values for the route advanced routing>security options (devconsole~Edge)
2077457 - failure in test case "[sig-network][Feature:Router] The HAProxy router should serve the correct routes when running with the haproxy config manager"
2077497 - Rebase etcd to 3.5.3 or later
2077597 - machine-api-controller is not taking the proxy configuration when it needs to reach the RHV API
2077599 - OCP should alert users if they are on vsphere version <7.0.2
2077662 - AWS Platform Provisioning Check incorrectly identifies record as part of domain of cluster
2077797 - LSO pods don't have any resource requests
2077851 - "make vendor" target is not working
2077943 - If there is a service with multiple ports, and the route uses 8080, when editing the 8080 port isn't replaced, but a random port gets replaced and 8080 still stays
2077994 - Publish RHEL CoreOS AMIs in AWS ap-southeast-3 region
2078013 - drop multipathd.socket workaround
2078375 - When using the wizard with template using data source the resulting vm use pvc source
2078396 - [OVN AWS] EgressIP was not balanced to another egress node after original node was removed egress label
2078431 - [OCPonRHV] - ERROR failed to instantiate provider "openshift/local/ovirt" to obtain schema: ERROR fork/exec
2078526 - Multicast breaks after master node reboot/sync
2078573 - SDN CNI -Fail to create nncp when vxlan is up
2078634 - CRI-O not killing Calico CNI stalled (zombie) processes.
2078698 - search box may not completely remove content
2078769 - Different not translated filter group names (incl. Secret, Pipeline, PIpelineRun)
2078778 - [4.11] oc get ValidatingWebhookConfiguration,MutatingWebhookConfiguration fails and caused "apiserver panic'd...http2: panic serving xxx.xx.xxx.21:49748: cannot deep copy int" when AllRequestBodies audit-profile is used.
2078781 - PreflightValidation does not handle multiarch images
2078866 - [BM][IPI] Installation with bonds fail - DaemonSet "openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress
2078875 - OpenShift Installer fail to remove Neutron ports
2078895 - [OCPonRHV]-"cow" unsupported value in format field in install-config.yaml
2078910 - CNO spitting out ".spec.groups[0].rules[4].runbook_url: field not declared in schema"
2078945 - Ensure only one apiserver-watcher process is active on a node.
2078954 - network-metrics-daemon makes costly global pod list calls scaling per node
2078969 - Avoid update races between old and new NTO operands during cluster upgrades
2079012 - egressIP not migrated to correct workers after deleting machineset it was assigned
2079062 - Test for console demo plugin toast notification needs to be increased for ci testing
2079197 - [RFE] alert when more than one default storage class is detected
2079216 - Partial cluster update reference doc link returns 404
2079292 - containers prometheus-operator/kube-rbac-proxy violate PodSecurity
2079315 - (release-4.11) Gather ODF config data with Insights
2079422 - Deprecated 1.25 API call
2079439 - OVN Pods Assigned Same IP Simultaneously
2079468 - Enhance the waitForIngressControllerCondition for better CI results
2079500 - okd-baremetal-install uses fcos for bootstrap but rhcos for cluster
2079610 - Opeatorhub status shows errors
2079663 - change default image features in RBD storageclass
2079673 - Add flags to disable migrated code
2079685 - Storageclass creation page with "Enable encryption" is not displaying saved KMS connection details when vaulttenantsa details are available in csi-kms-details config
2079724 - cluster-etcd-operator - disable defrag-controller as there is unpredictable impact on large OpenShift Container Platform 4 - Cluster
2079788 - Operator restarts while applying the acm-ice example
2079789 - cluster drops ImplicitlyEnabledCapabilities during upgrade
2079803 - Upgrade-triggered etcd backup will be skip during serial upgrade
2079805 - Secondary scheduler operator should comply to restricted pod security level
2079818 - Developer catalog installation overlay (modal?) shows a duplicated padding
2079837 - [RFE] Hub/Spoke example with daemonset
2079844 - EFS cluster csi driver status stuck in AWSEFSDriverCredentialsRequestControllerProgressing with sts installation
2079845 - The Event Sinks catalog page now has a blank space on the left
2079869 - Builds for multiple kernel versions should be ran in parallel when possible
2079913 - [4.10] APIRemovedInNextEUSReleaseInUse alert for OVN endpointslices
2079961 - The search results accordion has no spacing between it and the side navigation bar.
2079965 - [rebase v1.24] [sig-node] PodOSRejection [NodeConformance] Kubelet should reject pod when the node OS doesn't match pod's OS [Suite:openshift/conformance/parallel] [Suite:k8s]
2080054 - TAGS arg for installer-artifacts images is not propagated to build images
2080153 - aws-load-balancer-operator-controller-manager pod stuck in ContainerCreating status
2080197 - etcd leader changes produce test churn during early stage of test
2080255 - EgressIP broken on AWS with OpenShiftSDN / latest nightly build
2080267 - [Fresh Installation] Openshift-machine-config-operator namespace is flooded with events related to clusterrole, clusterrolebinding
2080279 - CVE-2022-29810 go-getter: writes SSH credentials into logfile, exposing sensitive credentials to local uses
2080379 - Group all e2e tests as parallel or serial
2080387 - Visual connector not appear between the node if a node get created using "move connector" to a different application
2080416 - oc bash-completion problem
2080429 - CVO must ensure non-upgrade related changes are saved when desired payload fails to load
2080446 - Sync ironic images with latest bug fixes packages
2080679 - [rebase v1.24] [sig-cli] test failure
2080681 - [rebase v1.24] [sig-cluster-lifecycle] CSRs from machines that are not recognized by the cloud provider are not approved [Suite:openshift/conformance/parallel]
2080687 - [rebase v1.24] [sig-network][Feature:Router] tests are failing
2080873 - Topology graph crashes after update to 4.11 when Layout 2 (ColaForce) was selected previously
2080964 - Cluster operator special-resource-operator is always in Failing state with reason: "Reconciling simple-kmod"
2080976 - Avoid hooks config maps when hooks are empty
2081012 - [rebase v1.24] [sig-devex][Feature:OpenShiftControllerManager] TestAutomaticCreationOfPullSecrets [Suite:openshift/conformance/parallel]
2081018 - [rebase v1.24] [sig-imageregistry][Feature:Image] oc tag should work when only imagestreams api is available
2081021 - [rebase v1.24] [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources
2081062 - Unrevert RHCOS back to 8.6
2081067 - admin dev-console /settings/cluster should point out history may be excerpted
2081069 - [sig-network] pods should successfully create sandboxes by adding pod to network
2081081 - PreflightValidation "odd number of arguments passed as key-value pairs for logging" error
2081084 - [rebase v1.24] [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed
2081087 - [rebase v1.24] [sig-auth] ServiceAccounts should allow opting out of API token automount
2081119 - oc explain output of default overlaySize is outdated
2081172 - MetallLB: YAML view in webconsole does not show all the available key value pairs of all the objects
2081201 - cloud-init User check for Windows VM refuses to accept capitalized usernames
2081447 - Ingress operator performs spurious updates in response to API's defaulting of router deployment's router container's ports' protocol field
2081562 - lifecycle.posStart hook does not have network connectivity.
2081685 - Typo in NNCE Conditions
2081743 - [e2e] tests failing
2081788 - MetalLB: the crds are not validated until metallb is deployed
2081821 - SpecialResourceModule CRD is not installed after deploying SRO operator using brew bundle image via OLM
2081895 - Use the managed resource (and not the manifest) for resource health checks
2081997 - disconnected insights operator remains degraded after editing pull secret
2082075 - Removing huge amount of ports takes a lot of time.
2082235 - CNO exposes a generic apiserver that apparently does nothing
2082283 - Transition to new oVirt Terraform provider
2082360 - OCP 4.10.4, CNI: SDN; Whereabouts IPAM: Duplicate IP address with bond-cni
2082380 - [4.10.z] customize wizard is crashed
2082403 - [LSO] No new build local-storage-operator-metadata-container created
2082428 - oc patch healthCheckInterval with invalid "5 s" to the ingress-controller successfully
2082441 - [UPI] aws-load-balancer-operator-controller-manager failed to get VPC ID in UPI on AWS
2082492 - [IPI IBM]Can't create image-registry-private-configuration secret with error "specified resource key credentials does not contain HMAC keys"
2082535 - [OCPonRHV]-workers are cloned when "clone: false" is specified in install-config.yaml
2082538 - apirequests limits of Cluster CAPI Operator are too low for GCP platform
2082566 - OCP dashboard fails to load when the query to Prometheus takes more than 30s to return
2082604 - [IBMCloud][x86_64] IBM VPC does not properly support RHCOS Custom Image tagging
2082667 - No new machines provisioned while machineset controller drained old nodes for change to machineset
2082687 - [IBM Cloud][x86_64][CCCMO] IBM x86_64 CCM using unsupported --port argument
2082763 - Cluster install stuck on the applying for operatorhub "cluster"
2083149 - "Update blocked" label incorrectly displays on new minor versions in the "Other available paths" modal
2083153 - Unable to use application credentials for Manila PVC creation on OpenStack
2083154 - Dynamic plugin sdk tsdoc generation does not render docs for parameters
2083219 - DPU network operator doesn't deal with c1... inteface names
2083237 - [vsphere-ipi] Machineset scale up process delay
2083299 - SRO does not fetch mirrored DTK images in disconnected clusters
2083445 - [FJ OCP4.11 Bug]: RAID setting during IPI cluster deployment fails if iRMC port number is specified
2083451 - Update external serivces URLs to console.redhat.com
2083459 - Make numvfs > totalvfs error message more verbose
2083466 - Failed to create clusters on AWS C2S/SC2S due to image-registry MissingEndpoint error
2083514 - Operator ignores managementState Removed
2083641 - OpenShift Console Knative Eventing ContainerSource generates wrong api version when pointed to k8s Service
2083756 - Linkify not upgradeable message on ClusterSettings page
2083770 - Release image signature manifest filename extension is yaml
2083919 - openshift4/ose-operator-registry:4.10.0 having security vulnerabilities
2083942 - Learner promotion can temporarily fail with rpc not supported for learner errors
2083964 - Sink resources dropdown is not persisted in form yaml switcher in event source creation form
2083999 - "--prune-over-size-limit" is not working as expected
2084079 - prometheus route is not updated to "path: /api" after upgrade from 4.10 to 4.11
2084081 - nmstate-operator installed cluster on POWER shows issues while adding new dhcp interface
2084124 - The Update cluster modal includes a broken link
2084215 - Resource configmap "openshift-machine-api/kube-rbac-proxy" is defined by 2 manifests
2084249 - panic in ovn pod from an e2e-aws-single-node-serial nightly run
2084280 - GCP API Checks Fail if non-required APIs are not enabled
2084288 - "alert/Watchdog must have no gaps or changes" failing after bump
2084292 - Access to dashboard resources is needed in dynamic plugin SDK
2084331 - Resource with multiple capabilities included unless all capabilities are disabled
2084433 - Podsecurity violation error getting logged for ingresscontroller during deployment.
2084438 - Change Ping source spec.jsonData (deprecated) field to spec.data
2084441 - [IPI-Azure]fail to check the vm capabilities in install cluster
2084459 - Topology list view crashes when switching from chart view after moving sink from knative service to uri
2084463 - 5 control plane replica tests fail on ephemeral volumes
2084539 - update azure arm templates to support customer provided vnet
2084545 - [rebase v1.24] cluster-api-operator causes all techpreview tests to fail
2084580 - [4.10] No cluster name sanity validation - cluster name with a dot (".") character
2084615 - Add to navigation option on search page is not properly aligned
2084635 - PipelineRun creation from the GUI for a Pipeline with 2 workspaces hardcode the PVC storageclass
2084732 - A special resource that was created in OCP 4.9 can't be deleted after an upgrade to 4.10
2085187 - installer-artifacts fails to build with go 1.18
2085326 - kube-state-metrics is tripping APIRemovedInNextEUSReleaseInUse
2085336 - [IPI-Azure] Fail to create the worker node which HyperVGenerations is V2 or V1 and vmNetworkingType is Accelerated
2085380 - [IPI-Azure] Incorrect error prompt validate VM image and instance HyperV gen match when install cluster
2085407 - There is no Edit link/icon for labels on Node details page
2085721 - customization controller image name is wrong
2086056 - Missing doc for OVS HW offload
2086086 - Update Cluster Sample Operator dependencies and libraries for OCP 4.11
2086092 - update kube to v.24
2086143 - CNO uses too much memory
2086198 - Cluster CAPI Operator creates unnecessary defaulting webhooks
2086301 - kubernetes nmstate pods are not running after creating instance
2086408 - Podsecurity violation error getting logged for externalDNS operand pods during deployment
2086417 - Pipeline created from add flow has GIT Revision as required field
2086437 - EgressQoS CRD not available
2086450 - aws-load-balancer-controller-cluster pod logged Podsecurity violation error during deployment
2086459 - oc adm inspect fails when one of resources not exist
2086461 - CNO probes MTU unnecessarily in Hypershift, making cluster startup take too long
2086465 - External identity providers should log login attempts in the audit trail
2086469 - No data about title 'API Request Duration by Verb - 99th Percentile' display on the dashboard 'API Performance'
2086483 - baremetal-runtimecfg k8s dependencies should be on a par with 1.24 rebase
2086505 - Update oauth-server images to be consistent with ART
2086519 - workloads must comply to restricted security policy
2086521 - Icons of Knative actions are not clearly visible on the context menu in the dark mode
2086542 - Cannot create service binding through drag and drop
2086544 - ovn-k master daemonset on hypershift shouldn't log token
2086546 - Service binding connector is not visible in the dark mode
2086718 - PowerVS destroy code does not work
2086728 - [hypershift] Move drain to controller
2086731 - Vertical pod autoscaler operator needs a 4.11 bump
2086734 - Update csi driver images to be consistent with ART
2086737 - cloud-provider-openstack rebase to kubernetes v1.24
2086754 - Cluster resource override operator needs a 4.11 bump
2086759 - [IPI] OCP-4.11 baremetal - boot partition is not mounted on temporary directory
2086791 - Azure: Validate UltraSSD instances in multi-zone regions
2086851 - pods with multiple external gateways may only be have ECMP routes for one gateway
2086936 - vsphere ipi should use cores by default instead of sockets
2086958 - flaky e2e in kube-controller-manager-operator TestPodDisruptionBudgetAtLimitAlert
2086959 - flaky e2e in kube-controller-manager-operator TestLogLevel
2086962 - oc-mirror publishes metadata with --dry-run when publishing to mirror
2086964 - oc-mirror fails on differential run when mirroring a package with multiple channels specified
2086972 - oc-mirror does not error invalid metadata is passed to the describe command
2086974 - oc-mirror does not work with headsonly for operator 4.8
2087024 - The oc-mirror result mapping.txt is not correct, can't be used by oc image mirror command
2087026 - DTK's imagestream is missing from OCP 4.11 payload
2087037 - Cluster Autoscaler should use K8s 1.24 dependencies
2087039 - Machine API components should use K8s 1.24 dependencies
2087042 - Cloud providers components should use K8s 1.24 dependencies
2087084 - remove unintentional nic support
2087103 - "Updating to release image" from 'oc' should point out that the cluster-version operator hasn't accepted the update
2087114 - Add simple-procfs-kmod in modprobe example in README.md
2087213 - Spoke BMH stuck "inspecting" when deployed via ZTP in 4.11 OCP hub
2087271 - oc-mirror does not check for existing workspace when performing mirror2mirror synchronization
2087556 - Failed to render DPU ovnk manifests
2087579 - --keep-manifest-list=true does not work for oc adm release new, only pick up the linux/amd64 manifest from the manifest list
2087680 - [Descheduler] Sync with sigs.k8s.io/descheduler
2087684 - KCMO should not be able to apply LowUpdateSlowReaction from Default WorkerLatencyProfile
2087685 - KASO should not be able to apply LowUpdateSlowReaction from Default WorkerLatencyProfile
2087687 - MCO does not generate event when user applies Default -> LowUpdateSlowReaction WorkerLatencyProfile
2087764 - Rewrite the registry backend will hit error
2087771 - [tracker] NetworkManager 1.36.0 loses DHCP lease and doesn't try again
2087772 - Bindable badge causes some layout issues with the side panel of bindable operator backed services
2087942 - CNO references images that are divergent from ART
2087944 - KafkaSink Node visualized incorrectly
2087983 - remove etcd_perf before restore
2087993 - PreflightValidation many "msg":"TODO: preflight checks" in the operator log
2088130 - oc-mirror init does not allow for automated testing
2088161 - Match dockerfile image name with the name used in the release repo
2088248 - Create HANA VM does not use values from customized HANA templates
2088304 - ose-console: enable source containers for open source requirements
2088428 - clusteroperator/baremetal stays in progressing: Applying metal3 resources state on a fresh install
2088431 - AvoidBuggyIPs field of addresspool should be removed
2088483 - oc adm catalog mirror returns 0 even if there are errors
2088489 - Topology list does not allow selecting an application group anymore (again)
2088533 - CRDs for openshift.io should have subresource.status failes on sharedconfigmaps.sharedresource and sharedsecrets.sharedresource
2088535 - MetalLB: Enable debug log level for downstream CI
2088541 - Default CatalogSources in openshift-marketplace namespace keeps throwing pod security admission warnings would violate PodSecurity "restricted:v1.24"
2088561 - BMH unable to start inspection: File name too long 2088634 - oc-mirror does not fail when catalog is invalid 2088660 - Nutanix IPI installation inside container failed 2088663 - Better to change the default value of --max-per-registry to 6 2089163 - NMState CRD out of sync with code 2089191 - should remove grafana from cluster-monitoring-config configmap in hypershift cluster 2089224 - openshift-monitoring/cluster-monitoring-config configmap always revert to default setting 2089254 - CAPI operator: Rotate token secret if its older than 30 minutes 2089276 - origin tests for egressIP and azure fail 2089295 - [Nutanix]machine stuck in Deleting phase when delete a machineset whose replicas>=2 and machine is Provisioning phase on Nutanix 2089309 - [OCP 4.11] Ironic inspector image fails to clean disks that are part of a multipath setup if they are passive paths 2089334 - All cloud providers should use service account credentials 2089344 - Failed to deploy simple-kmod 2089350 - Rebase sdn to 1.24 2089387 - LSO not taking mpath. ignoring device 2089392 - 120 node baremetal upgrade from 4.9.29 --> 4.10.13 crashloops on machine-approver 2089396 - oc-mirror does not show pruned image plan 2089405 - New topology package shows gray build icons instead of green/red icons for builds and pipelines 2089419 - do not block 4.10 to 4.11 upgrades if an existing CSI driver is found. Instead, warn about presence of third party CSI driver 2089488 - Special resources are missing the managementState field 2089563 - Update Power VS MAPI to use api's from openshift/api repo 2089574 - UWM prometheus-operator pod can't start up due to no master node in hypershift cluster 2089675 - Could not move Serverless Service without Revision (or while starting?) 2089681 - [Hypershift] EgressIP doesn't work in hypershift guest cluster 2089682 - Installer expects all nutanix subnets to have a cluster reference which is not the case for e.g. 
overlay networks 2089687 - alert message of MCDDrainError needs to be updated for new drain controller 2089696 - CR reconciliation is stuck in daemonset lifecycle 2089716 - [4.11][reliability]one worker node became NotReady on which ovnkube-node pod's memory increased sharply 2089719 - acm-simple-kmod fails to build 2089720 - [Hypershift] ICSP doesn't work for the guest cluster 2089743 - acm-ice fails to deploy: helm chart does not appear to be a gzipped archive 2089773 - Pipeline status filter and status colors doesn't work correctly with non-english languages 2089775 - keepalived can keep ingress VIP on wrong node under certain circumstances 2089805 - Config duration metrics aren't exposed 2089827 - MetalLB CI - backward compatible tests are failing due to the order of delete 2089909 - PTP e2e testing not working on SNO cluster 2089918 - oc-mirror skip-missing still returns 404 errors when images do not exist 2089930 - Bump OVN to 22.06 2089933 - Pods do not post readiness status on termination 2089968 - Multus CNI daemonset should use hostPath mounts with type: directory 2089973 - bump libs to k8s 1.24 for OCP 4.11 2089996 - Unnecessary yarn install runs in e2e tests 2090017 - Enable source containers to meet open source requirements 2090049 - destroying GCP cluster which has a compute node without infra id in name would fail to delete 2 k8s firewall-rules and VPC network 2090092 - Will hit error if specify the channel not the latest 2090151 - [RHEL scale up] increase the wait time so that the node has enough time to get ready 2090178 - VM SSH command generated by UI points at api VIP 2090182 - [Nutanix]Create a machineset with invalid image, machine stuck in "Provisioning" phase 2090236 - Only reconcile annotations and status for clusters 2090266 - oc adm release extract is failing on mutli arch image 2090268 - [AWS EFS] Operator not getting installed successfully on Hypershift Guest cluster 2090336 - Multus logging should be disabled prior to release 2090343 - 
Multus debug logging should be enabled temporarily for debugging podsandbox creation failures. 2090358 - Initiating drain log message is displayed before the drain actually starts 2090359 - Nutanix mapi-controller: misleading error message when the failure is caused by wrong credentials 2090405 - [tracker] weird port mapping with asymmetric traffic [rhel-8.6.0.z] 2090430 - gofmt code 2090436 - It takes 30min-60min to update the machine count in custom MachineConfigPools (MCPs) when a node is removed from the pool 2090437 - Bump CNO to k8s 1.24 2090465 - golang version mismatch 2090487 - Change default SNO Networking Type and disallow OpenShiftSDN a supported networking Type 2090537 - failure in ovndb migration when db is not ready in HA mode 2090549 - dpu-network-operator shall be able to run on amd64 arch platform 2090621 - Metal3 plugin does not work properly with updated NodeMaintenance CRD 2090627 - Git commit and branch are empty in MetalLB log 2090692 - Bump to latest 1.24 k8s release 2090730 - must-gather should include multus logs. 
2090731 - nmstate deploys two instances of webhook on a single-node cluster 2090751 - oc image mirror skip-missing flag does not skip images 2090755 - MetalLB: BGPAdvertisement validation allows duplicate entries for ip pool selector, ip address pools, node selector and bgp peers 2090774 - Add Readme to plugin directory 2090794 - MachineConfigPool cannot apply a configuration after fixing the pods that caused a drain alert 2090809 - gm.ClockClass invalid syntax parse error in linux ptp daemon logs 2090816 - OCP 4.8 Baremetal IPI installation failure: "Bootstrap failed to complete: timed out waiting for the condition" 2090819 - oc-mirror does not catch invalid registry input when a namespace is specified 2090827 - Rebase CoreDNS to 1.9.2 and k8s 1.24 2090829 - Bump OpenShift router to k8s 1.24 2090838 - Flaky test: ignore flapping host interface 'tunbr' 2090843 - addLogicalPort() performance/scale optimizations 2090895 - Dynamic plugin nav extension "startsWith" property does not work 2090929 - [etcd] cluster-backup.sh script has a conflict to use the '/etc/kubernetes/static-pod-certs' folder if a custom API certificate is defined 2090993 - [AI Day2] Worker node overview page crashes in Openshift console with TypeError 2091029 - Cancel rollout action only appears when rollout is completed 2091030 - Some BM may fail booting with default bootMode strategy 2091033 - [Descheduler]: provide ability to override included/excluded namespaces 2091087 - ODC Helm backend Owners file needs updates 2091106 - Dependabot alert: Unhandled exception in gopkg.in/yaml.v3 2091142 - Dependabot alert: Unhandled exception in gopkg.in/yaml.v3 2091167 - IPsec runtime enabling not work in hypershift 2091218 - Update Dev Console Helm backend to use helm 3.9.0 2091433 - Update AWS instance types 2091542 - Error Loading/404 not found page shown after clicking "Current namespace only" 2091547 - Internet connection test with proxy permanently fails 2091567 - oVirt CSI driver should use latest 
go-ovirt-client 2091595 - Alertmanager configuration can't use OpsGenie's entity field when AlertmanagerConfig is enabled 2091599 - PTP Dual Nic | Extend Events 4.11 - Up/Down master interface affects all the other interface in the same NIC accoording the events and metric 2091603 - WebSocket connection restarts when switching tabs in WebTerminal 2091613 - simple-kmod fails to build due to missing KVC 2091634 - OVS 2.15 stops handling traffic once ovs-dpctl(2.17.2) is used against it 2091730 - MCO e2e tests are failing with "No token found in openshift-monitoring secrets" 2091746 - "Oh no! Something went wrong" shown after user creates MCP without 'spec' 2091770 - CVO gets stuck downloading an upgrade, with the version pod complaining about invalid options 2091854 - clusteroperator status filter doesn't match all values in Status column 2091901 - Log stream paused right after updating log lines in Web Console in OCP4.10 2091902 - unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server has received too many requests and has asked us to try again later 2091990 - wrong external-ids for ovn-controller lflow-cache-limit-kb 2092003 - PR 3162 | BZ 2084450 - invalid URL schema for AWS causes tests to perma fail and break the cloud-network-config-controller 2092041 - Bump cluster-dns-operator to k8s 1.24 2092042 - Bump cluster-ingress-operator to k8s 1.24 2092047 - Kube 1.24 rebase for cloud-network-config-controller 2092137 - Search doesn't show all entries when name filter is cleared 2092296 - Change Default MachineCIDR of Power VS Platform from 10.x to 192.168.0.0/16 2092390 - [RDR] [UI] Multiple instances of Object Bucket, Object Bucket Claims and 'Overview' tab is present under Storage section on the Hub cluster when navigated back from the Managed cluster using the Hybrid console dropdown 2092395 - etcdHighNumberOfFailedGRPCRequests alerts with wrong results 2092408 - Wrong icon is used in the virtualization overview permissions card 
2092414 - In virtualization overview "running vm per templates" template list can be improved 2092442 - Minimum time between drain retries is not the expected one 2092464 - marketplace catalog defaults to v4.10 2092473 - libovsdb performance backports 2092495 - ovn: use up to 4 northd threads in non-SNO clusters 2092502 - [azure-file-csi-driver] Stop shipping a NFS StorageClass 2092509 - Invalid memory address error if non existing caBundle is configured in DNS-over-TLS using ForwardPlugins 2092572 - acm-simple-kmod chart should create the namespace on the spoke cluster 2092579 - Don't retry pod deletion if objects are not existing 2092650 - [BM IPI with Provisioning Network] Worker nodes are not provisioned: ironic-agent is stuck before writing into disks 2092703 - Incorrect mount propagation information in container status 2092815 - can't delete the unwanted image from registry by oc-mirror 2092851 - [Descheduler]: allow to customize the LowNodeUtilization strategy thresholds 2092867 - make repository name unique in acm-ice/acm-simple-kmod examples 2092880 - etcdHighNumberOfLeaderChanges returns incorrect number of leadership changes 2092887 - oc-mirror list releases command uses filter-options flag instead of filter-by-os 2092889 - Incorrect updating of EgressACLs using direction "from-lport" 2092918 - CVE-2022-30321 go-getter: unsafe download (issue 1 of 3) 2092923 - CVE-2022-30322 go-getter: unsafe download (issue 2 of 3) 2092925 - CVE-2022-30323 go-getter: unsafe download (issue 3 of 3) 2092928 - CVE-2022-26945 go-getter: command injection vulnerability 2092937 - WebScale: OVN-k8s forwarding to external-gw over the secondary interfaces failing 2092966 - [OCP 4.11] [azure] /etc/udev/rules.d/66-azure-storage.rules missing from initramfs 2093044 - Azure machine-api-provider-azure Availability Set Name Length Limit 2093047 - Dynamic Plugins: Generated API markdown duplicates checkAccess and useAccessReview doc 2093126 - [4.11] Bootimage bump tracker 2093236 - DNS operator stopped reconciling after 4.10 to 4.11 upgrade | 4.11 nightly to 4.11 nightly upgrade 2093288 - Default catalogs fails liveness/readiness probes 2093357 - Upgrading sno spoke with acm-ice, causes the sno to get unreachable 2093368 - Installer orphans FIPs created for LoadBalancer Services on cluster destroy
2093396 - Remove node-tainting for too-small MTU 2093445 - ManagementState reconciliation breaks SR 2093454 - Router proxy protocol doesn't work with dual-stack (IPv4 and IPv6) clusters 2093462 - Ingress Operator isn't reconciling the ingress cluster operator object 2093586 - Topology: Ctrl+space opens the quick search modal, but doesn't close it again 2093593 - Import from Devfile shows configuration options that shoudn't be there 2093597 - Import: Advanced option sentence is splited into two parts and headlines has no padding 2093600 - Project access tab should apply new permissions before it delete old ones 2093601 - Project access page doesn't allow the user to update the settings twice (without manually reload the content) 2093783 - Should bump cluster-kube-descheduler-operator to kubernetes version V1.24 2093797 - 'oc registry login' with serviceaccount function need update 2093819 - An etcd member for a new machine was never added to the cluster 2093930 - Gather console helm install totals metric 2093957 - Oc-mirror write dup metadata to registry backend 2093986 - Podsecurity violation error getting logged for pod-identity-webhook 2093992 - Cluster version operator acknowledges upgrade failing on periodic-ci-openshift-release-master-nightly-4.11-e2e-metal-ipi-upgrade-ovn-ipv6 2094023 - Add Git Flow - Template Labels for Deployment show as DeploymentConfig 2094024 - bump oauth-apiserver deps to include 1.23.1 k8s that fixes etcd blips 2094039 - egressIP panics with nil pointer dereference 2094055 - Bump coreos-installer for s390x Secure Execution 2094071 - No runbook created for SouthboundStale alert 2094088 - Columns in NBDB may never be updated by OVNK 2094104 - Demo dynamic plugin image tests should be skipped when testing console-operator 2094152 - Alerts in the virtualization overview status card aren't filtered 2094196 - Add default and validating webhooks for Power VS MAPI 2094227 - Topology: Create Service Binding should not be the last option (even 
under delete) 2094239 - custom pool Nodes with 0 nodes are always populated in progress bar 2094303 - If og is configured with sa, operator installation will be failed. 2094335 - [Nutanix] - debug logs are enabled by default in machine-controller 2094342 - apirequests limits of Cluster CAPI Operator are too low for Azure platform 2094438 - Make AWS URL parsing more lenient for GetNodeEgressIPConfiguration 2094525 - Allow automatic upgrades for efs operator 2094532 - ovn-windows CI jobs are broken 2094675 - PTP Dual Nic | Extend Events 4.11 - when kill the phc2sys We have notification for the ptp4l physical master moved to free run 2094694 - [Nutanix] No cluster name sanity validation - cluster name with a dot (".") character 2094704 - Verbose log activated on kube-rbac-proxy in deployment prometheus-k8s 2094801 - Kuryr controller keep restarting when handling IPs with leading zeros 2094806 - Machine API oVrit component should use K8s 1.24 dependencies 2094816 - Kuryr controller restarts when over quota 2094833 - Repository overview page does not show default PipelineRun template for developer user 2094857 - CloudShellTerminal loops indefinitely if DevWorkspace CR goes into failed state 2094864 - Rebase CAPG to latest changes 2094866 - oc-mirror does not always delete all manifests associated with an image during pruning 2094896 - Run 'openshift-install agent create image' has segfault exception if cluster-manifests directory missing 2094902 - Fix installer cross-compiling 2094932 - MGMT-10403 Ingress should enable single-node cluster expansion on upgraded clusters 2095049 - managed-csi StorageClass does not create PVs 2095071 - Backend tests fails after devfile registry update 2095083 - Observe > Dashboards: Graphs may change a lot on automatic refresh 2095110 - [ovn] northd container termination script must use bash 2095113 - [ovnkube] bump to openvswitch2.17-2.17.0-22.el8fdp 2095226 - Added changes to verify cloud connection and dhcpservices quota of a powervs 
instance 2095229 - ingress-operator pod in CrashLoopBackOff in 4.11 after upgrade starting in 4.6 due to go panic 2095231 - Kafka Sink sidebar in topology is empty 2095247 - Event sink form doesn't show channel as sink until app is refreshed 2095248 - [vSphere-CSI-Driver] does not report volume count limits correctly caused pod with multi volumes maybe schedule to not satisfied volume count node 2095256 - Samples Owner needs to be Updated 2095264 - ovs-configuration.service fails with Error: Failed to modify connection 'ovs-if-br-ex': failed to update connection: error writing to file '/etc/NetworkManager/systemConnectionsMerged/ovs-if-br-ex.nmconnection' 2095362 - oVirt CSI driver operator should use latest go-ovirt-client 2095574 - e2e-agnostic CI job fails 2095687 - Debug Container shown for build logs and on click ui breaks 2095703 - machinedeletionhooks doesn't work in vsphere cluster and BM cluster 2095716 - New PSA component for Pod Security Standards enforcement is refusing openshift-operators ns 2095756 - CNO panics with concurrent map read/write 2095772 - Memory requests for ovnkube-master containers are over-sized 2095917 - Nutanix set osDisk with diskSizeGB rather than diskSizeMiB 2095941 - DNS Traffic not kept local to zone or node when Calico SDN utilized 2096053 - Builder Image icons in Git Import flow are hard to see in Dark mode 2096226 - crio fails to bind to tentative IP, causing service failure since RHOCS was rebased on RHEL 8.6 2096315 - NodeClockNotSynchronising alert's severity should be critical 2096350 - Web console doesn't display webhook errors for upgrades 2096352 - Collect whole journal in gather 2096380 - acm-simple-kmod references deprecated KVC example 2096392 - Topology node icons are not properly visible in Dark mode 2096394 - Add page Card items background color does not match with column background color in Dark mode 2096413 - br-ex not created due to default bond interface having a different mac address than expected 2096496 - 
FIPS issue on OCP SNO with RT Kernel via performance profile 2096605 - [vsphere] no validation checking for diskType 2096691 - [Alibaba 4.11] Specifying ResourceGroup id in install-config.yaml, New pv are still getting created to default ResourceGroups 2096855 - oc adm release new
failed with error when use an existing multi-arch release image as input 2096905 - Openshift installer should not use the prism client embedded in nutanix terraform provider 2096908 - Dark theme issue in pipeline builder, Helm rollback form, and Git import 2097000 - KafkaConnections disappear from Topology after creating KafkaSink in Topology 2097043 - No clean way to specify operand issues to KEDA OLM operator 2097047 - MetalLB: matchExpressions used in CR like L2Advertisement, BGPAdvertisement, BGPPeers allow duplicate entries 2097067 - ClusterVersion history pruner does not always retain initial completed update entry 2097153 - poor performance on API call to vCenter ListTags with thousands of tags 2097186 - PSa autolabeling in 4.11 env upgraded from 4.10 does not work due to missing RBAC objects 2097239 - Change Lower CPU limits for Power VS cloud 2097246 - Kuryr: verify and unit jobs failing due to upstream OpenStack dropping py36 support 2097260 - openshift-install create manifests failed for Power VS platform 2097276 - MetalLB CI deploys the operator via manifests and not using the csv 2097282 - chore: update external-provisioner to the latest upstream release 2097283 - chore: update external-snapshotter to the latest upstream release 2097284 - chore: update external-attacher to the latest upstream release 2097286 - chore: update node-driver-registrar to the latest upstream release 2097334 - oc plugin help shows 'kubectl' 2097346 - Monitoring must-gather doesn't seem to be working anymore in 4.11 2097400 - Shared Resource CSI Driver needs additional permissions for validation webhook 2097454 - Placeholder bug for OCP 4.11.0 metadata release 2097503 - chore: rebase against latest external-resizer 2097555 - IngressControllersNotUpgradeable: load balancer service has been modified; changes must be reverted before upgrading 2097607 - Add Power VS support to Webhooks tests in actuator e2e test 2097685 - Ironic-agent can't restart because of existing container 
2097716 - settings under httpConfig is dropped with AlertmanagerConfig v1beta1 2097810 - Required Network tools missing for Testing e2e PTP 2097832 - clean up unused IPv6DualStackNoUpgrade feature gate 2097940 - openshift-install destroy cluster traps if vpcRegion not specified 2097954 - 4.11 installation failed at monitoring and network clusteroperators with error "conmon: option parsing failed: Unknown option --log-global-size-max" making all jobs failing 2098172 - oc-mirror does not validate the registry in the storage config 2098175 - invalid license in python-dataclasses-0.8-2.el8 spec 2098177 - python-pint-0.10.1-2.el8 has unused Patch0 in spec file 2098242 - typo in SRO specialresourcemodule 2098243 - Add error check to Platform create for Power VS 2098392 - [OCP 4.11] Ironic cannot match "wwn" rootDeviceHint for a multipath device 2098508 - Control-plane-machine-set-operator report panic 2098610 - No need to check the push permission with ?manifests-only option 2099293 - oVirt cluster API provider should use latest go-ovirt-client 2099330 - Edit application grouping is shown to user with view only access in a cluster 2099340 - CAPI e2e tests for AWS are missing 2099357 - ovn-kubernetes needs explicit RBAC coordination leases for 1.24 bump 2099358 - Dark mode+Topology update: Unexpected selected+hover border and background colors for app groups 2099528 - Layout issue: No spacing in delete modals 2099561 - Prometheus returns HTTP 500 error on /favicon.ico 2099582 - Format and update Repository overview content 2099611 - Failures on etcd-operator watch channels 2099637 - Should print error when use --keep-manifest-list=false for manifestlist image 2099654 - Topology performance: Endless rerender loop when showing a Http EventSink (KameletBinding) 2099668 - KubeControllerManager should degrade when GC stops working 2099695 - Update CAPG after rebase 2099751 - specialresourcemodule stacktrace while looping over build status 2099755 - EgressIP node's mgmtIP
reachability configuration option 2099763 - Update icons for event sources and sinks in topology, Add page, and context menu 2099811 - UDP Packet loss in OpenShift using IPv6 [upcall] 2099821 - exporting a pointer for the loop variable 2099875 - The speaker won't start if there's another component on the host listening on 8080 2099899 - oc-mirror looks for layers in the wrong repository when searching for release images during publishing 2099928 - [FJ OCP4.11 Bug]: Add unit tests to image_customization_test file 2099968 - [Azure-File-CSI] failed to provisioning volume in ARO cluster 2100001 - Sync upstream v1.22.0 downstream 2100007 - Run bundle-upgrade failed from the traditional File-Based Catalog installed operator 2100033 - OCP 4.11 IPI - Some csr remain "Pending" post deployment 2100038 - failure to update special-resource-lifecycle table during update Event 2100079 - SDN needs explicit RBAC coordination leases for 1.24 bump 2100138 - release info --bugs has no differentiator between Jira and Bugzilla 2100155 - kube-apiserver-operator should raise an alert when there is a Pod Security admission violation 2100159 - Dark theme: Build icon for pending status is not inverted in topology sidebar 2100323 - Sqlit-based catsrc cannot be ready due to "Error: open ./db-xxxx: permission denied" 2100347 - KASO retains old config values when switching from Medium/Default to empty worker latency profile 2100356 - Remove Condition tab and create option from console as it is deprecated in OSP-1.8 2100439 - [gce-pd] GCE PD in-tree storage plugin tests not running 2100496 - [OCPonRHV]-oVirt API returns affinity groups without a description field 2100507 - Remove redundant log lines from obj_retry.go 2100536 - Update API to allow EgressIP node reachability check 2100601 - Update CNO to allow EgressIP node reachability check 2100643 - [Migration] [GCP]OVN can not rollback to SDN 2100644 - openshift-ansible FTBFS on RHEL8 2100669 - Telemetry should not log the full path if it 
contains a username 2100749 - [OCP 4.11] multipath support needs multipath modules 2100825 - Update machine-api-powervs go modules to latest version 2100841 - tiny openshift-install usability fix for setting KUBECONFIG 2101460 - An etcd member for a new machine was never added to the cluster 2101498 - Revert Bug 2082599: add upper bound to number of failed attempts 2102086 - The base image is still 4.10 for operator-sdk 1.22 2102302 - Dummy bug for 4.10 backports 2102362 - Valid regions should be allowed in GCP install config 2102500 - Kubernetes NMState pods can not evict due to PDB on an SNO cluster 2102639 - Drain happens before other image-registry pod is ready to service requests, causing disruption 2102782 - topolvm-controller get into CrashLoopBackOff few minutes after install 2102834 - [cloud-credential-operator]container has runAsNonRoot and image will run as root 2102947 - [VPA] recommender is logging errors for pods with init containers 2103053 - [4.11] Backport Prow CI improvements from master 2103075 - Listing secrets in all namespaces with a specific labelSelector does not work properly 2103080 - br-ex not created due to default bond interface having a different mac address than expected 2103177 - disabling ipv6 router advertisements using "all" does not disable it on secondary interfaces 2103728 - Carry HAProxy patch 'BUG/MEDIUM: h2: match absolute-path not path-absolute for :path' 2103749 - MachineConfigPool is not getting updated 2104282 - heterogeneous arch: oc adm extract encodes arch specific release payload pullspec rather than the manifestlisted pullspec 2104432 - [dpu-network-operator] Updating images to be consistent with ART 2104552 - kube-controller-manager operator 4.11.0-rc.0 degraded on disabled monitoring stack 2104561 - 4.10 to 4.11 update: Degraded node: unexpected on-disk state: mode mismatch for file: "/etc/crio/crio.conf.d/01-ctrcfg-pidsLimit"; expected: -rw-r--r--/420/0644; received: ----------/0/0 2104589 - must-gather namespace 
should have 'privileged' warn and audit pod security labels besides enforce 2104701 - In CI 4.10 HAProxy must-gather takes longer than 10 minutes 2104717 - NetworkPolicies: ovnkube-master pods crashing due to panic: "invalid memory address or nil pointer dereference" 2104727 - Bootstrap node should honor http proxy 2104906 - Uninstall fails with Observed a panic: runtime.boundsError 2104951 - Web console doesn't display webhook errors for upgrades 2104991 - Completed pods may not be correctly cleaned up 2105101 - NodeIP is used instead of EgressIP if egressPod is recreated within 60 seconds 2105106 - co/node-tuning: Waiting for 15/72 Profiles to be applied 2105146 - Degraded=True noise with: UpgradeBackupControllerDegraded: unable to retrieve cluster version, no completed update was found in cluster version status history 2105167 - BuildConfig throws error when using a label with a / in it 2105334 - vmware-vsphere-csi-driver-controller can't use host port error on e2e-vsphere-serial 2105382 - Add a validation webhook for Nutanix machine provider spec in Machine API Operator 2105468 - The ccoctl does not seem to know how to leverage the VMs service account to talk to GCP APIs. 2105937 - telemeter golangci-lint outdated blocking ART PRs that update to Go1.18 2106051 - Unable to deploy acm-ice using latest SRO 4.11 build 2106058 - vSphere defaults to SecureBoot on; breaks installation of out-of-tree drivers [4.11.0] 2106062 - [4.11] Bootimage bump tracker 2106116 - IngressController spec.tuningOptions.healthCheckInterval validation allows invalid values such as "0abc" 2106163 - Samples ImageStreams vs. registry.redhat.io: unsupported: V2 schema 1 manifest digests are no longer supported for image pulls 2106313 - bond-cni: backport bond-cni GA items to 4.11 2106543 - Typo in must-gather release-4.10 2106594 - crud/other-routes.spec.ts Cypress test failing at a high rate in CI 2106723 - [4.11] Upgrade from 4.11.0-rc0 -> 4.11.0-rc.1 failed.
rpm-ostree status shows No space left on device 2106855 - [4.11.z] externalTrafficPolicy=Local is not working in local gateway mode if ovnkube-node is restarted 2107493 - ReplicaSet prometheus-operator-admission-webhook has timed out progressing 2107501 - metallb greenwave tests failure 2107690 - Driver Container builds fail with "error determining starting point for build: no FROM statement found" 2108175 - etcd backup seems to not be triggered in 4.10.18-->4.10.20 upgrade 2108617 - [oc adm release] extraction of the installer against a manifestlisted payload referenced by tag leads to a bad release image reference 2108686 - rpm-ostreed: start limit hit easily 2110505 - [Upgrade]deployment openshift-machine-api/machine-api-operator has a replica failure FailedCreate 2110715 - openshift-controller-manager(-operator) namespace should clear run-level annotations 2111055 - dummy bug for 4.10.z bz2110938
- References:
https://access.redhat.com/security/cve/CVE-2018-25009
https://access.redhat.com/security/cve/CVE-2018-25010
https://access.redhat.com/security/cve/CVE-2018-25012
https://access.redhat.com/security/cve/CVE-2018-25013
https://access.redhat.com/security/cve/CVE-2018-25014
https://access.redhat.com/security/cve/CVE-2018-25032
https://access.redhat.com/security/cve/CVE-2019-5827
https://access.redhat.com/security/cve/CVE-2019-13750
https://access.redhat.com/security/cve/CVE-2019-13751
https://access.redhat.com/security/cve/CVE-2019-17594
https://access.redhat.com/security/cve/CVE-2019-17595
https://access.redhat.com/security/cve/CVE-2019-18218
https://access.redhat.com/security/cve/CVE-2019-19603
https://access.redhat.com/security/cve/CVE-2019-20838
https://access.redhat.com/security/cve/CVE-2020-13435
https://access.redhat.com/security/cve/CVE-2020-14155
https://access.redhat.com/security/cve/CVE-2020-17541
https://access.redhat.com/security/cve/CVE-2020-19131
https://access.redhat.com/security/cve/CVE-2020-24370
https://access.redhat.com/security/cve/CVE-2020-28493
https://access.redhat.com/security/cve/CVE-2020-35492
https://access.redhat.com/security/cve/CVE-2020-36330
https://access.redhat.com/security/cve/CVE-2020-36331
https://access.redhat.com/security/cve/CVE-2020-36332
https://access.redhat.com/security/cve/CVE-2021-3481
https://access.redhat.com/security/cve/CVE-2021-3580
https://access.redhat.com/security/cve/CVE-2021-3634
https://access.redhat.com/security/cve/CVE-2021-3672
https://access.redhat.com/security/cve/CVE-2021-3695
https://access.redhat.com/security/cve/CVE-2021-3696
https://access.redhat.com/security/cve/CVE-2021-3697
https://access.redhat.com/security/cve/CVE-2021-3737
https://access.redhat.com/security/cve/CVE-2021-4115
https://access.redhat.com/security/cve/CVE-2021-4156
https://access.redhat.com/security/cve/CVE-2021-4189
https://access.redhat.com/security/cve/CVE-2021-20095
https://access.redhat.com/security/cve/CVE-2021-20231
https://access.redhat.com/security/cve/CVE-2021-20232
https://access.redhat.com/security/cve/CVE-2021-23177
https://access.redhat.com/security/cve/CVE-2021-23566
https://access.redhat.com/security/cve/CVE-2021-23648
https://access.redhat.com/security/cve/CVE-2021-25219
https://access.redhat.com/security/cve/CVE-2021-31535
https://access.redhat.com/security/cve/CVE-2021-31566
https://access.redhat.com/security/cve/CVE-2021-36084
https://access.redhat.com/security/cve/CVE-2021-36085
https://access.redhat.com/security/cve/CVE-2021-36086
https://access.redhat.com/security/cve/CVE-2021-36087
https://access.redhat.com/security/cve/CVE-2021-38185
https://access.redhat.com/security/cve/CVE-2021-38593
https://access.redhat.com/security/cve/CVE-2021-40528
https://access.redhat.com/security/cve/CVE-2021-41190
https://access.redhat.com/security/cve/CVE-2021-41617
https://access.redhat.com/security/cve/CVE-2021-42771
https://access.redhat.com/security/cve/CVE-2021-43527
https://access.redhat.com/security/cve/CVE-2021-43818
https://access.redhat.com/security/cve/CVE-2021-44225
https://access.redhat.com/security/cve/CVE-2021-44906
https://access.redhat.com/security/cve/CVE-2022-0235
https://access.redhat.com/security/cve/CVE-2022-0778
https://access.redhat.com/security/cve/CVE-2022-1012
https://access.redhat.com/security/cve/CVE-2022-1215
https://access.redhat.com/security/cve/CVE-2022-1271
https://access.redhat.com/security/cve/CVE-2022-1292
https://access.redhat.com/security/cve/CVE-2022-1586
https://access.redhat.com/security/cve/CVE-2022-1621
https://access.redhat.com/security/cve/CVE-2022-1629
https://access.redhat.com/security/cve/CVE-2022-1706
https://access.redhat.com/security/cve/CVE-2022-1729
https://access.redhat.com/security/cve/CVE-2022-2068
https://access.redhat.com/security/cve/CVE-2022-2097
https://access.redhat.com/security/cve/CVE-2022-21698
https://access.redhat.com/security/cve/CVE-2022-22576
https://access.redhat.com/security/cve/CVE-2022-23772
https://access.redhat.com/security/cve/CVE-2022-23773
https://access.redhat.com/security/cve/CVE-2022-23806
https://access.redhat.com/security/cve/CVE-2022-24407
https://access.redhat.com/security/cve/CVE-2022-24675
https://access.redhat.com/security/cve/CVE-2022-24903
https://access.redhat.com/security/cve/CVE-2022-24921
https://access.redhat.com/security/cve/CVE-2022-25313
https://access.redhat.com/security/cve/CVE-2022-25314
https://access.redhat.com/security/cve/CVE-2022-26691
https://access.redhat.com/security/cve/CVE-2022-26945
https://access.redhat.com/security/cve/CVE-2022-27191
https://access.redhat.com/security/cve/CVE-2022-27774
https://access.redhat.com/security/cve/CVE-2022-27776
https://access.redhat.com/security/cve/CVE-2022-27782
https://access.redhat.com/security/cve/CVE-2022-28327
https://access.redhat.com/security/cve/CVE-2022-28733
https://access.redhat.com/security/cve/CVE-2022-28734
https://access.redhat.com/security/cve/CVE-2022-28735
https://access.redhat.com/security/cve/CVE-2022-28736
https://access.redhat.com/security/cve/CVE-2022-28737
https://access.redhat.com/security/cve/CVE-2022-29162
https://access.redhat.com/security/cve/CVE-2022-29810
https://access.redhat.com/security/cve/CVE-2022-29824
https://access.redhat.com/security/cve/CVE-2022-30321
https://access.redhat.com/security/cve/CVE-2022-30322
https://access.redhat.com/security/cve/CVE-2022-30323
https://access.redhat.com/security/cve/CVE-2022-32250
https://access.redhat.com/security/updates/classification/#important
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2022 Red Hat, Inc.

Bugs fixed (https://bugzilla.redhat.com/):
1928937 - CVE-2021-23337 nodejs-lodash: command injection via template
1928954 - CVE-2020-28500 nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions
2054663 - CVE-2022-0512 nodejs-url-parse: authorization bypass through user-controlled key
2057442 - CVE-2022-0639 npm-url-parse: Authorization Bypass Through User-Controlled Key
2060018 - CVE-2022-0686 npm-url-parse: Authorization bypass through user-controlled key
2060020 - CVE-2022-0691 npm-url-parse: authorization bypass through user-controlled key
2085307 - CVE-2022-1650 eventsource: Exposure of Sensitive Information
2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read
This advisory contains the following OpenShift Virtualization 4.11.0 images:
RHEL-8-CNV-4.11
===============
hostpath-provisioner-container-v4.11.0-21
kubevirt-tekton-tasks-operator-container-v4.11.0-29
kubevirt-template-validator-container-v4.11.0-17
bridge-marker-container-v4.11.0-26
hostpath-csi-driver-container-v4.11.0-21
cluster-network-addons-operator-container-v4.11.0-26
ovs-cni-marker-container-v4.11.0-26
virtio-win-container-v4.11.0-16
ovs-cni-plugin-container-v4.11.0-26
kubemacpool-container-v4.11.0-26
hostpath-provisioner-operator-container-v4.11.0-24
cnv-containernetworking-plugins-container-v4.11.0-26
kubevirt-ssp-operator-container-v4.11.0-54
virt-cdi-uploadserver-container-v4.11.0-59
virt-cdi-cloner-container-v4.11.0-59
virt-cdi-operator-container-v4.11.0-59
virt-cdi-importer-container-v4.11.0-59
virt-cdi-uploadproxy-container-v4.11.0-59
virt-cdi-controller-container-v4.11.0-59
virt-cdi-apiserver-container-v4.11.0-59
kubevirt-tekton-tasks-modify-vm-template-container-v4.11.0-7
kubevirt-tekton-tasks-create-vm-from-template-container-v4.11.0-7
kubevirt-tekton-tasks-copy-template-container-v4.11.0-7
checkup-framework-container-v4.11.0-67
kubevirt-tekton-tasks-cleanup-vm-container-v4.11.0-7
kubevirt-tekton-tasks-disk-virt-sysprep-container-v4.11.0-7
kubevirt-tekton-tasks-wait-for-vmi-status-container-v4.11.0-7
kubevirt-tekton-tasks-disk-virt-customize-container-v4.11.0-7
vm-network-latency-checkup-container-v4.11.0-67
kubevirt-tekton-tasks-create-datavolume-container-v4.11.0-7
hyperconverged-cluster-webhook-container-v4.11.0-95
cnv-must-gather-container-v4.11.0-62
hyperconverged-cluster-operator-container-v4.11.0-95
kubevirt-console-plugin-container-v4.11.0-83
virt-controller-container-v4.11.0-105
virt-handler-container-v4.11.0-105
virt-operator-container-v4.11.0-105
virt-launcher-container-v4.11.0-105
virt-artifacts-server-container-v4.11.0-105
virt-api-container-v4.11.0-105
libguestfs-tools-container-v4.11.0-105
hco-bundle-registry-container-v4.11.0-587
Security Fix(es):
- golang: net/http: limit growth of header canonicalization cache (CVE-2021-44716)
- kubeVirt: Arbitrary file read on the host from KubeVirt VMs (CVE-2022-1798)
- golang: out-of-bounds read in golang.org/x/text/language leads to DoS (CVE-2021-38561)
- golang: syscall: don't close fd 0 on ForkExec error (CVE-2021-44717)
- prometheus/client_golang: Denial of service using InstrumentHandlerCounter (CVE-2022-21698)
- golang: math/big: uncontrolled memory consumption due to an unhandled overflow via Rat.SetString (CVE-2022-23772)
- golang: cmd/go: misinterpretation of branch names can lead to incorrect access control (CVE-2022-23773)
- golang: crypto/elliptic: IsOnCurve returns true for invalid field elements (CVE-2022-23806)
- golang: encoding/pem: fix stack overflow in Decode (CVE-2022-24675)
- golang: regexp: stack exhaustion via a deeply nested expression (CVE-2022-24921)
- golang: crash in a golang.org/x/crypto/ssh server (CVE-2022-27191)
- golang: crypto/elliptic: panic caused by oversized scalar (CVE-2022-28327)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.

Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/11258
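Each CVE page referenced above publishes a CVSS v3.1 vector string alongside its numeric base score. As an illustrative sketch (not part of the advisory itself), the score can be recomputed from the vector using the formula in the CVSS v3.1 specification; the function names below are our own, and only the Scope:Unchanged case is handled:

```python
# CVSS v3.1 metric weights from the FIRST specification (base metrics only).
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2}   # Attack Vector
AC = {"L": 0.77, "H": 0.44}                        # Attack Complexity
PR = {"N": 0.85, "L": 0.62, "H": 0.27}             # Privileges Required (Scope: Unchanged)
UI = {"N": 0.85, "R": 0.62}                        # User Interaction
CIA = {"N": 0.0, "L": 0.22, "H": 0.56}             # Confidentiality / Integrity / Availability

def roundup(x):
    """Round up to one decimal place, per the spec's Roundup() pseudocode."""
    i = round(x * 100000)
    return i / 100000 if i % 10000 == 0 else (i // 10000 + 1) / 10

def base_score(vector):
    # "CVSS:3.1/AV:N/AC:L/..." -> {"AV": "N", "AC": "L", ...}
    m = dict(part.split(":") for part in vector.split("/")[1:])
    if m["S"] != "U":
        raise NotImplementedError("only Scope:Unchanged is handled in this sketch")
    iss = 1 - (1 - CIA[m["C"]]) * (1 - CIA[m["I"]]) * (1 - CIA[m["A"]])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[m["AV"]] * AC[m["AC"]] * PR[m["PR"]] * UI[m["UI"]]
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

print(base_score("CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:L"))  # prints 5.3
```

Running this against the vector NVD records for CVE-2020-14155 (listed further down in this entry) reproduces its 5.3 (Medium) rating, and the common all-High network vector evaluates to the familiar 9.8.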
- Bugs fixed (https://bugzilla.redhat.com/):
1937609 - VM cannot be restarted
1945593 - Live migration should be blocked for VMs with host devices
1968514 - [RFE] Add cancel migration action to virtctl
1993109 - CNV MacOS Client not signed
1994604 - [RFE] - Add a feature to virtctl to print out a message if virtctl is a different version than the server side
2001385 - no "name" label in virt-operator pod
2009793 - KBase to clarify nested support status is missing
2010318 - with sysprep config data as cfgmap volume and as cdrom disk a windows10 VMI fails to LiveMigrate
2025276 - No permissions when trying to clone to a different namespace (as Kubeadmin)
2025401 - [TEST ONLY] [CNV+OCS/ODF] Virtualization poison pill implementation
2026357 - Migration in sequence can be reported as failed even when it succeeded
2029349 - cluster-network-addons-operator does not serve metrics through HTTPS
2030801 - CVE-2021-44716 golang: net/http: limit growth of header canonicalization cache
2030806 - CVE-2021-44717 golang: syscall: don't close fd 0 on ForkExec error
2031857 - Add annotation for URL to download the image
2033077 - KubeVirtComponentExceedsRequestedMemory Prometheus Rule is Failing to Evaluate
2035344 - kubemacpool-mac-controller-manager not ready
2036676 - NoReadyVirtController and NoReadyVirtOperator are never triggered
2039976 - Pod stuck in "Terminating" state when removing VM with kernel boot and container disks
2040766 - A crashed Windows VM cannot be restarted with virtctl or the UI
2041467 - [SSP] Support custom DataImportCron creating in custom namespaces
2042402 - LiveMigration with postcopy misbehave when failure occurs
2042809 - sysprep disk requires autounattend.xml if an unattend.xml exists
2045086 - KubeVirtComponentExceedsRequestedMemory Prometheus Rule is Failing to Evaluate
2045880 - CVE-2022-21698 prometheus/client_golang: Denial of service using InstrumentHandlerCounter
2047186 - When entering to a RH supported template, it changes the project (namespace) to "OpenShift"
2051899 - 4.11.0 containers
2052094 - [rhel9-cnv] VM fails to start, virt-handler error msg: Couldn't configure ip nat rules
2052466 - Event does not include reason for inability to live migrate
2052689 - Overhead Memory consumption calculations are incorrect
2053429 - CVE-2022-23806 golang: crypto/elliptic: IsOnCurve returns true for invalid field elements
2053532 - CVE-2022-23772 golang: math/big: uncontrolled memory consumption due to an unhandled overflow via Rat.SetString
2053541 - CVE-2022-23773 golang: cmd/go: misinterpretation of branch names can lead to incorrect access control
2056467 - virt-template-validator pods getting scheduled on the same node
2057157 - [4.10.0] HPP-CSI-PVC fails to bind PVC when node fqdn is long
2057310 - qemu-guest-agent does not report information due to selinux denials
2058149 - cluster-network-addons-operator deployment's MULTUS_IMAGE is pointing to brew image
2058925 - Must-gather: for vms with longer name, gather_vms_details fails to collect qemu, dump xml logs
2059121 - [CNV-4.11-rhel9] virt-handler pod CrashLoopBackOff state
2060485 - virtualMachine with duplicate interfaces name causes MACs to be rejected by Kubemacpool
2060585 - [SNO] Failed to find the virt-controller leader pod
2061208 - Cannot delete network Interface if VM has multiqueue for networking enabled.
2061723 - Prevent new DataImportCron to manage DataSource if multiple DataImportCron pointing to same DataSource
2063540 - [CNV-4.11] Authorization Failed When Cloning Source Namespace
2063792 - No DataImportCron for CentOS 7
2064034 - On an upgraded cluster NetworkAddonsConfig seems to be reconciling in a loop
2064702 - CVE-2022-27191 golang: crash in a golang.org/x/crypto/ssh server
2064857 - CVE-2022-24921 golang: regexp: stack exhaustion via a deeply nested expression
2064936 - Migration of vm from VMware reports pvc not large enough
2065014 - Feature Highlights in CNV 4.10 contains links to 4.7
2065019 - "Running VMs per template" in the new overview tab counts VMs that are not running
2066768 - [CNV-4.11-HCO] User Cannot List Resource "namespaces" in API group
2067246 - [CNV]: Unable to ssh to Virtual Machine post changing Flavor tiny to custom
2069287 - Two annotations for VM Template provider name
2069388 - [CNV-4.11] kubemacpool-mac-controller - TLS handshake error
2070366 - VM Snapshot Restore hangs indefinitely when backed by a snapshotclass
2070864 - non-privileged user cannot see catalog tiles
2071488 - "Migrate Node to Node" is confusing.
2071549 - [rhel-9] unable to create a non-root virt-launcher based VM
2071611 - Metrics documentation generators are missing metrics/recording rules
2071921 - Kubevirt RPM is not being built
2073669 - [rhel-9] VM fails to start
2073679 - [rhel-8] VM fails to start: missing virt-launcher-monitor downstream
2073982 - [CNV-4.11-RHEL9] 'virtctl' binary fails with 'rc1' with 'virtctl version' command
2074337 - VM created from registry cannot be started
2075200 - VLAN filtering cannot be configured with Intel X710
2075409 - [CNV-4.11-rhel9] hco-operator and hco-webhook pods CrashLoopBackOff
2076292 - Upgrade from 4.10.1->4.11 using nightly channel, is not completing with error "could not complete the upgrade process. KubeVirt is not with the expected version. Check KubeVirt observed version in the status field of its CR"
2076379 - must-gather: ruletables and qemu logs collected as a part of gather_vm_details scripts are zero bytes file
2076790 - Alert SSPDown is constantly in Firing state
2076908 - clicking on a template in the Running VMs per Template card leads to 404
2077688 - CVE-2022-24675 golang: encoding/pem: fix stack overflow in Decode
2077689 - CVE-2022-28327 golang: crypto/elliptic: panic caused by oversized scalar
2078700 - Windows template boot source should be blank
2078703 - [RFE] Please hide the user defined password when customizing cloud-init
2078709 - VM conditions column have wrong key/values
2078728 - Common template rootDisk is not named correctly
2079366 - rootdisk is not able to edit
2079674 - Configuring preferred node affinity in the console results in wrong yaml and unschedulable VM
2079783 - Actions are broken in topology view
2080132 - virt-launcher logs live migration in nanoseconds if the migration is stuck
2080155 - [RFE] Provide the progress of VM migration in the source virt launcher pod
2080547 - Metrics kubevirt_hco_out_of_band_modifications_count, does not reflect correct modification count when label is added to priorityclass/kubevirt-cluster-critical in a loop
2080833 - Missing cloud init script editor in the scripts tab
2080835 - SSH key is set using cloud init script instead of new api
2081182 - VM SSH command generated by UI points at api VIP
2081202 - cloud-init for Windows VM generated with corrupted "undefined" section
2081409 - when viewing a common template details page, user need to see the message "can't edit common template" on all tabs
2081671 - SSH service created outside the UI is not discoverable
2081831 - [RFE] Improve disk hotplug UX
2082008 - LiveMigration fails due to loss of connection to destination host
2082164 - Migration progress timeout expects absolute progress
2082912 - [CNV-4.11] HCO Being Unable to Reconcile State
2083093 - VM overview tab is crashed
2083097 - "Mount Windows drivers disk" should not show when the template is not "windows"
2083100 - Something keeps loading in the "node selector" modal
2083101 - "Restore default settings" never become available while editing CPU/Memory
2083135 - VM fails to schedule with vTPM in spec
2083256 - SSP Reconcile logging improvement when CR resources are changed
2083595 - [RFE] Disable VM descheduler if the VM is not live migratable
2084102 - [e2e] Many elements are lacking proper selector like 'data-test-id' or 'data-test'
2084122 - [4.11]Clone from filesystem to block on storage api with the same size fails
2084418 - "Invalid SSH public key format" appears when drag ssh key file to "Authorized SSH Key" field
2084431 - User credentials for ssh is not in correct format
2084476 - The Virtual Machine Authorized SSH Key is not shown in the scripts tab.
2091406 - wrong template namespace label when creating a vm with wizard
2091754 - Scheduling and scripts tab should be editable while the VM is running
2091755 - Change bottom "Save" to "Apply" on cloud-init script form
2091756 - The root disk of cloned template should be editable
2091758 - "OS" should be "Operating system" in template filter
2091760 - The provider should be empty if it's not set during cloning
2091761 - Miss "Edit labels" and "Edit annotations" in template kebab button
2091762 - Move notification above the tabs in template details page
2091764 - Clone a template should lead to the template details
2091765 - "Edit bootsource" is keeping in load in template actions dropdown
2091766 - "Are you sure you want to leave this page?" pops up when click the "Templates" link
2091853 - On Snapshot tab of single VM "Restore" button should move to the kebab actions together with the Delete
2091863 - BootSource edit modal should list affected templates
2091868 - Catalog list view has two columns named "BootSource"
2091889 - Devices should be editable for customize template
2091897 - username is missing in the generated ssh command
2091904 - VM is not started if adding "Authorized SSH Key" during vm creation
2091911 - virt-launcher pod remains as NonRoot after LiveMigrating VM from NonRoot to Root
2091940 - SSH is not enabled in vm details after restart the VM
2091945 - delete a template should lead to templates list
2091946 - Add disk modal shows wrong units
2091982 - Got a lot of "Reconciler error" in cdi-deployment log after adding custom DataImportCron to hco
2092048 - When Boot from CD is checked in customized VM creation - Disk source should be Blank
2092052 - Virtualization should be omitted in Catalog breadcrumbs
2092071 - Getting started card in Virtualization overview can not be hidden.
2092079 - Error message stays even when problematic field is dismissed
2092158 - PrometheusRule kubevirt-hyperconverged-prometheus-rule is not getting reconciled by HCO
2092228 - Ensure Machine Type for new VMs is 8.6
2092230 - [RFE] Add indication/mark to deprecated template
2092306 - VM is stucking with WaitingForVolumeBinding if creating via "Boot from CD"
2092337 - os is empty in VM details page
2092359 - [e2e] data-test-id includes all pvc name
2092654 - [RFE] No obvious way to delete the ssh key from the VM
2092662 - No url example for rhel and windows template
2092663 - no hyperlink for URL example in disk source "url"
2092664 - no hyperlink to the cdi uploadproxy URL
2092781 - Details card should be removed for non admins.
2092783 - Top consumers' card should be removed for non admins.
2092787 - Operators links should be removed from Getting started card
2092789 - "Learn more about Operators" link should lead to the Red Hat documentation
2092951 - "Edit BootSource" action should have more explicit information when disabled
2093282 - Remove links to 'all-namespaces/' for non-privileged user
2093691 - Creation flow drawer left padding is broken
2093713 - Required fields in creation flow should be highlighted if empty
2093715 - Optional parameters section in creation flow is missing bottom padding
2093716 - CPU|Memory modal button should say "Restore template settings"
2093772 - Add a service in environment it reminds a pending change in boot order
2093773 - Console crashed if adding a service without serial number
2093866 - Cannot create vm from the template vm-template-example
2093867 - OS for template 'vm-template-example' should matching the version of the image
2094202 - Cloud-init username field should have hint
2094207 - Cloud-init password field should have auto-generate option
2094208 - SSH key input is missing validation
2094217 - YAML view should reflect changes in SSH form
2094222 - "?" icon should be placed after red asterisk in required fields
2094323 - Workload profile should be editable in template details page
2094405 - adding resource on environment isn't showing on disks list when vm is running
2094440 - Utilization pie charts figures are not based on current data
2094451 - PVC selection in VM creation flow does not work for non-priv user
2094453 - CD Source selection in VM creation flow is missing Upload option
2094465 - Typo in Source tooltip
2094471 - Node selector modal for non-privileged user
2094481 - Tolerations modal for non-privileged user
2094486 - Add affinity rule modal
2094491 - Affinity rules modal button
2094495 - Descheduler modal has same text in two lines
2094646 - [e2e] Elements on scheduling tab are missing proper data-test-id
2094665 - Dedicated Resources modal for non-privileged user
2094678 - Secrets and ConfigMaps can't be added to Windows VM
2094727 - Creation flow should have VM info in header row
2094807 - hardware devices dropdown has group title even with no devices in cluster
2094813 - Cloudinit password is seen in wizard
2094848 - Details card on Overview page - 'View details' link is missing
2095125 - OS is empty in the clone modal
2095129 - "undefined" appears in rootdisk line in clone modal
2095224 - affinity modal for non-privileged users
2095529 - VM migration cancelation in kebab action should have shorter name
2095530 - Column sizes in VM list view
2095532 - Node column in VM list view is visible to non-privileged user
2095537 - Utilization card information should display pie charts as current data and sparkline charts as overtime
2095570 - Details tab of VM should not have Node info for non-privileged user
2095573 - Disks created as environment or scripts should have proper label
2095953 - VNC console controls layout
2095955 - VNC console tabs
2096166 - Template "vm-template-example" is binding with namespace "default"
2096206 - Inconsistent capitalization in Template Actions
2096208 - Templates in the catalog list is not sorted
2096263 - Incorrectly displaying units for Disks size or Memory field in various places
2096333 - virtualization overview, related operators title is not aligned
2096492 - Cannot create vm from a cloned template if its boot source is edited
2096502 - "Restore template settings" should be removed from template CPU editor
2096510 - VM can be created without any disk
2096511 - Template shows "no Boot Source" and label "Source available" at the same time
2096620 - in templates list, edit boot reference kebab action opens a modal with different title
2096781 - Remove boot source provider while edit boot source reference
2096801 - vnc thumbnail in virtual machine overview should be active on page load
2096845 - Windows template's scripts tab is crashed
2097328 - virtctl guestfs shouldn't required uid = 0
2097370 - missing titles for optional parameters in wizard customization page
2097465 - Count is not updating for 'prometheusrule' component when metrics kubevirt_hco_out_of_band_modifications_count executed
2097586 - AccessMode should stay on ReadWriteOnce while editing a disk with storage class HPP
2098134 - "Workload profile" column is not showing completely in template list
2098135 - Workload is not showing correct in catalog after change the template's workload
2098282 - Javascript error when changing boot source of custom template to be an uploaded file
2099443 - No "Quick create virtualmachine" button for template 'vm-template-example'
2099533 - ConsoleQuickStart for HCO CR's VM is missing
2099535 - The cdi-uploadproxy certificate url should be opened in a new tab
2099539 - No storage option for upload while editing a disk
2099566 - Cloudinit should be replaced by cloud-init in all places
2099608 - "DynamicB" shows in vm-example disk size
2099633 - Doc links needs to be updated
2099639 - Remove user line from the ssh command section
2099802 - Details card link shouldn't be hard-coded
2100054 - Windows VM with WSL2 guest fails to migrate
2100284 - Virtualization overview is crashed
2100415 - HCO is taking too much time for reconciling kubevirt-plugin deployment
2100495 - CVE-2021-38561 golang: out-of-bounds read in golang.org/x/text/language leads to DoS
2101164 - [dark mode] Number of alerts in Alerts card not visible enough in dark mode
2101192 - AccessMode should stay on ReadWriteOnce while editing a disk with storage class HPP
2101430 - Using CLOUD_USER_PASSWORD in Templates parameters breaks VM review page
2101454 - Cannot add PVC boot source to template in 'Edit Boot Source Reference' view as a non-priv user
2101485 - Cloudinit should be replaced by cloud-init in all places
2101628 - non-priv user cannot load dataSource while edit template's rootdisk
2101954 - [4.11]Smart clone and csi clone leaves tmp unbound PVC and ObjectTransfer
2102076 - Using CLOUD_USER_PASSWORD in Templates parameters breaks VM review page
2102116 - [e2e] elements on Template Scheduling tab are missing proper data-test-id
2102117 - [e2e] elements on VM Scripts tab are missing proper data-test-id
2102122 - non-priv user cannot load dataSource while edit template's rootdisk
2102124 - Cannot add PVC boot source to template in 'Edit Boot Source Reference' view as a non-priv user
2102125 - vm clone modal is displaying DV size instead of PVC size
2102127 - Cannot add NIC to VM template as non-priv user
2102129 - All templates are labeling "source available" in template list page
2102131 - The number of hardware devices is not correct in vm overview tab
2102135 - [dark mode] Number of alerts in Alerts card not visible enough in dark mode
2102143 - vm clone modal is displaying DV size instead of PVC size
2102256 - Add button moved to right
2102448 - VM disk is deleted by uncheck "Delete disks (1x)" on delete modal
2102543 - Add button moved to right
2102544 - VM disk is deleted by uncheck "Delete disks (1x)" on delete modal
2102545 - VM filter has two "Other" checkboxes which are triggered together
2104617 - Storage status report "OpenShift Data Foundation is not available" even the operator is installed
2106175 - All pages are crashed after visit Virtualization -> Overview
2106258 - All pages are crashed after visit Virtualization -> Overview
2110178 - [Docs] Text repetition in Virtual Disk Hot plug instructions
2111359 - kubevirt plugin console is crashed after creating a vm with 2 nics
2111562 - kubevirt plugin console crashed after visit vmi page
2117872 - CVE-2022-1798 kubeVirt: Arbitrary file read on the host from KubeVirt VMs
{ "@context": { "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#", "affected_products": { "@id": "https://www.variotdbs.pl/ref/affected_products" }, "configurations": { "@id": "https://www.variotdbs.pl/ref/configurations" }, "credits": { "@id": "https://www.variotdbs.pl/ref/credits" }, "cvss": { "@id": "https://www.variotdbs.pl/ref/cvss/" }, "description": { "@id": "https://www.variotdbs.pl/ref/description/" }, "exploit_availability": { "@id": "https://www.variotdbs.pl/ref/exploit_availability/" }, "external_ids": { "@id": "https://www.variotdbs.pl/ref/external_ids/" }, "iot": { "@id": "https://www.variotdbs.pl/ref/iot/" }, "iot_taxonomy": { "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/" }, "patch": { "@id": "https://www.variotdbs.pl/ref/patch/" }, "problemtype_data": { "@id": "https://www.variotdbs.pl/ref/problemtype_data/" }, "references": { "@id": "https://www.variotdbs.pl/ref/references/" }, "sources": { "@id": "https://www.variotdbs.pl/ref/sources/" }, "sources_release_date": { "@id": "https://www.variotdbs.pl/ref/sources_release_date/" }, "sources_update_date": { "@id": "https://www.variotdbs.pl/ref/sources_update_date/" }, "threat_type": { "@id": "https://www.variotdbs.pl/ref/threat_type/" }, "title": { "@id": "https://www.variotdbs.pl/ref/title/" }, "type": { "@id": "https://www.variotdbs.pl/ref/type/" } }, "@id": "https://www.variotdbs.pl/vuln/VAR-202006-0222", "affected_products": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/affected_products#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "model": "macos", "scope": "lt", "trust": 1.0, "vendor": "apple", "version": "11.0.1" }, { "model": "h410c", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "gitlab", "scope": "lt", "trust": 1.0, "vendor": "gitlab", "version": "13.1.2" 
}, { "model": "ontap select deploy administration utility", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "cloud backup", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "universal forwarder", "scope": "eq", "trust": 1.0, "vendor": "splunk", "version": "9.1.0" }, { "model": "clustered data ontap", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "steelstore cloud integrated storage", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "universal forwarder", "scope": "gte", "trust": 1.0, "vendor": "splunk", "version": "8.2.0" }, { "model": "universal forwarder", "scope": "gte", "trust": 1.0, "vendor": "splunk", "version": "9.0.0" }, { "model": "communications cloud native core policy", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "1.15.0" }, { "model": "gitlab", "scope": "gte", "trust": 1.0, "vendor": "gitlab", "version": "13.0.0" }, { "model": "gitlab", "scope": "gte", "trust": 1.0, "vendor": "gitlab", "version": "13.1.0" }, { "model": "universal forwarder", "scope": "lt", "trust": 1.0, "vendor": "splunk", "version": "9.0.6" }, { "model": "h300s", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "active iq unified manager", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "h410s", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "gitlab", "scope": "lt", "trust": 1.0, "vendor": "gitlab", "version": "12.10.13" }, { "model": "gitlab", "scope": "lt", "trust": 1.0, "vendor": "gitlab", "version": "13.0.8" }, { "model": "h500s", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "universal forwarder", "scope": "lt", "trust": 1.0, "vendor": "splunk", "version": "8.2.12" }, { "model": "pcre", "scope": "lt", "trust": 1.0, "vendor": "pcre", "version": "8.44" }, { "model": "h700s", "scope": "eq", "trust": 1.0, "vendor": 
"netapp", "version": null } ], "sources": [ { "db": "NVD", "id": "CVE-2020-14155" } ] }, "configurations": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/configurations#", "children": { "@container": "@list" }, "cpe_match": { "@container": "@list" }, "data": { "@container": "@list" }, "nodes": { "@container": "@list" } }, "data": [ { "CVE_data_version": "4.0", "nodes": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:pcre:pcre:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "8.44", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:apple:macos:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "11.0.1", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:gitlab:gitlab:*:*:*:*:community:*:*:*", "cpe_name": [], "versionEndExcluding": "13.1.2", "versionStartIncluding": "13.1.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:gitlab:gitlab:*:*:*:*:enterprise:*:*:*", "cpe_name": [], "versionEndExcluding": "13.1.2", "versionStartIncluding": "13.1.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:gitlab:gitlab:*:*:*:*:community:*:*:*", "cpe_name": [], "versionEndExcluding": "13.0.8", "versionStartIncluding": "13.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:gitlab:gitlab:*:*:*:*:enterprise:*:*:*", "cpe_name": [], "versionEndExcluding": "13.0.8", "versionStartIncluding": "13.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:gitlab:gitlab:*:*:*:*:community:*:*:*", "cpe_name": [], "versionEndExcluding": "12.10.13", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:gitlab:gitlab:*:*:*:*:enterprise:*:*:*", "cpe_name": [], "versionEndExcluding": "12.10.13", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:oracle:communications_cloud_native_core_policy:1.15.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": 
"cpe:2.3:a:netapp:cloud_backup:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:steelstore_cloud_integrated_storage:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:ontap_select_deploy_administration_utility:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:clustered_data_ontap:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:active_iq_unified_manager:-:*:*:*:*:vmware_vsphere:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h410c_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:h410c:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h300s_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:h300s:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h500s_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:h500s:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h700s_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:h700s:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], 
"operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h410s_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:h410s:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" } ] } ], "sources": [ { "db": "NVD", "id": "CVE-2020-14155" } ] }, "credits": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/credits#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Red Hat", "sources": [ { "db": "PACKETSTORM", "id": "165099" }, { "db": "PACKETSTORM", "id": "164927" }, { "db": "PACKETSTORM", "id": "165862" }, { "db": "PACKETSTORM", "id": "165758" }, { "db": "PACKETSTORM", "id": "168042" }, { "db": "PACKETSTORM", "id": "168352" }, { "db": "PACKETSTORM", "id": "168392" } ], "trust": 0.7 }, "cve": "CVE-2020-14155", "cvss": { "@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2" }, "cvssV3": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, "severity": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": "https://www.variotdbs.pl/ref/cvss/severity" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "cvssV2": [ { "acInsufInfo": false, "accessComplexity": "LOW", "accessVector": "NETWORK", "authentication": "NONE", "author": "NVD", "availabilityImpact": "PARTIAL", "baseScore": 5.0, "confidentialityImpact": "NONE", "exploitabilityScore": 10.0, "impactScore": 2.9, "integrityImpact": "NONE", 
"obtainAllPrivilege": false, "obtainOtherPrivilege": false, "obtainUserPrivilege": false, "severity": "MEDIUM", "trust": 1.0, "userInteractionRequired": false, "vectorString": "AV:N/AC:L/Au:N/C:N/I:N/A:P", "version": "2.0" }, { "accessComplexity": "LOW", "accessVector": "NETWORK", "authentication": "NONE", "author": "VULHUB", "availabilityImpact": "PARTIAL", "baseScore": 5.0, "confidentialityImpact": "NONE", "exploitabilityScore": 10.0, "id": "VHN-167005", "impactScore": 2.9, "integrityImpact": "NONE", "severity": "MEDIUM", "trust": 0.1, "vectorString": "AV:N/AC:L/AU:N/C:N/I:N/A:P", "version": "2.0" } ], "cvssV3": [ { "attackComplexity": "LOW", "attackVector": "NETWORK", "author": "NVD", "availabilityImpact": "LOW", "baseScore": 5.3, "baseSeverity": "MEDIUM", "confidentialityImpact": "NONE", "exploitabilityScore": 3.9, "impactScore": 1.4, "integrityImpact": "NONE", "privilegesRequired": "NONE", "scope": "UNCHANGED", "trust": 1.0, "userInteraction": "NONE", "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:L", "version": "3.1" } ], "severity": [ { "author": "NVD", "id": "CVE-2020-14155", "trust": 1.0, "value": "MEDIUM" }, { "author": "VULHUB", "id": "VHN-167005", "trust": 0.1, "value": "MEDIUM" } ] } ], "sources": [ { "db": "VULHUB", "id": "VHN-167005" }, { "db": "NVD", "id": "CVE-2020-14155" } ] }, "description": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "libpcre in PCRE before 8.44 allows an integer overflow via a large number after a (?C substring. PCRE is an open-source regular expression library written in C by software developer Philip Hazel. An input validation error vulnerability exists in libpcre in versions prior to PCRE 8.44. An attacker could exploit this vulnerability with a crafted regular expression to crash an application or potentially execute arbitrary code. 
-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\nAPPLE-SA-2021-02-01-1 macOS Big Sur 11.2, Security Update 2021-001\nCatalina, Security Update 2021-001 Mojave\n\nmacOS Big Sur 11.2, Security Update 2021-001 Catalina, Security\nUpdate 2021-001 Mojave addresses the following issues. Information\nabout the security content is also available at\nhttps://support.apple.com/HT212147. \n\nAnalytics\nAvailable for: macOS Big Sur 11.0.1, macOS Catalina 10.15.7, and\nmacOS Mojave 10.14.6\nImpact: A remote attacker may be able to cause a denial of service\nDescription: This issue was addressed with improved checks. \nCVE-2021-1761: Cees Elzinga\n\nAPFS\nAvailable for: macOS Big Sur 11.0.1\nImpact: A local user may be able to read arbitrary files\nDescription: The issue was addressed with improved permissions logic. \nCVE-2021-1797: Thomas Tempelmann\n\nCFNetwork Cache\nAvailable for: macOS Catalina 10.15.7 and macOS Mojave 10.14.6\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: An integer overflow was addressed with improved input\nvalidation. \nCVE-2020-27945: Zhuo Liang of Qihoo 360 Vulcan Team\n\nCoreAnimation\nAvailable for: macOS Big Sur 11.0.1\nImpact: A malicious application could execute arbitrary code leading\nto compromise of user information\nDescription: A memory corruption issue was addressed with improved\nstate management. \nCVE-2021-1760: @S0rryMybad of 360 Vulcan Team\n\nCoreAudio\nAvailable for: macOS Big Sur 11.0.1\nImpact: Processing maliciously crafted web content may lead to code\nexecution\nDescription: An out-of-bounds write was addressed with improved input\nvalidation. 
\nCVE-2021-1747: JunDong Xie of Ant Security Light-Year Lab\n\nCoreGraphics\nAvailable for: macOS Big Sur 11.0.1, macOS Catalina 10.15.7, and\nmacOS Mojave 10.14.6\nImpact: Processing a maliciously crafted font file may lead to\narbitrary code execution\nDescription: An out-of-bounds write issue was addressed with improved\nbounds checking. \nCVE-2021-1776: Ivan Fratric of Google Project Zero\n\nCoreMedia\nAvailable for: macOS Big Sur 11.0.1\nImpact: Processing a maliciously crafted image may lead to arbitrary\ncode execution\nDescription: An out-of-bounds read was addressed with improved input\nvalidation. \nCVE-2021-1759: Hou JingYi (@hjy79425575) of Qihoo 360 CERT\n\nCoreText\nAvailable for: macOS Big Sur 11.0.1, macOS Catalina 10.15.7, and\nmacOS Mojave 10.14.6\nImpact: Processing a maliciously crafted text file may lead to\narbitrary code execution\nDescription: A stack overflow was addressed with improved input\nvalidation. \nCVE-2021-1772: Mickey Jin of Trend Micro working with Trend Micro\u2019s\nZero Day Initiative\n\nCoreText\nAvailable for: macOS Big Sur 11.0.1, macOS Catalina 10.15.7, and\nmacOS Mojave 10.14.6\nImpact: A remote attacker may be able to cause arbitrary code\nexecution\nDescription: An out-of-bounds read was addressed with improved bounds\nchecking. \nCVE-2021-1792: Mickey Jin \u0026 Junzhi Lu of Trend Micro working with\nTrend Micro\u2019s Zero Day Initiative\n\nCrash Reporter\nAvailable for: macOS Catalina 10.15.7\nImpact: A remote attacker may be able to cause a denial of service\nDescription: This issue was addressed with improved checks. \nCVE-2021-1761: Cees Elzinga\n\nCrash Reporter\nAvailable for: macOS Big Sur 11.0.1, macOS Catalina 10.15.7, and\nmacOS Mojave 10.14.6\nImpact: A local attacker may be able to elevate their privileges\nDescription: Multiple issues were addressed with improved logic. 
\nCVE-2021-1787: James Hutchins\n\nCrash Reporter\nAvailable for: macOS Big Sur 11.0.1, macOS Catalina 10.15.7, and\nmacOS Mojave 10.14.6\nImpact: A local user may be able to create or modify system files\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2021-1786: Csaba Fitzl (@theevilbit) of Offensive Security\n\nDirectory Utility\nAvailable for: macOS Catalina 10.15.7\nImpact: A malicious application may be able to access private\ninformation\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2020-27937: Wojciech Regu\u0142a (@_r3ggi) of SecuRing\n\nEndpoint Security\nAvailable for: macOS Catalina 10.15.7\nImpact: A local attacker may be able to elevate their privileges\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2021-1802: Zhongcheng Li (@CK01) from WPS Security Response\nCenter\n\nFairPlay\nAvailable for: macOS Big Sur 11.0.1\nImpact: A malicious application may be able to disclose kernel memory\nDescription: An out-of-bounds read issue existed that led to the\ndisclosure of kernel memory. This was addressed with improved input\nvalidation. \nCVE-2021-1791: Junzhi Lu (@pwn0rz), Qi Sun \u0026 Mickey Jin of Trend\nMicro working with Trend Micro\u2019s Zero Day Initiative\n\nFontParser\nAvailable for: macOS Catalina 10.15.7\nImpact: Processing a maliciously crafted font may lead to arbitrary\ncode execution\nDescription: An out-of-bounds read was addressed with improved input\nvalidation. \nCVE-2021-1790: Peter Nguyen Vu Hoang of STAR Labs\n\nFontParser\nAvailable for: macOS Mojave 10.14.6\nImpact: Processing a maliciously crafted font may lead to arbitrary\ncode execution\nDescription: This issue was addressed by removing the vulnerable\ncode. 
\nCVE-2021-1775: Mickey Jin and Qi Sun of Trend Micro\n\nFontParser\nAvailable for: macOS Mojave 10.14.6\nImpact: A remote attacker may be able to leak memory\nDescription: An out-of-bounds read was addressed with improved bounds\nchecking. \nCVE-2020-29608: Xingwei Lin of Ant Security Light-Year Lab\n\nFontParser\nAvailable for: macOS Big Sur 11.0.1 and macOS Catalina 10.15.7\nImpact: A remote attacker may be able to cause arbitrary code\nexecution\nDescription: An out-of-bounds read was addressed with improved bounds\nchecking. \nCVE-2021-1758: Peter Nguyen of STAR Labs\n\nImageIO\nAvailable for: macOS Big Sur 11.0.1\nImpact: Processing a maliciously crafted image may lead to arbitrary\ncode execution\nDescription: An access issue was addressed with improved memory\nmanagement. \nCVE-2021-1783: Xingwei Lin of Ant Security Light-Year Lab\n\nImageIO\nAvailable for: macOS Big Sur 11.0.1\nImpact: Processing a maliciously crafted image may lead to arbitrary\ncode execution\nDescription: An out-of-bounds read was addressed with improved bounds\nchecking. \nCVE-2021-1741: Xingwei Lin of Ant Security Light-Year Lab\nCVE-2021-1743: Mickey Jin \u0026 Junzhi Lu of Trend Micro working with\nTrend Micro\u2019s Zero Day Initiative, Xingwei Lin of Ant Security Light-\nYear Lab\n\nImageIO\nAvailable for: macOS Big Sur 11.0.1\nImpact: Processing a maliciously crafted image may lead to a denial\nof service\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2021-1773: Xingwei Lin of Ant Security Light-Year Lab\n\nImageIO\nAvailable for: macOS Big Sur 11.0.1\nImpact: Processing a maliciously crafted image may lead to a denial\nof service\nDescription: An out-of-bounds read issue existed in the curl. This\nissue was addressed with improved bounds checking. 
\nCVE-2021-1778: Xingwei Lin of Ant Security Light-Year Lab\n\nImageIO\nAvailable for: macOS Big Sur 11.0.1 and macOS Catalina 10.15.7\nImpact: Processing a maliciously crafted image may lead to arbitrary\ncode execution\nDescription: An out-of-bounds read was addressed with improved input\nvalidation. \nCVE-2021-1736: Xingwei Lin of Ant Security Light-Year Lab\nCVE-2021-1785: Xingwei Lin of Ant Security Light-Year Lab\n\nImageIO\nAvailable for: macOS Big Sur 11.0.1, macOS Catalina 10.15.7, and\nmacOS Mojave 10.14.6\nImpact: Processing a maliciously crafted image may lead to a denial\nof service\nDescription: This issue was addressed with improved checks. \nCVE-2021-1766: Danny Rosseau of Carve Systems\n\nImageIO\nAvailable for: macOS Big Sur 11.0.1 and macOS Catalina 10.15.7\nImpact: A remote attacker may be able to cause unexpected application\ntermination or arbitrary code execution\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2021-1818: Xingwei Lin from Ant-Financial Light-Year Security Lab\n\nImageIO\nAvailable for: macOS Big Sur 11.0.1 and macOS Catalina 10.15.7\nImpact: Processing a maliciously crafted image may lead to arbitrary\ncode execution\nDescription: This issue was addressed with improved checks. \nCVE-2021-1742: Xingwei Lin of Ant Security Light-Year Lab\nCVE-2021-1746: Mickey Jin \u0026 Qi Sun of Trend Micro, Xingwei Lin of Ant\nSecurity Light-Year Lab\nCVE-2021-1754: Xingwei Lin of Ant Security Light-Year Lab\nCVE-2021-1774: Xingwei Lin of Ant Security Light-Year Lab\nCVE-2021-1777: Xingwei Lin of Ant Security Light-Year Lab\nCVE-2021-1793: Xingwei Lin of Ant Security Light-Year Lab\n\nImageIO\nAvailable for: macOS Big Sur 11.0.1 and macOS Catalina 10.15.7\nImpact: Processing a maliciously crafted image may lead to arbitrary\ncode execution\nDescription: An out-of-bounds write was addressed with improved input\nvalidation. 
\nCVE-2021-1737: Xingwei Lin of Ant Security Light-Year Lab\nCVE-2021-1738: Lei Sun\nCVE-2021-1744: Xingwei Lin of Ant Security Light-Year Lab\n\nIOKit\nAvailable for: macOS Big Sur 11.0.1\nImpact: An application may be able to execute arbitrary code with\nsystem privileges\nDescription: A logic error in kext loading was addressed with\nimproved state handling. \nCVE-2021-1779: Csaba Fitzl (@theevilbit) of Offensive Security\n\nIOSkywalkFamily\nAvailable for: macOS Big Sur 11.0.1\nImpact: A local attacker may be able to elevate their privileges\nDescription: An out-of-bounds read was addressed with improved bounds\nchecking. \nCVE-2021-1757: Pan ZhenPeng (@Peterpan0927) of Alibaba Security,\nProteas\n\nKernel\nAvailable for: macOS Catalina 10.15.7 and macOS Mojave 10.14.6\nImpact: An application may be able to execute arbitrary code with\nkernel privileges\nDescription: A logic issue existed resulting in memory corruption. \nThis was addressed with improved state management. \nCVE-2020-27904: Zuozhi Fan (@pattern_F_) of Ant Group Tianqiong\nSecurity Lab\n\nKernel\nAvailable for: macOS Big Sur 11.0.1\nImpact: A remote attacker may be able to cause a denial of service\nDescription: A use after free issue was addressed with improved\nmemory management. \nCVE-2021-1764: @m00nbsd\n\nKernel\nAvailable for: macOS Big Sur 11.0.1, macOS Catalina 10.15.7, and\nmacOS Mojave 10.14.6\nImpact: A malicious application may be able to elevate privileges. \nApple is aware of a report that this issue may have been actively\nexploited. \nDescription: A race condition was addressed with improved locking. \nCVE-2021-1782: an anonymous researcher\n\nKernel\nAvailable for: macOS Big Sur 11.0.1, macOS Catalina 10.15.7, and\nmacOS Mojave 10.14.6\nImpact: An application may be able to execute arbitrary code with\nkernel privileges\nDescription: Multiple issues were addressed with improved logic. 
\nCVE-2021-1750: @0xalsr\n\nLogin Window\nAvailable for: macOS Big Sur 11.0.1, macOS Catalina 10.15.7, and\nmacOS Mojave 10.14.6\nImpact: An attacker in a privileged network position may be able to\nbypass authentication policy\nDescription: An authentication issue was addressed with improved\nstate management. \nCVE-2020-29633: Jewel Lambert of Original Spin, LLC. \n\nMessages\nAvailable for: macOS Big Sur 11.0.1, macOS Catalina 10.15.7, and\nmacOS Mojave 10.14.6\nImpact: A user that is removed from an iMessage group could rejoin\nthe group\nDescription: This issue was addressed with improved checks. \nCVE-2021-1771: Shreyas Ranganatha (@strawsnoceans)\n\nModel I/O\nAvailable for: macOS Big Sur 11.0.1\nImpact: Processing a maliciously crafted USD file may lead to\nunexpected application termination or arbitrary code execution\nDescription: An out-of-bounds write was addressed with improved input\nvalidation. \nCVE-2021-1762: Mickey Jin of Trend Micro\n\nModel I/O\nAvailable for: macOS Catalina 10.15.7\nImpact: Processing a maliciously crafted file may lead to heap\ncorruption\nDescription: This issue was addressed with improved checks. \nCVE-2020-29614: ZhiWei Sun (@5n1p3r0010) from Topsec Alpha Lab\n\nModel I/O\nAvailable for: macOS Big Sur 11.0.1, macOS Catalina 10.15.7, and\nmacOS Mojave 10.14.6\nImpact: Processing a maliciously crafted USD file may lead to\nunexpected application termination or arbitrary code execution\nDescription: A buffer overflow was addressed with improved bounds\nchecking. \nCVE-2021-1763: Mickey Jin of Trend Micro working with Trend Micro\u2019s\nZero Day Initiative\n\nModel I/O\nAvailable for: macOS Big Sur 11.0.1, macOS Catalina 10.15.7, and\nmacOS Mojave 10.14.6\nImpact: Processing a maliciously crafted image may lead to heap\ncorruption\nDescription: This issue was addressed with improved checks. 
\nCVE-2021-1767: Mickey Jin \u0026 Junzhi Lu of Trend Micro working with\nTrend Micro\u2019s Zero Day Initiative\n\nModel I/O\nAvailable for: macOS Big Sur 11.0.1, macOS Catalina 10.15.7, and\nmacOS Mojave 10.14.6\nImpact: Processing a maliciously crafted USD file may lead to\nunexpected application termination or arbitrary code execution\nDescription: An out-of-bounds read was addressed with improved input\nvalidation. \nCVE-2021-1745: Mickey Jin \u0026 Junzhi Lu of Trend Micro working with\nTrend Micro\u2019s Zero Day Initiative\n\nModel I/O\nAvailable for: macOS Big Sur 11.0.1, macOS Catalina 10.15.7, and\nmacOS Mojave 10.14.6\nImpact: Processing a maliciously crafted image may lead to arbitrary\ncode execution\nDescription: An out-of-bounds read was addressed with improved bounds\nchecking. \nCVE-2021-1753: Mickey Jin of Trend Micro working with Trend Micro\u2019s\nZero Day Initiative\n\nModel I/O\nAvailable for: macOS Big Sur 11.0.1, macOS Catalina 10.15.7, and\nmacOS Mojave 10.14.6\nImpact: Processing a maliciously crafted USD file may lead to\nunexpected application termination or arbitrary code execution\nDescription: An out-of-bounds read was addressed with improved bounds\nchecking. \nCVE-2021-1768: Mickey Jin \u0026 Junzhi Lu of Trend Micro working with\nTrend Micro\u2019s Zero Day Initiative\n\nNetFSFramework\nAvailable for: macOS Big Sur 11.0.1, macOS Catalina 10.15.7, and\nmacOS Mojave 10.14.6\nImpact: Mounting a maliciously crafted Samba network share may lead\nto arbitrary code execution\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2021-1751: Mikko Kentt\u00e4l\u00e4 (@Turmio_) of SensorFu\n\nOpenLDAP\nAvailable for: macOS Big Sur 11.0.1, macOS Catalina 10.15.7, and\nmacOS Mojave 10.14.6\nImpact: A remote attacker may be able to cause a denial of service\nDescription: This issue was addressed with improved checks. 
\nCVE-2020-25709\n\nPower Management\nAvailable for: macOS Mojave 10.14.6, macOS Catalina 10.15.7\nImpact: A malicious application may be able to elevate privileges\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2020-27938: Tim Michaud (@TimGMichaud) of Leviathan\n\nScreen Sharing\nAvailable for: macOS Big Sur 11.0.1\nImpact: Multiple issues in pcre\nDescription: Multiple issues were addressed by updating to version\n8.44. \nCVE-2019-20838\nCVE-2020-14155\n\nSQLite\nAvailable for: macOS Catalina 10.15.7\nImpact: Multiple issues in SQLite\nDescription: Multiple issues were addressed by updating SQLite to\nversion 3.32.3. \nCVE-2020-15358\n\nSwift\nAvailable for: macOS Big Sur 11.0.1\nImpact: A malicious attacker with arbitrary read and write capability\nmay be able to bypass Pointer Authentication\nDescription: A logic issue was addressed with improved validation. \nCVE-2021-1769: CodeColorist of Ant-Financial Light-Year Labs\n\nWebKit\nAvailable for: macOS Big Sur 11.0.1\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: A use after free issue was addressed with improved\nmemory management. \nCVE-2021-1788: Francisco Alonso (@revskills)\n\nWebKit\nAvailable for: macOS Big Sur 11.0.1\nImpact: Maliciously crafted web content may violate iframe sandboxing\npolicy\nDescription: This issue was addressed with improved iframe sandbox\nenforcement. \nCVE-2021-1765: Eliya Stein of Confiant\nCVE-2021-1801: Eliya Stein of Confiant\n\nWebKit\nAvailable for: macOS Big Sur 11.0.1\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: A type confusion issue was addressed with improved state\nhandling. \nCVE-2021-1789: @S0rryMybad of 360 Vulcan Team\n\nWebKit\nAvailable for: macOS Big Sur 11.0.1\nImpact: A remote attacker may be able to cause arbitrary code\nexecution. Apple is aware of a report that this issue may have been\nactively exploited. 
\nDescription: A logic issue was addressed with improved restrictions. \nCVE-2021-1871: an anonymous researcher\nCVE-2021-1870: an anonymous researcher\n\nWebRTC\nAvailable for: macOS Big Sur 11.0.1\nImpact: A malicious website may be able to access restricted ports on\narbitrary servers\nDescription: A port redirection issue was addressed with additional\nport validation. \nCVE-2021-1799: Gregory Vishnepolsky \u0026 Ben Seri of Armis Security, and\nSamy Kamkar\n\nAdditional recognition\n\nKernel\nWe would like to acknowledge Junzhi Lu (@pwn0rz), Mickey Jin \u0026 Jesse\nChange of Trend Micro for their assistance. \n\nlibpthread\nWe would like to acknowledge CodeColorist of Ant-Financial Light-Year\nLabs for their assistance. \n\nLogin Window\nWe would like to acknowledge Jose Moises Romero-Villanueva of\nCrySolve for their assistance. \n\nMail Drafts\nWe would like to acknowledge Jon Bottarini of HackerOne for their\nassistance. \n\nScreen Sharing Server\nWe would like to acknowledge @gorelics for their assistance. \n\nWebRTC\nWe would like to acknowledge Philipp Hancke for their assistance. 
\n\nThis message is signed with Apple\u0027s Product Security PGP key,\nand details are available at:\nhttps://www.apple.com/support/security/pgp/\n-----BEGIN PGP SIGNATURE-----\n\niQIzBAEBCAAdFiEEbURczHs1TP07VIfuZcsbuWJ6jjAFAmAYgrkACgkQZcsbuWJ6\njjATvhAAmcspGY8ZHJcSUGr9mysz5iT9oGkZcvFa8kcJsFAvFb9Wjz0M2eovBXQc\nD9bD7LrUpodiqkSobB4bEevpD9P8E/T/eRSBxjomKLv5DKHPT4eh/K2EU6R6ubVi\nGGNlT9DJrIxcTJIB2y/yfs8msV2w2/gZDLKJZP4Zh6t8G1sjI17iEaxpOph67aq2\nX0d+P7+7q1mUBa47JEQ+HIUNlfHtBL825cnmHD2Vn1WELQLKZfXBl+nPM9l9naRc\n3vYIvR7xJ5c4bqFx7N9xwGdQ5TRIoDijqADwggGwOZEiVZ7PWifj/iCLUz4Ks4hr\noGVE1UxN1oSX63D44ZQyfiyIWIiMtDV9V4J6mUoUnZ6RTTMoRRAF9DcSVF5/wmHk\nodYnMeouHc543ZyVBtdtwJ/tbuBvTOjzpNn0+UgiyRL9wG/xxQq+gB4vwgSEviek\nbBhyvdxLVWW0ULwFeN5rI5bCQBkv6BB9OSyhD6sMRrp59NAgBBS2nstZG1RAt7XL\n2KZ1GpoNcuDRLj7ElxAfeJuPM1dFVTK48SH56M1FElz/QowZVOXyKgUoaeVTUyAC\n3WOACmFAosFIclCbr8z8yGynX2bsCGBNKv4pKoHlyZCyFHCQw9L6uR2gRkOp86+M\niqHtE2L1WUZvUMCIKxfdixILEfoacSVCxr3+v4SSDOcEbSDYEIA=\n=mUkG\n-----END PGP SIGNATURE-----\n\n\n\n. Summary:\n\nThe Migration Toolkit for Containers (MTC) 1.5.2 is now available. Description:\n\nThe Migration Toolkit for Containers (MTC) enables you to migrate\nKubernetes resources, persistent volume data, and internal container images\nbetween OpenShift Container Platform clusters, using the MTC web console or\nthe Kubernetes API. \n\nSecurity Fix(es):\n\n* nodejs-immer: prototype pollution may lead to DoS or remote code\nexecution (CVE-2021-3757)\n\n* mig-controller: incorrect namespaces handling may lead to not authorized\nusage of Migration Toolkit for Containers (MTC) (CVE-2021-3948)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n2000734 - CVE-2021-3757 nodejs-immer: prototype pollution may lead to DoS or remote code execution\n2005438 - Combining Rsync and Stunnel in a single pod can degrade performance (1.5 backport)\n2006842 - MigCluster CR remains in \"unready\" state and source registry is inaccessible after temporary shutdown of source cluster\n2007429 - \"oc describe\" and \"oc log\" commands on \"Migration resources\" tree cannot be copied after failed migration\n2022017 - CVE-2021-3948 mig-controller: incorrect namespaces handling may lead to not authorized usage of Migration Toolkit for Containers (MTC)\n\n5. Description:\n\nThis release adds the new Apache HTTP Server 2.4.37 Service Pack 10\npackages that are part of the JBoss Core Services offering. \nRefer to the Release Notes for information on the most significant bug\nfixes and enhancements included in this release. \n\nSecurity Fix(es):\n\n* httpd: Single zero byte stack overflow in mod_auth_digest\n(CVE-2020-35452)\n* httpd: mod_session NULL pointer dereference in parser (CVE-2021-26690)\n* httpd: Heap overflow in mod_session (CVE-2021-26691)\n* httpd: mod_proxy_wstunnel tunneling of non Upgraded connection\n(CVE-2019-17567)\n* httpd: MergeSlashes regression (CVE-2021-30641)\n* httpd: mod_proxy NULL pointer dereference (CVE-2020-13950)\n* jbcs-httpd24-openssl: openssl: NULL pointer dereference in\nX509_issuer_and_serial_hash() (CVE-2021-23841)\n* openssl: Read buffer overruns processing ASN.1 strings (CVE-2021-3712)\n* openssl: integer overflow in CipherUpdate (CVE-2021-23840)\n* pcre: buffer over-read in JIT when UTF is disabled (CVE-2019-20838)\n* pcre: integer overflow in libpcre (CVE-2020-14155)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. 
Solution:\n\nBefore applying this update, make sure all previously released errata\nrelevant to your system have been applied. Bugs fixed (https://bugzilla.redhat.com/):\n\n1848436 - CVE-2020-14155 pcre: Integer overflow when parsing callout numeric arguments\n1848444 - CVE-2019-20838 pcre: Buffer over-read in JIT when UTF is disabled and \\X or \\R has fixed quantifier greater than 1\n1930310 - CVE-2021-23841 openssl: NULL pointer dereference in X509_issuer_and_serial_hash()\n1930324 - CVE-2021-23840 openssl: integer overflow in CipherUpdate\n1966724 - CVE-2020-35452 httpd: Single zero byte stack overflow in mod_auth_digest\n1966729 - CVE-2021-26690 httpd: mod_session: NULL pointer dereference when parsing Cookie header\n1966732 - CVE-2021-26691 httpd: mod_session: Heap overflow via a crafted SessionHeader value\n1966738 - CVE-2020-13950 httpd: mod_proxy NULL pointer dereference\n1966740 - CVE-2019-17567 httpd: mod_proxy_wstunnel tunneling of non Upgraded connection\n1966743 - CVE-2021-30641 httpd: Unexpected URL matching with \u0027MergeSlashes OFF\u0027\n1995634 - CVE-2021-3712 openssl: Read buffer overruns processing ASN.1 strings\n\n6. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. Bugs fixed (https://bugzilla.redhat.com/):\n\n1992006 - CVE-2021-29923 golang: net: incorrect parsing of extraneous zero characters at the beginning of an IP address octet\n1995656 - CVE-2021-36221 golang: net/http/httputil: panic due to racy read of persistConn after handler panic\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nTRACING-2235 - Release RHOSDT 2.1\n\n6. 
==========================================================================\nUbuntu Security Notice USN-5425-1\nMay 17, 2022\n\npcre3 vulnerabilities\n==========================================================================\n\nA security issue affects these releases of Ubuntu and its derivatives:\n\n- Ubuntu 22.04 LTS\n- Ubuntu 21.10\n- Ubuntu 20.04 LTS\n- Ubuntu 18.04 LTS\n- Ubuntu 16.04 ESM\n- Ubuntu 14.04 ESM\n\nSummary:\n\nSeveral security issues were fixed in PCRE. \n\nSoftware Description:\n- pcre3: Perl 5 Compatible Regular Expression Library\n\nDetails:\n\nYunho Kim discovered that PCRE incorrectly handled memory when\nhandling certain regular expressions. An attacker could possibly use\nthis issue to cause applications using PCRE to expose sensitive\ninformation. This issue only affects Ubuntu 18.04 LTS,\nUbuntu 20.04 LTS, Ubuntu 21.10 and Ubuntu 22.04 LTS. (CVE-2019-20838)\n\nIt was discovered that PCRE incorrectly handled memory when\nhandling certain regular expressions. An attacker could possibly use\nthis issue to cause applications using PCRE to have unexpected\nbehavior. This issue only affects Ubuntu 14.04 ESM, Ubuntu 16.04 ESM,\nUbuntu 18.04 LTS and Ubuntu 20.04 LTS. (CVE-2020-14155)\n\nUpdate instructions:\n\nThe problem can be corrected by updating your system to the following\npackage versions:\n\nUbuntu 22.04 LTS:\n libpcre3 2:8.39-13ubuntu0.22.04.1\n\nUbuntu 21.10:\n libpcre3 2:8.39-13ubuntu0.21.10.1\n\nUbuntu 20.04 LTS:\n libpcre3 2:8.39-12ubuntu0.1\n\nUbuntu 18.04 LTS:\n libpcre3 2:8.39-9ubuntu0.1\n\nUbuntu 16.04 ESM:\n libpcre3 2:8.38-3.1ubuntu0.1~esm1\n\nUbuntu 14.04 ESM:\n libpcre3 1:8.31-2ubuntu2.3+esm1\n\nAfter a standard system update you need to restart applications using PCRE,\nsuch as the Apache HTTP server and Nginx, to make all the necessary\nchanges. Summary:\n\nRed Hat OpenShift Container Platform release 4.11.0 is now available with\nupdates to packages and images that fix several bugs and add enhancements. 
\n\nThis release includes a security update for Red Hat OpenShift Container\nPlatform 4.11. \n\nRed Hat Product Security has rated this update as having a security impact\nof Important. A Common Vulnerability Scoring System (CVSS) base score,\nwhich gives a detailed severity rating, is available for each vulnerability\nfrom the CVE link(s) in the References section. \n\n2. Description:\n\nRed Hat OpenShift Container Platform is Red Hat\u0027s cloud computing\nKubernetes application platform solution designed for on-premise or private\ncloud deployments. \n\nThis advisory contains the container images for Red Hat OpenShift Container\nPlatform 4.11.0. See the following advisory for the RPM packages for this\nrelease:\n\nhttps://access.redhat.com/errata/RHSA-2022:5068\n\nSpace precludes documenting all of the container images in this advisory. \nSee the following Release Notes documentation, which will be updated\nshortly for this release, for details about these changes:\n\nhttps://docs.openshift.com/container-platform/4.11/release_notes/ocp-4-11-release-notes.html\n\nSecurity Fix(es):\n\n* go-getter: command injection vulnerability (CVE-2022-26945)\n* go-getter: unsafe download (issue 1 of 3) (CVE-2022-30321)\n* go-getter: unsafe download (issue 2 of 3) (CVE-2022-30322)\n* go-getter: unsafe download (issue 3 of 3) (CVE-2022-30323)\n* nanoid: Information disclosure via valueOf() function (CVE-2021-23566)\n* sanitize-url: XSS (CVE-2021-23648)\n* minimist: prototype pollution (CVE-2021-44906)\n* node-fetch: exposure of sensitive information to an unauthorized actor\n(CVE-2022-0235)\n* prometheus/client_golang: Denial of service using\nInstrumentHandlerCounter (CVE-2022-21698)\n* golang: crash in a golang.org/x/crypto/ssh server (CVE-2022-27191)\n* go-getter: writes SSH credentials into logfile, exposing sensitive\ncredentials to local uses (CVE-2022-29810)\n* opencontainers: OCI manifest and index parsing confusion (CVE-2021-41190)\n\nFor more details about the 
security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\nYou may download the oc tool and use it to inspect release image metadata\nas follows:\n\n(For x86_64 architecture)\n\n$ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.11.0-x86_64\n\nThe image digest is\nsha256:300bce8246cf880e792e106607925de0a404484637627edf5f517375517d54a4\n\n(For aarch64 architecture)\n\n$ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.11.0-aarch64\n\nThe image digest is\nsha256:29fa8419da2afdb64b5475d2b43dad8cc9205e566db3968c5738e7a91cf96dfe\n\n(For s390x architecture)\n\n$ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.11.0-s390x\n\nThe image digest is\nsha256:015d6180238b4024d11dfef6751143619a0458eccfb589f2058ceb1a6359dd46\n\n(For ppc64le architecture)\n\n$ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.11.0-ppc64le\n\nThe image digest is\nsha256:5052f8d5597c6656ca9b6bfd3de521504c79917aa80feb915d3c8546241f86ca\n\nAll OpenShift Container Platform 4.11 users are advised to upgrade to these\nupdated packages and images when they are available in the appropriate\nrelease channel. To check for available updates, use the OpenShift Console\nor the CLI oc command. Instructions for upgrading a cluster are available\nat\nhttps://docs.openshift.com/container-platform/4.11/updating/updating-cluster-cli.html\n\n3. Solution:\n\nFor OpenShift Container Platform 4.11 see the following documentation,\nwhich will be updated shortly for this release, for important instructions\non how to upgrade your cluster and fully apply this asynchronous errata\nupdate:\n\nhttps://docs.openshift.com/container-platform/4.11/release_notes/ocp-4-11-release-notes.html\n\nDetails on how to access this content are available at\nhttps://docs.openshift.com/container-platform/4.11/updating/updating-cluster-cli.html\n\n4. 
Bugs fixed (https://bugzilla.redhat.com/):

1817075 - MCC & MCO don't free leader leases during shut down -> 10 minutes of leader election timeouts
1822752 - cluster-version operator stops applying manifests when blocked by a precondition check
1823143 - oc adm release extract --command, --tools doesn't pull from localregistry when given a localregistry/image
1858418 - [OCPonRHV] OpenShift installer fails when Blank template is missing in oVirt/RHV
1859153 - [AWS] An IAM error occurred occasionally during the installation phase: Invalid IAM Instance Profile name
1896181 - [ovirt] install fails: due to terraform error "Cannot run VM. VM is being updated" on vm resource
1898265 - [OCP 4.5][AWS] Installation failed: error updating LB Target Group
1902307 - [vSphere] cloud labels management via cloud provider makes nodes not ready
1905850 - `oc adm policy who-can` failed to check the `operatorcondition/status` resource
1916279 - [OCPonRHV] Sometimes terraform installation fails on -failed to fetch Cluster(another terraform bug)
1917898 - [ovirt] install fails: due to terraform error "Tag not matched: expect <fault> but got <html>" on vm resource
1918005 - [vsphere] If there are multiple port groups with the same name installation fails
1918417 - IPv6 errors after exiting crictl
1918690 - Should update the KCM resource-graph timely with the latest configure
1919980 - oVirt installer fails due to terraform error "Failed to wait for Templte(...) to become ok"
1921182 - InspectFailed: kubelet Failed to inspect image: rpc error: code = DeadlineExceeded desc = context deadline exceeded
1923536 - Image pullthrough does not pass 429 errors back to capable clients
1926975 - [aws-c2s] kube-apiserver crashloops due to missing cloud config
1928932 - deploy/route_crd.yaml in openshift/router uses deprecated v1beta1 CRD API
1932812 - Installer uses the terraform-provider in the Installer's directory if it exists
1934304 - MemoryPressure Top Pod Consumers seems to be 2x expected value
1943937 - CatalogSource incorrect parsing validation
1944264 - [ovn] CNO should gracefully terminate OVN databases
1944851 - List of ingress routes not cleaned up when routers no longer exist - take 2
1945329 - In k8s 1.21 bump conntrack 'should drop INVALID conntrack entries' tests are disabled
1948556 - Cannot read property 'apiGroup' of undefined error viewing operator CSV
1949827 - Kubelet bound to incorrect IPs, referring to incorrect NICs in 4.5.x
1957012 - Deleting the KubeDescheduler CR does not remove the corresponding deployment or configmap
1957668 - oc login does not show link to console
1958198 - authentication operator takes too long to pick up a configuration change
1958512 - No 1.25 shown in REMOVEDINRELEASE for apis audited with k8s.io/removed-release 1.25 and k8s.io/deprecated true
1961233 - Add CI test coverage for DNS availability during upgrades
1961844 - baremetal ClusterOperator installed by CVO does not have relatedObjects
1965468 - [OSP] Delete volume snapshots based on cluster ID in their metadata
1965934 - can not get new result with "Refresh off" if click "Run queries" again
1965969 - [aws] the public hosted zone id is not correct in the destroy log, while destroying a cluster which is using BYO private hosted zone.
1968253 - GCP CSI driver can provision volume with access mode ROX
1969794 - [OSP] Document how to use image registry PVC backend with custom availability zones
1975543 - [OLM] Remove stale cruft installed by CVO in earlier releases
1976111 - [tracker] multipathd.socket is missing start conditions
1976782 - Openshift registry starts to segfault after S3 storage configuration
1977100 - Pod failed to start with message "set CPU load balancing: readdirent /proc/sys/kernel/sched_domain/cpu66/domain0: no such file or directory"
1978303 - KAS pod logs show: [SHOULD NOT HAPPEN] ...failed to convert new object...CertificateSigningRequest) to smd typed: .status.conditions: duplicate entries for key [type=\"Approved\"]
1978798 - [Network Operator] Upgrade: The configuration to enable network policy ACL logging is missing on the cluster upgraded from 4.7->4.8
1979671 - Warning annotation for pods with cpu requests or limits on single-node OpenShift cluster without workload partitioning
1982737 - OLM does not warn on invalid CSV
1983056 - IP conflict while recreating Pod with fixed name
1984785 - LSO CSV does not contain disconnected annotation
1989610 - Unsupported data types should not be rendered on operand details page
1990125 - co/image-registry is degrade because ImagePrunerDegraded: Job has reached the specified backoff limit
1990384 - 502 error on "Observe -> Alerting" UI after disabled local alertmanager
1992553 - all the alert rules' annotations "summary" and "description" should comply with the OpenShift alerting guidelines
1994117 - Some hardcodes are detected at the code level in orphaned code
1994820 - machine controller doesn't send vCPU quota failed messages to cluster install logs
1995953 - Ingresscontroller change the replicas to scaleup first time will be rolling update for all the ingress pods
1996544 - AWS region ap-northeast-3 is missing in installer prompt
1996638 - Helm operator manager container restart when CR is creating&deleting
1997120 - test_recreate_pod_in_namespace fails - Timed out waiting for namespace
1997142 - OperatorHub: Filtering the OperatorHub catalog is extremely slow
1997704 - [osp][octavia lb] given loadBalancerIP is ignored when creating a LoadBalancer type svc
1999325 - FailedMount MountVolume.SetUp failed for volume "kube-api-access" : object "openshift-kube-scheduler"/"kube-root-ca.crt" not registered
1999529 - Must gather fails to gather logs for all the namespace if server doesn't have volumesnapshotclasses resource
1999891 - must-gather collects backup data even when Pods fails to be created
2000653 - Add hypershift namespace to exclude namespaces list in descheduler configmap
2002009 - IPI Baremetal, qemu-convert takes to long to save image into drive on slow/large disks
2002602 - Storageclass creation page goes blank when "Enable encryption" is clicked if there is a syntax error in the configmap
2002868 - Node exporter not able to scrape OVS metrics
2005321 - Web Terminal is not opened on Stage of DevSandbox when terminal instance is not created yet
2005694 - Removing proxy object takes up to 10 minutes for the changes to propagate to the MCO
2006067 - Objects are not valid as a React child
2006201 - ovirt-csi-driver-node pods are crashing intermittently
2007246 - Openshift Container Platform - Ingress Controller does not set allowPrivilegeEscalation in the router deployment
2007340 - Accessibility issues on topology - list view
2007611 - TLS issues with the internal registry and AWS S3 bucket
2007647 - oc adm release info --changes-from does not show changes in repos that squash-merge
2008486 - Double scroll bar shows up on dragging the task quick search to the bottom
2009345 - Overview page does not load from openshift console for some set of users after upgrading to 4.7.19
2009352 - Add image-registry usage metrics to telemeter
2009845 - Respect overrides changes during installation
2010361 - OpenShift Alerting Rules Style-Guide Compliance
2010364 - OpenShift Alerting Rules Style-Guide Compliance
2010393 - [sig-arch][Late] clients should not use APIs that are removed in upcoming releases [Suite:openshift/conformance/parallel]
2011525 - Rate-limit incoming BFD to prevent ovn-controller DoS
2011895 - Details about cloud errors are missing from PV/PVC errors
2012111 - LSO still try to find localvolumeset which is already deleted
2012969 - need to figure out why osupdatedstart to reboot is zero seconds
2013144 - Developer catalog category links could not be open in a new tab (sharing and open a deep link works fine)
2013461 - Import deployment from Git with s2i expose always port 8080 (Service and Pod template, not Route) if another Route port is selected by the user
2013734 - unable to label downloads route in openshift-console namespace
2013822 - ensure that the `container-tools` content comes from the RHAOS plashets
2014161 - PipelineRun logs are delayed and stuck on a high log volume
2014240 - Image registry uses ICSPs only when source exactly matches image
2014420 - Topology page is crashed
2014640 - Cannot change storage class of boot disk when cloning from template
2015023 - Operator objects are re-created even after deleting it
2015042 - Adding a template from the catalog creates a secret that is not owned by the TemplateInstance
2015356 - Different status shows on VM list page and details page
2015375 - PVC creation for ODF/IBM Flashsystem shows incorrect types
2015459 - [azure][openstack]When image registry configure an invalid proxy, registry pods are CrashLoopBackOff
2015800 - [IBM]Shouldn't change status.storage.bucket and status.storage.resourceKeyCRN when update sepc.stroage,ibmcos with invalid value
2016425 - Adoption controller generating invalid metadata.Labels for an already adopted Subscription resource
2016534 - externalIP does not work when egressIP is also present
2017001 - Topology context menu for Serverless components always open downwards
2018188 - VRRP ID conflict between keepalived-ipfailover and cluster VIPs
2018517 - [sig-arch] events should not repeat pathologically expand_less failures - s390x CI
2019532 - Logger object in LSO does not log source location accurately
2019564 - User settings resources (ConfigMap, Role, RB) should be deleted when a user is deleted
2020483 - Parameter $__auto_interval_period is in Period drop-down list
2020622 - e2e-aws-upi and e2e-azure-upi jobs are not working
2021041 - [vsphere] Not found TagCategory when destroying ipi cluster
2021446 - openshift-ingress-canary is not reporting DEGRADED state, even though the canary route is not available and accessible
2022253 - Web terminal view is broken
2022507 - Pods stuck in OutOfpods state after running cluster-density
2022611 - Remove BlockPools(no use case) and Object(redundat with Overview) tab on the storagesystem page for NooBaa only and remove BlockPools tab for External mode deployment
2022745 - Cluster reader is not able to list NodeNetwork* objects
2023295 - Must-gather tool gathering data from custom namespaces.
2023691 - ClusterIP internalTrafficPolicy does not work for ovn-kubernetes
2024427 - oc completion zsh doesn't auto complete
2024708 - The form for creating operational CRs is badly rendering filed names ("obsoleteCPUs" -> "Obsolete CP Us" )
2024821 - [Azure-File-CSI] need more clear info when requesting pvc with volumeMode Block
2024938 - CVE-2021-41190 opencontainers: OCI manifest and index parsing confusion
2025624 - Ingress router metrics endpoint serving old certificates after certificate rotation
2026356 - [IPI on Azure] The bootstrap machine type should be same as master
2026461 - Completed pods in Openshift cluster not releasing IP addresses and results in err: range is full unless manually deleted
2027603 - [UI] Dropdown doesn't close on it's own after arbiter zone selection on 'Capacity and nodes' page
2027613 - Users can't silence alerts from the dev console
2028493 - OVN-migration failed - ovnkube-node: error waiting for node readiness: timed out waiting for the condition
2028532 - noobaa-pg-db-0 pod stuck in Init:0/2
2028821 - Misspelled label in ODF management UI - MCG performance view
2029438 - Bootstrap node cannot resolve api-int because NetworkManager replaces resolv.conf
2029470 - Recover from suddenly appearing old operand revision WAS: kube-scheduler-operator test failure: Node's not achieving new revision
2029797 - Uncaught exception: ResizeObserver loop limit exceeded
2029835 - CSI migration for vSphere: Inline-volume tests failing
2030034 - prometheusrules.openshift.io: dial tcp: lookup prometheus-operator.openshift-monitoring.svc on 172.30.0.10:53: no such host
2030530 - VM created via customize wizard has single quotation marks surrounding its password
2030733 - wrong IP selected to connect to the nodes when ExternalCloudProvider enabled
2030776 - e2e-operator always uses quay master images during presubmit tests
2032559 - CNO allows migration to dual-stack in unsupported configurations
2032717 - Unable to download ignition after coreos-installer install --copy-network
2032924 - PVs are not being cleaned up after PVC deletion
2033482 - [vsphere] two variables in tf are undeclared and get warning message during installation
2033575 - monitoring targets are down after the cluster run for more than 1 day
2033711 - IBM VPC operator needs e2e csi tests for ibmcloud
2033862 - MachineSet is not scaling up due to an OpenStack error trying to create multiple ports with the same MAC address
2034147 - OpenShift VMware IPI Installation fails with Resource customization when corespersocket is unset and vCPU count is not a multiple of 4
2034296 - Kubelet and Crio fails to start during upgrde to 4.7.37
2034411 - [Egress Router] No NAT rules for ipv6 source and destination created in ip6tables-save
2034688 - Allow Prometheus/Thanos to return 401 or 403 when the request isn't authenticated
2034958 - [sig-network] Conntrack should be able to preserve UDP traffic when initial unready endpoints get ready
2035005 - MCD is not always removing in progress taint after a successful update
2035334 - [RFE] [OCPonRHV] Provision machines with preallocated disks
2035899 - Operator-sdk run bundle doesn't support arm64 env
2036202 - Bump podman to >= 3.3.0 so that setup of multiple credentials for a single registry which can be distinguished by their path will work
2036594 - [MAPO] Machine goes to failed state due to a momentary error of the cluster etcd
2036948 - SR-IOV Network Device Plugin should handle offloaded VF instead of supporting only PF
2037190 - dns operator status flaps between True/False/False and True/True/(False|True) after updating dnses.operator.openshift.io/default
2037447 - Ingress Operator is not closing TCP connections.
2037513 - I/O metrics from the Kubernetes/Compute Resources/Cluster Dashboard show as no datapoints found
2037542 - Pipeline Builder footer is not sticky and yaml tab doesn't use full height
2037610 - typo for the Terminated message from thanos-querier pod description info
2037620 - Upgrade playbook should quit directly when trying to upgrade RHEL-7 workers to 4.10
2037625 - AppliedClusterResourceQuotas can not be shown on project overview
2037626 - unable to fetch ignition file when scaleup rhel worker nodes on cluster enabled Tang disk encryption
2037628 - Add test id to kms flows for automation
2037721 - PodDisruptionBudgetAtLimit alert fired in SNO cluster
2037762 - Wrong ServiceMonitor definition is causing failure during Prometheus configuration reload and preventing changes from being applied
2037841 - [RFE] use /dev/ptp_hyperv on Azure/AzureStack
2038115 - Namespace and application bar is not sticky anymore
2038244 - Import from git ignore the given servername and could not validate On-Premises GitHub and BitBucket installations
2038405 - openshift-e2e-aws-workers-rhel-workflow in CI step registry broken
2038774 - IBM-Cloud OVN IPsec fails, IKE UDP ports and ESP protocol not in security group
2039135 - the error message is not clear when using "opm index prune" to prune a file-based index image
2039161 - Note about token for encrypted PVCs should be removed when only cluster wide encryption checkbox is selected
2039253 - ovnkube-node crashes on duplicate endpoints
2039256 - Domain validation fails when TLD contains a digit.
2039277 - Topology list view items are not highlighted on keyboard navigation
2039462 - Application tab in User Preferences dropdown menus are too wide.
2039477 - validation icon is missing from Import from git
2039589 - The toolbox command always ignores [command] the first time
2039647 - Some developer perspective links are not deep-linked causes developer to sometimes delete/modify resources in the wrong project
2040180 - Bug when adding a new table panel to a dashboard for OCP UI with only one value column
2040195 - Ignition fails to enable systemd units with backslash-escaped characters in their names
2040277 - ThanosRuleNoEvaluationFor10Intervals alert description is wrong
2040488 - OpenShift-Ansible BYOH Unit Tests are Broken
2040635 - CPU Utilisation is negative number for "Kubernetes / Compute Resources / Cluster" dashboard
2040654 - 'oc adm must-gather -- some_script' should exit with same non-zero code as the failed 'some_script' exits
2040779 - Nodeport svc not accessible when the backend pod is on a window node
2040933 - OCP 4.10 nightly build will fail to install if multiple NICs are defined on KVM nodes
2041133 - 'oc explain route.status.ingress.conditions' shows type 'Currently only Ready' but actually is 'Admitted'
2041454 - Garbage values accepted for `--reference-policy` in `oc import-image` without any error
2041616 - Ingress operator tries to manage DNS of additional ingresscontrollers that are not under clusters basedomain, which can't work
2041769 - Pipeline Metrics page not showing data for normal user
2041774 - Failing git detection should not recommend Devfiles as import strategy
2041814 - The KubeletConfigController wrongly process multiple confs for a pool
2041940 - Namespace pre-population not happening till a Pod is created
2042027 - Incorrect feedback for "oc label pods --all"
2042348 - Volume ID is missing in output message when expanding volume which is not mounted.
2042446 - CSIWithOldVSphereHWVersion alert recurring despite upgrade to vmx-15
2042501 - use lease for leader election
2042587 - ocm-operator: Improve reconciliation of CA ConfigMaps
2042652 - Unable to deploy hw-event-proxy operator
2042838 - The status of container is not consistent on Container details and pod details page
2042852 - Topology toolbars are unaligned to other toolbars
2042999 - A pod cannot reach kubernetes.default.svc.cluster.local cluster IP
2043035 - Wrong error code provided when request contains invalid argument
2043068 - <x> available of <y> text disappears in Utilization item if x is 0
2043080 - openshift-installer intermittent failure on AWS with Error: InvalidVpcID.NotFound: The vpc ID 'vpc-123456789' does not exist
2043094 - ovnkube-node not deleting stale conntrack entries when endpoints go away
2043118 - Host should transition through Preparing when HostFirmwareSettings changed
2043132 - Add a metric when vsphere csi storageclass creation fails
2043314 - `oc debug node` does not meet compliance requirement
2043336 - Creating multi SriovNetworkNodePolicy cause the worker always be draining
2043428 - Address Alibaba CSI driver operator review comments
2043533 - Update ironic, inspector, and ironic-python-agent to latest bugfix release
2043672 - [MAPO] root volumes not working
2044140 - When 'oc adm upgrade --to-image ...' rejects an update as not recommended, it should mention --allow-explicit-upgrade
2044207 - [KMS] The data in the text box does not get cleared on switching the authentication method
2044227 - Test Managed cluster should only include cluster daemonsets that have maxUnavailable update of 10 or 33 percent fails
2044412 - Topology list misses separator lines and hover effect let the list jump 1px
2044421 - Topology list does not allow selecting an application group anymore
2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor
2044803 - Unify button text style on VM tabs
2044824 - Failing test in periodics: [sig-network] Services should respect internalTrafficPolicy=Local Pod and Node, to Pod (hostNetwork: true) [Feature:ServiceInternalTrafficPolicy] [Skipped:Network/OVNKubernetes] [Suite:openshift/conformance/parallel] [Suite:k8s]
2045065 - Scheduled pod has nodeName changed
2045073 - Bump golang and build images for local-storage-operator
2045087 - Failed to apply sriov policy on intel nics
2045551 - Remove enabled FeatureGates from TechPreviewNoUpgrade
2045559 - API_VIP moved when kube-api container on another master node was stopped
2045577 - [ocp 4.9 | ovn-kubernetes] ovsdb_idl|WARN|transaction error: {"details":"cannot delete Datapath_Binding row 29e48972-xxxx because of 2 remaining reference(s)","error":"referential integrity violation
2045872 - SNO: cluster-policy-controller failed to start due to missing serving-cert/tls.crt
2045880 - CVE-2022-21698 prometheus/client_golang: Denial of service using InstrumentHandlerCounter
2046133 - [MAPO]IPI proxy installation failed
2046156 - Network policy: preview of affected pods for non-admin shows empty popup
2046157 - Still uses pod-security.admission.config.k8s.io/v1alpha1 in admission plugin config
2046191 - Opeartor pod is missing correct qosClass and priorityClass
2046277 - openshift-installer intermittent failure on AWS with "Error: Provider produced inconsistent result after apply" when creating the module.vpc.aws_subnet.private_subnet[0] resource
2046319 - oc debug cronjob command failed with error "unable to extract pod template from type *v1.CronJob".
\n2046435 - Better Devfile Import Strategy support in the \u0027Import from Git\u0027 flow\n2046496 - Awkward wrapping of project toolbar on mobile\n2046497 - Re-enable TestMetricsEndpoint test case in console operator e2e tests\n2046498 - \"All Projects\" and \"all applications\" use different casing on topology page\n2046591 - Auto-update boot source is not available while create new template from it\n2046594 - \"Requested template could not be found\" while creating VM from user-created template\n2046598 - Auto-update boot source size unit is byte on customize wizard\n2046601 - Cannot create VM from template\n2046618 - Start last run action should contain current user name in the started-by annotation of the PLR\n2046662 - Should upgrade the go version to be 1.17 for example go operator memcached-operator\n2047197 - Sould upgrade the operator_sdk.util version to \"0.4.0\" for the \"osdk_metric\" module\n2047257 - [CP MIGRATION] Node drain failure during control plane node migration\n2047277 - Storage status is missing from status card of virtualization overview\n2047308 - Remove metrics and events for master port offsets\n2047310 - Running VMs per template card needs empty state when no VMs exist\n2047320 - New route annotation to show another URL or hide topology URL decorator doesn\u0027t work for Knative Services\n2047335 - \u0027oc get project\u0027 caused \u0027Observed a panic: cannot deep copy core.NamespacePhase\u0027 when AllRequestBodies is used\n2047362 - Removing prometheus UI access breaks origin test\n2047445 - ovs-configure mis-detecting the ipv6 status on IPv4 only cluster causing Deployment failure\n2047670 - Installer should pre-check that the hosted zone is not associated with the VPC and throw the error message. 
\n2047702 - Issue described on bug #2013528 reproduced: mapi_current_pending_csr is always set to 1 on OpenShift Container Platform 4.8\n2047710 - [OVN] ovn-dbchecker CrashLoopBackOff and sbdb jsonrpc unix socket receive error\n2047732 - [IBM]Volume is not deleted after destroy cluster\n2047741 - openshift-installer intermittent failure on AWS with \"Error: Provider produced inconsistent result after apply\" when creating the module.masters.aws_network_interface.master[1] resource\n2047790 - [sig-network][Feature:Router] The HAProxy router should override the route host for overridden domains with a custom value [Skipped:Disconnected] [Suite:openshift/conformance/parallel]\n2047799 - release-openshift-ocp-installer-e2e-aws-upi-4.9\n2047870 - Prevent redundant queries of BIOS settings in HostFirmwareController\n2047895 - Fix architecture naming in oc adm release mirror for aarch64\n2047911 - e2e: Mock CSI tests fail on IBM ROKS clusters\n2047913 - [sig-network][Feature:Router] The HAProxy router should override the route host for overridden domains with a custom value [Skipped:Disconnected] [Suite:openshift/conformance/parallel]\n2047925 - [FJ OCP4.10 Bug]: IRONIC_KERNEL_PARAMS does not contain coreos_kernel_params during iPXE boot\n2047935 - [4.11] Bootimage bump tracker\n2047998 - [alicloud] CCM deploys alibaba-cloud-controller-manager from quay.io/openshift/origin-*\n2048059 - Service Level Agreement (SLA) always show \u0027Unknown\u0027\n2048067 - [IPI on Alibabacloud] \"Platform Provisioning Check\" tells \u0027\"ap-southeast-6\": enhanced NAT gateway is not supported\u0027, which seems false\n2048186 - Image registry operator panics when finalizes config deletion\n2048214 - Can not push images to image-registry when enabling KMS encryption in AlibabaCloud\n2048219 - MetalLB: User should not be allowed add same bgp advertisement twice in BGP address pool\n2048221 - Capitalization of titles in the VM details page is inconsistent. 
\n2048222 - [AWS GovCloud] Cluster can not be installed on AWS GovCloud regions via terminal interactive UI. \n2048276 - Cypress E2E tests fail due to a typo in test-cypress.sh\n2048333 - prometheus-adapter becomes inaccessible during rollout\n2048352 - [OVN] node does not recover after NetworkManager restart, NotReady and unreachable\n2048442 - [KMS] UI does not have option to specify kube auth path and namespace for cluster wide encryption\n2048451 - Custom serviceEndpoints in install-config are reported to be unreachable when environment uses a proxy\n2048538 - Network policies are not implemented or updated by OVN-Kubernetes\n2048541 - incorrect rbac check for install operator quick starts\n2048563 - Leader election conventions for cluster topology\n2048575 - IP reconciler cron job failing on single node\n2048686 - Check MAC address provided on the install-config.yaml file\n2048687 - All bare metal jobs are failing now due to End of Life of centos 8\n2048793 - Many Conformance tests are failing in OCP 4.10 with Kuryr\n2048803 - CRI-O seccomp profile out of date\n2048824 - [IBMCloud] ibm-vpc-block-csi-node does not specify an update strategy, only resource requests, or priority class\n2048841 - [ovn] Missing lr-policy-list and snat rules for egressip when new pods are added\n2048955 - Alibaba Disk CSI Driver does not have CI\n2049073 - AWS EFS CSI driver should use the trusted CA bundle when cluster proxy is configured\n2049078 - Bond CNI: Failed to attach Bond NAD to pod\n2049108 - openshift-installer intermittent failure on AWS with \u0027Error: Error waiting for NAT Gateway (nat-xxxxx) to become available\u0027\n2049117 - e2e-metal-ipi-serial-ovn-ipv6 is failing frequently\n2049133 - oc adm catalog mirror throws \u0027missing signature key\u0027 error when using file://local/index\n2049142 - Missing \"app\" label\n2049169 - oVirt CSI driver should use the trusted CA bundle when cluster proxy is configured\n2049234 - ImagePull fails with error \"unable to pull 
manifest from example.com/busy.box:v5 invalid reference format\"\n2049410 - external-dns-operator creates provider section, even when not requested\n2049483 - Sidepanel for Connectors/workloads in topology shows invalid tabs\n2049613 - MTU migration on SDN IPv4 causes API alerts\n2049671 - system:serviceaccount:openshift-cluster-csi-drivers:aws-ebs-csi-driver-operator trying to GET and DELETE /api/v1/namespaces/openshift-cluster-csi-drivers/configmaps/kube-cloud-config which does not exist\n2049687 - superfluous apirequestcount entries in audit log\n2049775 - cloud-provider-config change not applied when ExternalCloudProvider enabled\n2049787 - (dummy bug) ovn-kubernetes ExternalTrafficPolicy still SNATs\n2049832 - ContainerCreateError when trying to launch large (\u003e500) numbers of pods across nodes\n2049872 - cluster storage operator AWS credentialsrequest lacks KMS privileges\n2049889 - oc new-app --search nodejs warns about access to sample content on quay.io\n2050005 - Plugin module IDs can clash with console module IDs causing runtime errors\n2050011 - Observe \u003e Metrics page: Timespan text input and dropdown do not align\n2050120 - Missing metrics in kube-state-metrics\n2050146 - Installation on PSI fails with: \u0027openstack platform does not have the required standard-attr-tag network extension\u0027\n2050173 - [aws-ebs-csi-driver] Merge upstream changes since v1.2.0\n2050180 - [aws-efs-csi-driver] Merge upstream changes since v1.3.2\n2050300 - panic in cluster-storage-operator while updating status\n2050332 - Malformed ClusterClaim lifetimes cause the clusterclaims-controller to silently fail to reconcile all clusterclaims\n2050335 - azure-disk failed to mount with error special device does not exist\n2050345 - alert data for burn budget needs to be updated to prevent regression\n2050407 - revert \"force cert rotation every couple days for development\" in 4.11\n2050409 - ip-reconcile job is failing consistently\n2050452 - Update osType and 
hardware version used by RHCOS OVA to indicate it is a RHEL 8 guest\n2050466 - machine config update with invalid container runtime config should be more robust\n2050637 - Blog Link not re-directing to the intented website in the last modal in the Dev Console Onboarding Tour\n2050698 - After upgrading the cluster the console still show 0 of N, 0% progress for worker nodes\n2050707 - up test for prometheus pod look to far in the past\n2050767 - Vsphere upi tries to access vsphere during manifests generation phase\n2050853 - CVE-2021-23566 nanoid: Information disclosure via valueOf() function\n2050882 - Crio appears to be coredumping in some scenarios\n2050902 - not all resources created during import have common labels\n2050946 - Cluster-version operator fails to notice TechPreviewNoUpgrade featureSet change after initialization-lookup error\n2051320 - Need to build ose-aws-efs-csi-driver-operator-bundle-container image for 4.11\n2051333 - [aws] records in public hosted zone and BYO private hosted zone were not deleted. 
\n2051377 - Unable to switch vfio-pci to netdevice in policy\n2051378 - Template wizard is crashed when there are no templates existing\n2051423 - migrate loadbalancers from amphora to ovn not working\n2051457 - [RFE] PDB for cloud-controller-manager to avoid going too many replicas down\n2051470 - prometheus: Add validations for relabel configs\n2051558 - RoleBinding in project without subject is causing \"Project access\" page to fail\n2051578 - Sort is broken for the Status and Version columns on the Cluster Settings \u003e ClusterOperators page\n2051583 - sriov must-gather image doesn\u0027t work\n2051593 - Summary Interval Hardcoded in PTP Operator if Set in the Global Body Instead of Command Line\n2051611 - Remove Check which enforces summary_interval must match logSyncInterval\n2051642 - Remove \"Tech-Preview\" Label for the Web Terminal GA release\n2051657 - Remove \u0027Tech preview\u0027 from minnimal deployment Storage System creation\n2051718 - MetaLLB: Validation Webhook: BGPPeer hold time is allowed to be set to less than 3s\n2051722 - MetalLB: BGPPeer object does not have ability to set ebgpMultiHop\n2051881 - [vSphere CSI driver Operator] RWX volumes counts metrics `vsphere_rwx_volumes_total` not valid\n2051954 - Allow changing of policyAuditConfig ratelimit post-deployment\n2051969 - Need to build local-storage-operator-metadata-container image for 4.11\n2051985 - An APIRequestCount without dots in the name can cause a panic\n2052016 - MetalLB: Webhook Validation: Two BGPPeers instances can have different router ID set. 
\n2052034 - Can\u0027t start correct debug pod using pod definition yaml in OCP 4.8\n2052055 - Whereabouts should implement client-go 1.22+\n2052056 - Static pod installer should throttle creating new revisions\n2052071 - local storage operator metrics target down after upgrade\n2052095 - Infinite OAuth redirect loop post-upgrade to 4.10.0-rc.1\n2052270 - FSyncControllerDegraded has \"treshold\" -\u003e \"threshold\" typos\n2052309 - [IBM Cloud] ibm-vpc-block-csi-controller does not specify an update strategy, priority class, or only resource requests\n2052332 - Probe failures and pod restarts during 4.7 to 4.8 upgrade\n2052393 - Failed to scaleup RHEL machine against OVN cluster due to jq tool is required by configure-ovs.sh\n2052398 - 4.9 to 4.10 upgrade fails for ovnkube-masters\n2052415 - Pod density test causing problems when using kube-burner\n2052513 - Failing webhooks will block an upgrade to 4.10 mid-way through the upgrade. \n2052578 - Create new app from a private git repository using \u0027oc new app\u0027 with basic auth does not work. 
\n2052595 - Remove dev preview badge from IBM FlashSystem deployment windows\n2052618 - Node reboot causes duplicate persistent volumes\n2052671 - Add Sprint 214 translations\n2052674 - Remove extra spaces\n2052700 - kube-controller-manager should use configmap lease\n2052701 - kube-scheduler should use configmap lease\n2052814 - go fmt fails in OSM after migration to go 1.17\n2052840 - IMAGE_BUILDER=docker make test-e2e-operator-ocp runs with podman instead of docker\n2052953 - Observe dashboard always opens for last viewed workload instead of the selected one\n2052956 - Installing virtualization operator duplicates the first action on workloads in topology\n2052975 - High cpu load on Juniper Qfx5120 Network switches after upgrade to Openshift 4.8.26\n2052986 - Console crashes when Mid cycle hook in Recreate strategy(edit deployment/deploymentConfig) selects Lifecycle strategy as \"Tags the current image as an image stream tag if the deployment succeeds\"\n2053006 - [ibm]Operator storage PROGRESSING and DEGRADED is true during fresh install for ocp4.11\n2053104 - [vSphere CSI driver Operator] hw_version_total metric update wrong value after upgrade nodes hardware version from `vmx-13` to `vmx-15`\n2053112 - nncp status is unknown when nnce is Progressing\n2053118 - nncp Available condition reason should be exposed in `oc get`\n2053168 - Ensure the core dynamic plugin SDK package has correct types and code\n2053205 - ci-openshift-cluster-network-operator-master-e2e-agnostic-upgrade is failing most of the time\n2053304 - Debug terminal no longer works in admin console\n2053312 - requestheader IDP test doesn\u0027t wait for cleanup, causing high failure rates\n2053334 - rhel worker scaleup playbook failed because missing some dependency of podman\n2053343 - Cluster Autoscaler not scaling down nodes which seem to qualify for scale-down\n2053491 - nmstate interprets interface names as float64 and subsequently crashes on state update\n2053501 - Git import detection does 
not happen for private repositories\n2053582 - inability to detect static lifecycle failure\n2053596 - [IBM Cloud] Storage IOPS limitations and lack of IPI ETCD deployment options trigger leader election during cluster initialization\n2053609 - LoadBalancer SCTP service leaves stale conntrack entry that causes issues if service is recreated\n2053622 - PDB warning alert when CR replica count is set to zero\n2053685 - Topology performance: Immutable .toJSON consumes a lot of CPU time when rendering a large topology graph (~100 nodes)\n2053721 - When using RootDeviceHint rotational setting the host can fail to provision\n2053922 - [OCP 4.8][OVN] pod interface: error while waiting on OVS.Interface.external-ids\n2054095 - [release-4.11] Gather images.config.openshift.io cluster resource definition\n2054197 - The ProjectHelmChartRepository schema has merged but has not been initialized in the cluster yet\n2054200 - Custom created services in openshift-ingress removed even though the services are not of type LoadBalancer\n2054238 - console-master-e2e-gcp-console is broken\n2054254 - vSphere test failure: [Serial] [sig-auth][Feature:OAuthServer] [RequestHeaders] [IdP] test RequestHeaders IdP [Suite:openshift/conformance/serial]\n2054285 - Services other than knative service also shows as KSVC in add subscription/trigger modal\n2054319 - must-gather | gather_metallb_logs can\u0027t detect metallb pod\n2054351 - Restart of ptp4l/phc2sys on change of PTPConfig generates more than one times, socket error in event frame work\n2054385 - redhat-operator index image build failed with AMQ brew build - amq-interconnect-operator-metadata-container-1.10.13\n2054564 - DPU network operator 4.10 branch needs to sync with master\n2054630 - cancel create silence from kebab menu of alerts page will navigate to the previous page\n2054693 - Error deploying HorizontalPodAutoscaler with oc new-app command in OpenShift 4\n2054701 - [MAPO] Events are not created for MAPO machines\n2054705 - 
[tracker] nf_reinject calls nf_queue_entry_free on an already freed entry-\u003estate\n2054735 - Bad link in CNV console\n2054770 - IPI baremetal deployment metal3 pod crashes when using capital letters in hosts bootMACAddress\n2054787 - SRO controller goes to CrashLoopBackOff status when the pull-secret does not have the correct permissions\n2054950 - A large number is showing on disk size field\n2055305 - Thanos Querier high CPU and memory usage till OOM\n2055386 - MetalLB changes the shared external IP of a service upon updating the externalTrafficPolicy definition\n2055433 - Unable to create br-ex as gateway is not found\n2055470 - Ingresscontroller LB scope change behaviour differs for different values of aws-load-balancer-internal annotation\n2055492 - The default YAML on vm wizard is not latest\n2055601 - installer did not destroy *.app dns record in an IPI on ASH install\n2055702 - Enable Serverless tests in CI\n2055723 - CCM operator doesn\u0027t deploy resources after enabling TechPreviewNoUpgrade feature set. 
\n2055729 - NodePerfCheck fires and stays active on momentary high latency\n2055814 - Custom dynamic extension point causes runtime and compile time error\n2055861 - cronjob collect-profiles failure leads node to reach OutOfpods status\n2055980 - [dynamic SDK][internal] console plugin SDK does not support table actions\n2056454 - Implement preallocated disks for oVirt in the cluster API provider\n2056460 - Implement preallocated disks for oVirt in the OCP installer\n2056496 - If image does not exist for builder image then upload jar form crashes\n2056519 - unable to install IPI PRIVATE OpenShift cluster in Azure due to organization policies\n2056607 - Running kubernetes-nmstate handler e2e tests stuck on OVN clusters\n2056752 - Better to name the oc-mirror version info with more information like the `oc version --client`\n2056802 - \"enforcedLabelLimit|enforcedLabelNameLengthLimit|enforcedLabelValueLengthLimit\" do not take effect\n2056841 - [UI] [DR] Web console update is available pop-up is seen multiple times on Hub cluster where ODF operator is not installed and unnecessarily it pops up on the Managed cluster as well where ODF operator is installed\n2056893 - incorrect warning for --to-image in oc adm upgrade help\n2056967 - MetalLB: speaker metrics is not updated when deleting a service\n2057025 - Resource requests for the init-config-reloader container of prometheus-k8s-* pods are too high\n2057054 - SDK: k8s methods resolves into Response instead of the Resource\n2057079 - [cluster-csi-snapshot-controller-operator] CI failure: events should not repeat pathologically\n2057101 - oc commands working with images print an incorrect and inappropriate warning\n2057160 - configure-ovs selects wrong interface on reboot\n2057183 - OperatorHub: Missing \"valid subscriptions\" filter\n2057251 - response code for Pod count graph changed from 422 to 200 periodically for about 30 minutes if pod is rescheduled\n2057358 - [Secondary Scheduler] - cannot build bundle index 
image using the secondary scheduler operator bundle\n2057387 - [Secondary Scheduler] - olm.skiprange, com.redhat.openshift.versions is incorrect and no minkubeversion\n2057403 - CMO logs show forbidden: User \"system:serviceaccount:openshift-monitoring:cluster-monitoring-operator\" cannot get resource \"replicasets\" in API group \"apps\" in the namespace \"openshift-monitoring\"\n2057495 - Alibaba Disk CSI driver does not provision small PVCs\n2057558 - Marketplace operator polls too frequently for cluster operator status changes\n2057633 - oc rsync reports misleading error when container is not found\n2057642 - ClusterOperator status.conditions[].reason \"etcd disk metrics exceeded...\" should be a CamelCase slug\n2057644 - FSyncControllerDegraded latches True, even after fsync latency recovers on all members\n2057696 - Removing console still blocks OCP install from completing\n2057762 - ingress operator should report Upgradeable False to remind user before upgrade to 4.10 when Non-SAN certs are used\n2057832 - expr for record rule: \"cluster:telemetry_selected_series:count\" is improper\n2057967 - KubeJobCompletion does not account for possible job states\n2057990 - Add extra debug information to image signature workflow test\n2057994 - SRIOV-CNI failed to load netconf: LoadConf(): failed to get VF information\n2058030 - On OCP 4.10+ using OVNK8s on BM IPI, nodes register as localhost.localdomain\n2058217 - [vsphere-problem-detector-operator] \u0027vsphere_rwx_volumes_total\u0027 metric name make confused\n2058225 - openshift_csi_share_* metrics are not found from telemeter server\n2058282 - Websockets stop updating during cluster upgrades\n2058291 - CI builds should have correct version of Kube without needing to push tags every time\n2058368 - Openshift OVN-K got restarted multiple times with the error \" ovsdb-server/memory-trim-on-compaction on\u0027\u0027 failed: exit status 1 and \" ovndbchecker.go:118] unable to turn on memory trimming for SB DB, stderr \" 
, cluster unavailable\n2058370 - e2e-aws-driver-toolkit CI job is failing\n2058421 - 4.9.23-s390x-machine-os-content manifest invalid when mirroring content for disconnected install\n2058424 - ConsolePlugin proxy always passes Authorization header even if `authorize` property is omitted or false\n2058623 - Bootstrap server dropdown menu in Create Event Source- KafkaSource form is empty even if it\u0027s created\n2058626 - Multiple Azure upstream kube fsgroupchangepolicy tests are permafailing expecting gid \"1000\" but getting \"root\"\n2058671 - whereabouts IPAM CNI ip-reconciler cronjob specification requires hostnetwork, api-int lb usage \u0026 proper backoff\n2058692 - [Secondary Scheduler] Creating secondaryscheduler instance fails with error \"key failed with : secondaryschedulers.operator.openshift.io \"secondary-scheduler\" not found\"\n2059187 - [Secondary Scheduler] - key failed with : serviceaccounts \"secondary-scheduler\" is forbidden\n2059212 - [tracker] Backport https://github.com/util-linux/util-linux/commit/eab90ef8d4f66394285e0cff1dfc0a27242c05aa\n2059213 - ART cannot build installer images due to missing terraform binaries for some architectures\n2059338 - A fully upgraded 4.10 cluster defaults to HW-13 hardware version even if HW-15 is default (and supported)\n2059490 - The operator image in CSV file of the ART DPU network operator bundle is incorrect\n2059567 - vMedia based IPI installation of OpenShift fails on Nokia servers due to issues with virtual media attachment and boot source override\n2059586 - (release-4.11) Insights operator doesn\u0027t reconcile clusteroperator status condition messages\n2059654 - Dynamic demo plugin proxy example out of date\n2059674 - Demo plugin fails to build\n2059716 - cloud-controller-manager flaps operator version during 4.9 -\u003e 4.10 update\n2059791 - [vSphere CSI driver Operator] didn\u0027t update \u0027vsphere_csi_driver_error\u0027 metric value when fixed the error manually\n2059840 - [LSO]Could not 
gather logs for pod diskmaker-discovery and diskmaker-manager\n2059943 - MetalLB: Move CI config files to metallb repo from dev-scripts repo\n2060037 - Configure logging level of FRR containers\n2060083 - CMO doesn\u0027t react to changes in clusteroperator console\n2060091 - CMO produces invalid alertmanager statefulset if console cluster .status.consoleURL is unset\n2060133 - [OVN RHEL upgrade] could not find IP addresses: failed to lookup link br-ex: Link not found\n2060147 - RHEL8 Workers Need to Ensure libseccomp is up to date at install time\n2060159 - LGW: External-\u003eService of type ETP=Cluster doesn\u0027t go to the node\n2060329 - Detect unsupported amount of workloads before rendering a lazy or crashing topology\n2060334 - Azure VNET lookup fails when the NIC subnet is in a different resource group\n2060361 - Unable to enumerate NICs due to missing the \u0027primary\u0027 field due to security restrictions\n2060406 - Test \u0027operators should not create watch channels very often\u0027 fails\n2060492 - Update PtpConfigSlave source-crs to use network_transport L2 instead of UDPv4\n2060509 - Incorrect installation of ibmcloud vpc csi driver in IBM Cloud ROKS 4.10\n2060532 - LSO e2e tests are run against default image and namespace\n2060534 - openshift-apiserver pod in crashloop due to unable to reach kubernetes svc ip\n2060549 - ErrorAddingLogicalPort: duplicate IP found in ECMP Pod route cache!\n2060553 - service domain can\u0027t be resolved when networkpolicy is used in OCP 4.10-rc\n2060583 - Remove Console internal-kubevirt plugin SDK package\n2060605 - Broken access to public images: Unable to connect to the server: no basic auth credentials\n2060617 - IBMCloud destroy DNS regex not strict enough\n2060687 - Azure Ci: SubscriptionDoesNotSupportZone - does not support availability zones at location \u0027westus\u0027\n2060697 - [AWS] partitionNumber cannot work for specifying Partition number\n2060714 - [DOCS] Change source_labels to sourceLabels in 
\"Configuring remote write storage\" section\n2060837 - [oc-mirror] Catalog merging error when two or more bundles does not have a set Replace field\n2060894 - Preceding/Trailing Whitespaces In Form Elements on the add page\n2060924 - Console white-screens while using debug terminal\n2060968 - Installation failing due to ironic-agent.service not starting properly\n2060970 - Bump recommended FCOS to 35.20220213.3.0\n2061002 - Conntrack entry is not removed for LoadBalancer IP\n2061301 - Traffic Splitting Dialog is Confusing With Only One Revision\n2061303 - Cachito request failure with vendor directory is out of sync with go.mod/go.sum\n2061304 - workload info gatherer - don\u0027t serialize empty images map\n2061333 - White screen for Pipeline builder page\n2061447 - [GSS] local pv\u0027s are in terminating state\n2061496 - etcd RecentBackup=Unknown ControllerStarted contains no message string\n2061527 - [IBMCloud] infrastructure asset missing CloudProviderType\n2061544 - AzureStack is hard-coded to use Standard_LRS for the disk type\n2061549 - AzureStack install with internal publishing does not create api DNS record\n2061611 - [upstream] The marker of KubeBuilder doesn\u0027t work if it is close to the code\n2061732 - Cinder CSI crashes when API is not available\n2061755 - Missing breadcrumb on the resource creation page\n2061833 - A single worker can be assigned to multiple baremetal hosts\n2061891 - [IPI on IBMCLOUD] missing \u0027br-sao\u0027 
region in openshift installer\n2061916 - mixed ingress and egress policies can result in half-isolated pods\n2061918 - Topology Sidepanel style is broken\n2061919 - Egress Ip entry stays on node\u0027s primary NIC post deletion from hostsubnet\n2062007 - MCC bootstrap command lacks template flag\n2062126 - IPfailover pod is crashing during creation showing keepalived_script doesn\u0027t exist\n2062151 - Add RBAC for \u0027infrastructures\u0027 to operator bundle\n2062355 - kubernetes-nmstate resources and logs not included in must-gathers\n2062459 - Ingress pods scheduled on the same node\n2062524 - [Kamelet Sink] Topology crashes on click of Event sink node if the resource is created source to Uri over ref\n2062558 - Egress IP with openshift sdn in not functional on worker node. \n2062568 - CVO does not trigger new upgrade again after fail to update to unavailable payload\n2062645 - configure-ovs: don\u0027t restart networking if not necessary\n2062713 - Special Resource Operator(SRO) - No sro_used_nodes metric\n2062849 - hw event proxy is not binding on ipv6 local address\n2062920 - Project selector is too tall with only a few projects\n2062998 - AWS GovCloud regions are recognized as the unknown regions\n2063047 - Configuring a full-path query log file in CMO breaks Prometheus with the latest version of the operator\n2063115 - ose-aws-efs-csi-driver has invalid dependency in go.mod\n2063164 - metal-ipi-ovn-ipv6 Job Permafailing and Blocking OpenShift 4.11 Payloads: insights operator is not available\n2063183 - DefragDialTimeout is set to low for large scale OpenShift Container Platform - Cluster\n2063194 - cluster-autoscaler-default will fail when automated etcd defrag is running on large scale OpenShift Container Platform 4 - Cluster\n2063321 - [OVN]After reboot egress node, lr-policy-list was not correct, some duplicate records or missed internal IPs\n2063324 - MCO template output directories created with wrong mode causing render failure in unprivileged 
container environments\n2063375 - ptp operator upgrade from 4.9 to 4.10 stuck at pending due to service account requirements not met\n2063414 - on OKD 4.10, when image-registry is enabled, the /etc/hosts entry is missing on some nodes\n2063699 - Builds - Builds - Logs: i18n misses. \n2063708 - Builds - Builds - Logs: translation correction needed. \n2063720 - Metallb EBGP neighbor stuck in active until adding ebgp-multihop (directly connected neighbors)\n2063732 - Workloads - StatefulSets : I18n misses\n2063747 - When building a bundle, the push command fails because it passes a redundant \"IMG=\" on the CLI\n2063753 - User Preferences - Language - Language selection : Page refresh required to change the UI into selected Language. \n2063756 - User Preferences - Applications - Insecure traffic : i18n misses\n2063795 - Remove go-ovirt-client go.mod replace directive\n2063829 - During an IPI install with the 4.10.4 installer on vSphere, getting \"Check\": platform.vsphere.network: Invalid value: \"VLAN_3912\": unable to find network provided\"\n2063831 - etcd quorum pods landing on same node\n2063897 - Community tasks not shown in pipeline builder page\n2063905 - PrometheusOperatorWatchErrors alert may fire shortly in case of transient errors from the API server\n2063938 - using the hard coded rest-mapper in library-go\n2063955 - cannot download operator catalogs due to missing images\n2063957 - User Management - Users : While Impersonating user, UI is not switching into user\u0027s set language\n2064024 - SNO OCP upgrade with DU workload stuck at waiting for kube-apiserver static pod\n2064170 - [Azure] Missing punctuation in the installconfig.controlPlane.platform.azure.osDisk explain\n2064239 - Virtualization Overview page turns into blank page\n2064256 - The Knative traffic distribution doesn\u0027t update percentage in sidebar\n2064553 - UI should prefer to use the virtio-win configmap than v2v-vmware configmap for windows creation\n2064596 - Fix the hubUrl docs 
link in pipeline quicksearch modal\n2064607 - Pipeline builder makes too many (100+) API calls upfront\n2064613 - [OCPonRHV]- after few days that cluster is alive we got error in storage operator\n2064693 - [IPI][OSP] Openshift-install fails to find the shiftstack cloud defined in clouds.yaml in the current directory\n2064702 - CVE-2022-27191 golang: crash in a golang.org/x/crypto/ssh server\n2064705 - the alertmanagerconfig validation catches the wrong value for invalid field\n2064744 - Errors trying to use the Debug Container feature\n2064984 - Update error message for label limits\n2065076 - Access monitoring Routes based on monitoring-shared-config creates wrong URL\n2065160 - Possible leak of load balancer targets on AWS Machine API Provider\n2065224 - Configuration for cloudFront in image-registry operator configuration is ignored \u0026 duration is corrupted\n2065290 - CVE-2021-23648 sanitize-url: XSS\n2065338 - VolumeSnapshot creation date sorting is broken\n2065507 - `oc adm upgrade` should return ReleaseAccepted condition to show upgrade status. 
\n2065510 - [AWS] failed to create cluster on ap-southeast-3\n2065513 - Dev Perspective -\u003e Project Dashboard shows Resource Quotas which are a bit misleading, and too many decimal places\n2065547 - (release-4.11) Gather kube-controller-manager pod logs with garbage collector errors\n2065552 - [AWS] Failed to install cluster on AWS ap-southeast-3 region due to image-registry panic error\n2065577 - user with user-workload-monitoring-config-edit role can not create user-workload-monitoring-config configmap\n2065597 - Cinder CSI is not configurable\n2065682 - Remote write relabel config adds label __tmp_openshift_cluster_id__ to all metrics\n2065689 - Internal Image registry with GCS backend does not redirect client\n2065749 - Kubelet slowly leaking memory and pods eventually unable to start\n2065785 - ip-reconciler job does not complete, halts node drain\n2065804 - Console backend check for Web Terminal Operator incorrectly returns HTTP 204\n2065806 - stop considering Mint mode as supported on Azure\n2065840 - the cronjob object is created with a wrong api version batch/v1beta1 when created via the openshift console\n2065893 - [4.11] Bootimage bump tracker\n2066009 - CVE-2021-44906 minimist: prototype pollution\n2066232 - e2e-aws-workers-rhel8 is failing on ansible check\n2066418 - [4.11] Update channels information link is taking to a 404 error page\n2066444 - The \"ingress\" clusteroperator\u0027s relatedObjects field has kind names instead of resource names\n2066457 - Prometheus CI failure: 503 Service Unavailable\n2066463 - [IBMCloud] failed to list DNS zones: Exactly one of ApiKey or RefreshToken must be specified\n2066605 - coredns template block matches cluster API too loosely\n2066615 - Downstream OSDK still use upstream image for Hybrid type operator\n2066619 - The GitCommit of the `oc-mirror version` is not correct\n2066665 - [ibm-vpc-block] Unable to change default storage class\n2066700 - [node-tuning-operator] - Minimize wildcard/privilege Usage in 
Cluster and Local Roles\n2066754 - Cypress reports for core tests are not captured\n2066782 - Attached disk keeps in loading status when add disk to a power off VM by non-privileged user\n2066865 - Flaky test: In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies\n2066886 - openshift-apiserver pods never going NotReady\n2066887 - Dependabot alert: Path traversal in github.com/valyala/fasthttp\n2066889 - Dependabot alert: Path traversal in github.com/valyala/fasthttp\n2066923 - No rule to make target \u0027docker-push\u0027 when building the SRO bundle\n2066945 - SRO appends \"arm64\" instead of \"aarch64\" to the kernel name and it doesn\u0027t match the DTK\n2067004 - CMO contains grafana image though grafana is removed\n2067005 - Prometheus rule contains grafana though grafana is removed\n2067062 - should update prometheus-operator resources version\n2067064 - RoleBinding in Developer Console is dropping all subjects when editing\n2067155 - Incorrect operator display name shown in pipelines quickstart in devconsole\n2067180 - Missing i18n translations\n2067298 - Console 4.10 operand form refresh\n2067312 - PPT event source is lost when received by the consumer\n2067384 - OCP 4.10 should be firing APIRemovedInNextEUSReleaseInUse for APIs removed in 1.25\n2067456 - OCP 4.11 should be firing APIRemovedInNextEUSReleaseInUse and APIRemovedInNextReleaseInUse for APIs removed in 1.25\n2067995 - Internal registries with a big number of images delay pod creation due to recursive SELinux file context relabeling\n2068115 - resource tab extension fails to show up\n2068148 - [4.11] /etc/redhat-release symlink is broken\n2068180 - OCP UPI on AWS with STS enabled is breaking the Ingress operator\n2068181 - Event source powered with kamelet type source doesn\u0027t show associated deployment in resources tab\n2068490 - OLM descriptors integration test failing\n2068538 - 
Crashloop back-off popover visual spacing defects\n2068601 - Potential etcd inconsistent revision and data occurs\n2068613 - ClusterRoleUpdated/ClusterRoleBindingUpdated Spamming Event Logs\n2068908 - Manual blog link change needed\n2069068 - reconciling Prometheus Operator Deployment failed while upgrading from 4.7.46 to 4.8.35\n2069075 - [Alibaba 4.11.0-0.nightly] cluster storage component in Progressing state\n2069181 - Disabling community tasks is not working\n2069198 - Flaky CI test in e2e/pipeline-ci\n2069307 - oc mirror hangs when processing the Red Hat 4.10 catalog\n2069312 - extend rest mappings with \u0027job\u0027 definition\n2069457 - Ingress operator has superfluous finalizer deletion logic for LoadBalancer-type services\n2069577 - ConsolePlugin example proxy authorize is wrong\n2069612 - Special Resource Operator (SRO) - Crash when nodeSelector does not match any nodes\n2069632 - Not able to download previous container logs from console\n2069643 - ConfigMaps leftovers while uninstalling SpecialResource with configmap\n2069654 - Creating VMs with YAML on Openshift Virtualization UI is missing labels `flavor`, `os` and `workload`\n2069685 - UI crashes on load if a pinned resource model does not exist\n2069705 - prometheus target \"serviceMonitor/openshift-metallb-system/monitor-metallb-controller/0\" has a failure with \"server returned HTTP status 502 Bad Gateway\"\n2069740 - On-prem loadbalancer ports conflict with kube node port range\n2069760 - In developer perspective divider does not show up in navigation\n2069904 - Sync upstream 1.18.1 downstream\n2069914 - Application Launcher groupings are not case-sensitive\n2069997 - [4.11] should add user containers in /etc/subuid and /etc/subgid to support run pods in user namespaces\n2070000 - Add warning alerts for installing standalone k8s-nmstate\n2070020 - InContext doesn\u0027t work for Event Sources\n2070047 - Kuryr: Prometheus when installed on the cluster shouldn\u0027t report any alerts in firing 
state apart from Watchdog and AlertmanagerReceiversNotConfigured\n2070160 - Copy-to-clipboard and \u003cpre\u003e elements cause display issues for ACM dynamic plugins\n2070172 - SRO uses the chart\u0027s name as Helm release, not the SpecialResource\u0027s\n2070181 - [MAPO] serverGroupName ignored\n2070457 - Image vulnerability Popover overflows from the visible area\n2070674 - [GCP] Routes get timed out and nonresponsive after creating 2K service routes\n2070703 - some ipv6 network policy tests consistently failing\n2070720 - [UI] Filter reset doesn\u0027t work on Pods/Secrets/etc pages and complete list disappears\n2070731 - details switch label is not clickable on add page\n2070791 - [GCP]Image registry are crash on cluster with GCP workload identity enabled\n2070792 - service \"openshift-marketplace/marketplace-operator-metrics\" is not annotated with capability\n2070805 - ClusterVersion: could not download the update\n2070854 - cv.status.capabilities.enabledCapabilities doesn\u0027t show the day-2 enabled caps when there are errors on resources update\n2070887 - Cv condition ImplicitlyEnabledCapabilities doesn\u0027t complain about the disabled capabilities which is previously enabled\n2070888 - Cannot bind driver vfio-pci when apply sriovnodenetworkpolicy with type vfio-pci\n2070929 - OVN-Kubernetes: EgressIP breaks access from a pod with EgressIP to other host networked pods on different nodes\n2071019 - rebase vsphere csi driver 2.5\n2071021 - vsphere driver has snapshot support missing\n2071033 - conditionally relabel volumes given annotation not working - SELinux context match is wrong\n2071139 - Ingress pods scheduled on the same node\n2071364 - All image building tests are broken with \" error: build error: attempting to convert BUILD_LOGLEVEL env var value \"\" to integer: strconv.Atoi: parsing \"\": invalid syntax\n2071578 - Monitoring navigation should not be shown if monitoring is not available (CRC)\n2071599 - RoleBindings are not getting updated for 
ClusterRole in OpenShift Web Console\n2071614 - Updating EgressNetworkPolicy rejecting with error UnsupportedMediaType\n2071617 - remove Kubevirt extensions in favour of dynamic plugin\n2071650 - ovn-k ovn_db_cluster metrics are not exposed for SNO\n2071691 - OCP Console global PatternFly overrides adds padding to breadcrumbs\n2071700 - v1 events show \"Generated from\" message without the source/reporting component\n2071715 - Shows 404 on Environment nav in Developer console\n2071719 - OCP Console global PatternFly overrides link button whitespace\n2071747 - Link to documentation from the overview page goes to a missing link\n2071761 - Translation Keys Are Not Namespaced\n2071799 - Multus CNI should exit cleanly on CNI DEL when the API server is unavailable\n2071859 - ovn-kube pods spec.dnsPolicy should be Default\n2071914 - cloud-network-config-controller 4.10.5: Error building cloud provider client, err: %vfailed to initialize Azure environment: autorest/azure: There is no cloud environment matching the name \"\"\n2071998 - Cluster-version operator should share details of signature verification when it fails in \u0027Force: true\u0027 updates\n2072106 - cluster-ingress-operator tests do not build on go 1.18\n2072134 - Routes are not accessible within cluster from hostnet pods\n2072139 - vsphere driver has permissions to create/update PV objects\n2072154 - Secondary Scheduler operator panics\n2072171 - Test \"[sig-network][Feature:EgressFirewall] EgressFirewall should have no impact outside its namespace [Suite:openshift/conformance/parallel]\" fails\n2072195 - machine api doesn\u0027t issue client cert when AWS DNS suffix missing\n2072215 - Whereabouts ip-reconciler should be opt-in and not required\n2072389 - CVO exits upgrade immediately rather than waiting for etcd backup\n2072439 - openshift-cloud-network-config-controller reports wrong range of IP addresses for Azure worker nodes\n2072455 - make bundle overwrites supported-nic-ids_v1_configmap.yaml\n2072570 
- The namespace titles for operator-install-single-namespace test keep changing
2072710 - Perfscale - pods time out waiting for OVS port binding (ovn-installed)
2072766 - Cluster Network Operator stuck in CrashLoopBackOff when scheduled to same master
2072780 - OVN kube-master does not clear NetworkUnavailableCondition on GCP BYOH Windows node
2072793 - Drop "Used Filesystem" from "Virtualization -> Overview"
2072805 - Observe > Dashboards: $__range variables cause PromQL query errors
2072807 - Observe > Dashboards: Missing `panel.styles` attribute for table panels causes JS error
2072842 - (release-4.11) Gather namespace names with overlapping UID ranges
2072883 - sometimes monitoring dashboards charts can not be loaded successfully
2072891 - Update gcp-pd-csi-driver to 1.5.1
2072911 - panic observed in kubedescheduler operator
2072924 - periodic-ci-openshift-release-master-ci-4.11-e2e-azure-techpreview-serial
2072957 - ContainerCreateError loop leads to several thousand empty logfiles in the file system
2072998 - update aws-efs-csi-driver to the latest version
2072999 - Navigate from logs of selected Tekton task instead of last one
2073021 - [vsphere] Failed to update OS on master nodes
2073112 - Prometheus (uwm) externalLabels not showing always in alerts
2073113 - Warning is logged to the console: W0407 Defaulting of registry auth file to "${HOME}/.docker/config.json" is deprecated
2073176 - removing data in form does not remove data from yaml editor
2073197 - Error in Spoke/SNO agent: Source image rejected: A signature was required, but no signature exists
2073329 - Pipelines-plugin- Having different title for Pipeline Runs tab, on Pipeline Details page it's "PipelineRuns" and on Repository Details page it's "Pipeline Runs"
2073373 - Update azure-disk-csi-driver to 1.16.0
2073378 - failed egressIP assignment - cloud-network-config-controller does not delete failed cloudprivateipconfig
2073398 - machine-api-provider-openstack does not clean up OSP ports after failed server provisioning
2073436 - Update azure-file-csi-driver to v1.14.0
2073437 - Topology performance: Firehose/useK8sWatchResources cache can return unexpected data format if isList differs on multiple calls
2073452 - [sig-network] pods should successfully create sandboxes by other - failed (add)
2073473 - [OVN SCALE][ovn-northd] Unnecessary SB record no-op changes added to SB transaction
2073522 - Update ibm-vpc-block-csi-driver to v4.2.0
2073525 - Update vpc-node-label-updater to v4.1.2
2073901 - Installation failed due to etcd operator Err:DefragControllerDegraded: failed to dial endpoint https://10.0.0.7:2379 with maintenance client: context canceled
2073937 - Invalid retention time and invalid retention size should be validated at one place and have error log in one place for UMW
2073938 - APIRemovedInNextEUSReleaseInUse alert for runtimeclasses
2073945 - APIRemovedInNextEUSReleaseInUse alert for podsecuritypolicies
2073972 - Invalid retention time and invalid retention size should be validated at one place and have error log in one place for platform monitoring
2074009 - [OVN] ovn-northd doesn't clean Chassis_Private record after scale down to 0 a machineSet
2074031 - Admins should be able to tune garbage collector aggressiveness (GOGC) for kube-apiserver if necessary
2074062 - Node Tuning Operator(NTO) - Cloud provider profile rollback doesn't work well
2074084 - CMO metrics not visible in the OCP webconsole UI
2074100 - CRD filtering according to name broken
2074210 - asia-south2, australia-southeast2, and southamerica-west1 missing from GCP regions
2074237 - oc new-app --image-stream flag behavior is unclear
2074243 - DefaultPlacement API allow empty enum value and remove default
2074447 - cluster-dashboard: CPU Utilisation iowait and steal
2074465 - PipelineRun fails in import from Git flow if "main" branch is default
2074471 - Cannot delete namespace with a LB type svc and Kuryr when ExternalCloudProvider is enabled
2074475 - [e2e][automation] kubevirt plugin cypress tests fail
2074483 - coreos-installer doesn't work on Dell machines
2074544 - e2e-metal-ipi-ovn-ipv6 failing due to recent CEO changes
2074585 - MCG standalone deployment page goes blank when the KMS option is enabled
2074606 - occm does not have permissions to annotate SVC objects
2074612 - Operator fails to install due to service name lookup failure
2074613 - nodeip-configuration container incorrectly attempts to relabel /etc/systemd/system
2074635 - Unable to start Web Terminal after deleting existing instance
2074659 - AWS installconfig ValidateForProvisioning always provides blank values to validate zone records
2074706 - Custom EC2 endpoint is not considered by AWS EBS CSI driver
2074710 - Transition to go-ovirt-client
2074756 - Namespace column provide wrong data in ClusterRole Details -> Rolebindings tab
2074767 - Metrics page show incorrect values due to metrics level config
2074807 - NodeFilesystemSpaceFillingUp alert fires even before kubelet GC kicks in
2074902 - `oc debug node/nodename ? chroot /host somecommand` should exit with non-zero when the sub-command failed
2075015 - etcd-guard connection refused event repeating pathologically (payload blocking)
2075024 - Metal upgrades permafailing on metal3 containers crash looping
2075050 - oc-mirror fails to calculate between two channels with different prefixes for the same version of OCP
2075091 - Symptom Detection.Undiagnosed panic detected in pod
2075117 - Developer catalog: Order dropdown (A-Z, Z-A) is miss-aligned (in a separate row)
2075149 - Trigger Translations When Extensions Are Updated
2075189 - Imports from dynamic-plugin-sdk lead to failed module resolution errors
2075459 - Set up cluster on aws with rootvolumn io2 failed due to no iops despite it being configured
2075475 - OVN-Kubernetes: egress router pod (redirect mode), access from pod on different worker-node (redirect) doesn't work
2075478 - Bump documentationBaseURL to 4.11
2075491 - nmstate operator cannot be upgraded on SNO
2075575 - Local Dev Env - Prometheus 404 Call errors spam the console
2075584 - improve clarity of build failure messages when using csi shared resources but tech preview is not enabled
2075592 - Regression - Top of the web terminal drawer is missing a stroke/dropshadow
2075621 - Cluster upgrade.[sig-mco] Machine config pools complete upgrade
2075647 - 'oc adm upgrade ...' POSTs ClusterVersion, clobbering any unrecognized spec properties
2075671 - Cluster Ingress Operator K8S API cache contains duplicate objects
2075778 - Fix failing TestGetRegistrySamples test
2075873 - Bump recommended FCOS to 35.20220327.3.0
2076193 - oc patch command for the liveness probe and readiness probe parameters of an OpenShift router deployment doesn't take effect
2076270 - [OCPonRHV] MachineSet scale down operation fails to delete the worker VMs
2076277 - [RFE] [OCPonRHV] Add storage domain ID value to Compute/ControlPlain section in the machine object
2076290 - PTP operator readme missing documentation on BC setup via PTP config
2076297 - Router process ignores shutdown signal while starting up
2076323 - OLM blocks all operator installs if an openshift-marketplace catalogsource is unavailable
2076355 - The KubeletConfigController wrongly process multiple confs for a pool after having kubeletconfig in bootstrap
2076393 - [VSphere] survey fails to list datacenters
2076521 - Nodes in the same zone are not updated in the right order
2076527 - Pipeline Builder: Make unnecessary tekton hub API calls when the user types 'too fast'
2076544 - Whitespace (padding) is missing after an PatternFly update, already in 4.10
2076553 - Project access view replace group ref with user ref when updating their Role
2076614 - Missing Events component from the SDK API
2076637 - Configure metrics for vsphere driver to be reported
2076646 - openshift-install destroy unable to delete PVC disks in GCP if cluster identifier is longer than 22 characters
2076793 - CVO exits upgrade immediately rather than waiting for etcd backup
2076831 - [ocp4.11]Mem/cpu high utilization by apiserver/etcd for cluster stayed 10 hours
2076877 - network operator tracker to switch to use flowcontrol.apiserver.k8s.io/v1beta2 instead v1beta1 to be deprecated in k8s 1.26
2076880 - OKD: add cluster domain to the uploaded vm configs so that 30-local-dns-prepender can use it
2076975 - Metric unset during static route conversion in configure-ovs.sh
2076984 - TestConfigurableRouteNoConsumingUserNoRBAC fails in CI
2077050 - OCP should default to pd-ssd disk type on GCP
2077150 - Breadcrumbs on a few screens don't have correct top margin spacing
2077160 - Update owners for openshift/cluster-etcd-operator
2077357 - [release-4.11] 200ms packet delay with OVN controller turn on
2077373 - Accessibility warning on developer perspective
2077386 - Import page shows untranslated values for the route advanced routing>security options (devconsole~Edge)
2077457 - failure in test case "[sig-network][Feature:Router] The HAProxy router should serve the correct routes when running with the haproxy config manager"
2077497 - Rebase etcd to 3.5.3 or later
2077597 - machine-api-controller is not taking the proxy configuration when it needs to reach the RHV API
2077599 - OCP should alert users if they are on vsphere version <7.0.2
2077662 - AWS Platform Provisioning Check incorrectly identifies record as part of domain of cluster
2077797 - LSO pods don't have any resource requests
2077851 - "make vendor" target is not working
2077943 - If there is a service with multiple ports, and the route uses 8080, when editing the 8080 port isn't replaced, but a random port gets replaced and 8080 still stays
2077994 - Publish RHEL CoreOS AMIs in AWS ap-southeast-3 region
2078013 - drop multipathd.socket workaround
2078375 - When using the wizard with template using data source the resulting vm use pvc source
2078396 - [OVN AWS] EgressIP was not balanced to another egress node after original node was removed egress label
2078431 - [OCPonRHV] - ERROR failed to instantiate provider "openshift/local/ovirt" to obtain schema: ERROR fork/exec
2078526 - Multicast breaks after master node reboot/sync
2078573 - SDN CNI -Fail to create nncp when vxlan is up
2078634 - CRI-O not killing Calico CNI stalled (zombie) processes
2078698 - search box may not completely remove content
2078769 - Different not translated filter group names (incl. Secret, Pipeline, PIpelineRun)
2078778 - [4.11] oc get ValidatingWebhookConfiguration,MutatingWebhookConfiguration fails and caused 'apiserver panic'd...http2: panic serving xxx.xx.xxx.21:49748: cannot deep copy int' when AllRequestBodies audit-profile is used
2078781 - PreflightValidation does not handle multiarch images
2078866 - [BM][IPI] Installation with bonds fail - DaemonSet "openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress
2078875 - OpenShift Installer fail to remove Neutron ports
2078895 - [OCPonRHV]-"cow" unsupported value in format field in install-config.yaml
2078910 - CNO spitting out ".spec.groups[0].rules[4].runbook_url: field not declared in schema"
2078945 - Ensure only one apiserver-watcher process is active on a node
2078954 - network-metrics-daemon makes costly global pod list calls scaling per node
2078969 - Avoid update races between old and new NTO operands during cluster upgrades
2079012 - egressIP not migrated to correct workers after deleting machineset it was assigned
2079062 - Test for console demo plugin toast notification needs to be increased for ci testing
2079197 - [RFE] alert when more than one default storage class is detected
2079216 - Partial cluster update reference doc link returns 404
2079292 - containers prometheus-operator/kube-rbac-proxy violate PodSecurity
2079315 - (release-4.11) Gather ODF config data with Insights
2079422 - Deprecated 1.25 API call
2079439 - OVN Pods Assigned Same IP Simultaneously
2079468 - Enhance the waitForIngressControllerCondition for better CI results
2079500 - okd-baremetal-install uses fcos for bootstrap but rhcos for cluster
2079610 - Opeatorhub status shows errors
2079663 - change default image features in RBD storageclass
2079673 - Add flags to disable migrated code
2079685 - Storageclass creation page with "Enable encryption" is not displaying saved KMS connection details when vaulttenantsa details are available in csi-kms-details config
2079724 - cluster-etcd-operator - disable defrag-controller as there is unpredictable impact on large OpenShift Container Platform 4 - Cluster
2079788 - Operator restarts while applying the acm-ice example
2079789 - cluster drops ImplicitlyEnabledCapabilities during upgrade
2079803 - Upgrade-triggered etcd backup will be skip during serial upgrade
2079805 - Secondary scheduler operator should comply to restricted pod security level
2079818 - Developer catalog installation overlay (modal?) shows a duplicated padding
2079837 - [RFE] Hub/Spoke example with daemonset
2079844 - EFS cluster csi driver status stuck in AWSEFSDriverCredentialsRequestControllerProgressing with sts installation
2079845 - The Event Sinks catalog page now has a blank space on the left
2079869 - Builds for multiple kernel versions should be ran in parallel when possible
2079913 - [4.10] APIRemovedInNextEUSReleaseInUse alert for OVN endpointslices
2079961 - The search results accordion has no spacing between it and the side navigation bar
2079965 - [rebase v1.24] [sig-node] PodOSRejection [NodeConformance] Kubelet should reject pod when the node OS doesn't match pod's OS [Suite:openshift/conformance/parallel] [Suite:k8s]
2080054 - TAGS arg for installer-artifacts images is not propagated to build images
2080153 - aws-load-balancer-operator-controller-manager pod stuck in ContainerCreating status
2080197 - etcd leader changes produce test churn during early stage of test
2080255 - EgressIP broken on AWS with OpenShiftSDN / latest nightly build
2080267 - [Fresh Installation] Openshift-machine-config-operator namespace is flooded with events related to clusterrole, clusterrolebinding
2080279 - CVE-2022-29810 go-getter: writes SSH credentials into logfile, exposing sensitive credentials to local uses
2080379 - Group all e2e tests as parallel or serial
2080387 - Visual connector not appear between the node if a node get created using "move connector" to a different application
2080416 - oc bash-completion problem
2080429 - CVO must ensure non-upgrade related changes are saved when desired payload fails to load
2080446 - Sync ironic images with latest bug fixes packages
2080679 - [rebase v1.24] [sig-cli] test failure
2080681 - [rebase v1.24] [sig-cluster-lifecycle] CSRs from machines that are not recognized by the cloud provider are not approved [Suite:openshift/conformance/parallel]
2080687 - [rebase v1.24] [sig-network][Feature:Router] tests are failing
2080873 - Topology graph crashes after update to 4.11 when Layout 2 (ColaForce) was selected previously
2080964 - Cluster operator special-resource-operator is always in Failing state with reason: "Reconciling simple-kmod"
2080976 - Avoid hooks config maps when hooks are empty
2081012 - [rebase v1.24] [sig-devex][Feature:OpenShiftControllerManager] TestAutomaticCreationOfPullSecrets [Suite:openshift/conformance/parallel]
2081018 - [rebase v1.24] [sig-imageregistry][Feature:Image] oc tag should work when only imagestreams api is available
2081021 - [rebase v1.24] [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources
2081062 - Unrevert RHCOS back to 8.6
2081067 - admin dev-console /settings/cluster should point out history may be excerpted
2081069 - [sig-network] pods should successfully create sandboxes by adding pod to network
2081081 - PreflightValidation "odd number of arguments passed as key-value pairs for logging" error
2081084 - [rebase v1.24] [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed
2081087 - [rebase v1.24] [sig-auth] ServiceAccounts should allow opting out of API token automount
2081119 - `oc explain` output of default overlaySize is outdated
2081172 - MetallLB: YAML view in webconsole does not show all the available key value pairs of all the objects
2081201 - cloud-init User check for Windows VM refuses to accept capitalized usernames
2081447 - Ingress operator performs spurious updates in response to API's defaulting of router deployment's router container's ports' protocol field
2081562 - lifecycle.posStart hook does not have network connectivity
2081685 - Typo in NNCE Conditions
2081743 - [e2e] tests failing
2081788 - MetalLB: the crds are not validated until metallb is deployed
2081821 - SpecialResourceModule CRD is not installed after deploying SRO operator using brew bundle image via OLM
2081895 - Use the managed resource (and not the manifest) for resource health checks
2081997 - disconnected insights operator remains degraded after editing pull secret
2082075 - Removing huge amount of ports takes a lot of time
2082235 - CNO exposes a generic apiserver that apparently does nothing
2082283 - Transition to new oVirt Terraform provider
2082360 - OCP 4.10.4, CNI: SDN; Whereabouts IPAM: Duplicate IP address with bond-cni
2082380 - [4.10.z] customize wizard is crashed
2082403 - [LSO] No new build local-storage-operator-metadata-container created
2082428 - oc patch healthCheckInterval with invalid "5 s" to the ingress-controller successfully
2082441 - [UPI] aws-load-balancer-operator-controller-manager failed to get VPC ID in UPI on AWS
2082492 - [IPI IBM]Can't create image-registry-private-configuration secret with error "specified resource key credentials does not contain HMAC keys"
2082535 - [OCPonRHV]-workers are cloned when "clone: false" is specified in install-config.yaml
2082538 - apirequests limits of Cluster CAPI Operator are too low for GCP platform
2082566 - OCP dashboard fails to load when the query to Prometheus takes more than 30s to return
2082604 - [IBMCloud][x86_64] IBM VPC does not properly support RHCOS Custom Image tagging
2082667 - No new machines provisioned while machineset controller drained old nodes for change to machineset
2082687 - [IBM Cloud][x86_64][CCCMO] IBM x86_64 CCM using unsupported --port argument
2082763 - Cluster install stuck on the applying for operatorhub "cluster"
2083149 - "Update blocked" label incorrectly displays on new minor versions in the "Other available paths" modal
2083153 - Unable to use application credentials for Manila PVC creation on OpenStack
2083154 - Dynamic plugin sdk tsdoc generation does not render docs for parameters
2083219 - DPU network operator doesn't deal with c1... inteface names
2083237 - [vsphere-ipi] Machineset scale up process delay
2083299 - SRO does not fetch mirrored DTK images in disconnected clusters
2083445 - [FJ OCP4.11 Bug]: RAID setting during IPI cluster deployment fails if iRMC port number is specified
2083451 - Update external serivces URLs to console.redhat.com
2083459 - Make numvfs > totalvfs error message more verbose
2083466 - Failed to create clusters on AWS C2S/SC2S due to image-registry MissingEndpoint error
2083514 - Operator ignores managementState Removed
2083641 - OpenShift Console Knative Eventing ContainerSource generates wrong api version when pointed to k8s Service
2083756 - Linkify not upgradeable message on ClusterSettings page
2083770 - Release image signature manifest filename extension is yaml
2083919 - openshift4/ose-operator-registry:4.10.0 having security vulnerabilities
2083942 - Learner promotion can temporarily fail with rpc not supported for learner errors
2083964 - Sink resources dropdown is not persisted in form yaml switcher in event source creation form
2083999 - "--prune-over-size-limit" is not working as expected
2084079 - prometheus route is not updated to "path: /api" after upgrade from 4.10 to 4.11
2084081 - nmstate-operator installed cluster on POWER shows issues while adding new dhcp interface
2084124 - The Update cluster modal includes a broken link
2084215 - Resource configmap "openshift-machine-api/kube-rbac-proxy" is defined by 2 manifests
2084249 - panic in ovn pod from an e2e-aws-single-node-serial nightly run
2084280 - GCP API Checks Fail if non-required APIs are not enabled
2084288 - "alert/Watchdog must have no gaps or changes" failing after bump
2084292 - Access to dashboard resources is needed in dynamic plugin SDK
2084331 - Resource with multiple capabilities included unless all capabilities are disabled
2084433 - Podsecurity violation error getting logged for ingresscontroller during deployment
2084438 - Change Ping source spec.jsonData (deprecated) field to spec.data
2084441 - [IPI-Azure]fail to check the vm capabilities in install cluster
2084459 - Topology list view crashes when switching from chart view after moving sink from knative service to uri
2084463 - 5 control plane replica tests fail on ephemeral volumes
2084539 - update azure arm templates to support customer provided vnet
2084545 - [rebase v1.24] cluster-api-operator causes all techpreview tests to fail
2084580 - [4.10] No cluster name sanity validation - cluster name with a dot (".") character
2084615 - Add to navigation option on search page is not properly aligned
2084635 - PipelineRun creation from the GUI for a Pipeline with 2 workspaces hardcode the PVC storageclass
2084732 - A special resource that was created in OCP 4.9 can't be deleted after an upgrade to 4.10
2085187 - installer-artifacts fails to build with go 1.18
2085326 - kube-state-metrics is tripping APIRemovedInNextEUSReleaseInUse
2085336 - [IPI-Azure] Fail to create the worker node which HyperVGenerations is V2 or V1 and vmNetworkingType is Accelerated
2085380 - [IPI-Azure] Incorrect error prompt validate VM image and instance HyperV gen match when install cluster
2085407 - There is no Edit link/icon for labels on Node details page
2085721 - customization controller image name is wrong
2086056 - Missing doc for OVS HW offload
2086086 - Update Cluster Sample Operator dependencies and libraries for OCP 4.11
2086092 - update kube to v.24
2086143 - CNO uses too much memory
2086198 - Cluster CAPI Operator creates unnecessary defaulting webhooks
2086301 - kubernetes nmstate pods are not running after creating instance
2086408 - Podsecurity violation error getting logged for externalDNS operand pods during deployment
2086417 - Pipeline created from add flow has GIT Revision as required field
2086437 - EgressQoS CRD not available
2086450 - aws-load-balancer-controller-cluster pod logged Podsecurity violation error during deployment
2086459 - oc adm inspect fails when one of resources not exist
2086461 - CNO probes MTU unnecessarily in Hypershift, making cluster startup take too long
2086465 - External identity providers should log login attempts in the audit trail
2086469 - No data about title 'API Request Duration by Verb - 99th Percentile' display on the dashboard 'API Performance'
2086483 - baremetal-runtimecfg k8s dependencies should be on a par with 1.24 rebase
2086505 - Update oauth-server images to be consistent with ART
2086519 - workloads must comply to restricted security policy
2086521 - Icons of Knative actions are not clearly visible on the context menu in the dark mode
2086542 - Cannot create service binding through drag and drop
2086544 - ovn-k master daemonset on hypershift shouldn't log token
2086546 - Service binding connector is not visible in the dark mode
2086718 - PowerVS destroy code does not work
2086728 - [hypershift] Move drain to controller
2086731 - Vertical pod autoscaler operator needs a 4.11 bump
2086734 - Update csi driver images to be consistent with ART
2086737 - cloud-provider-openstack rebase to kubernetes v1.24
2086754 - Cluster resource override operator needs a 4.11 bump
2086759 - [IPI] OCP-4.11 baremetal - boot partition is not mounted on temporary directory
2086791 - Azure: Validate UltraSSD instances in multi-zone regions
2086851 - pods with multiple external gateways may only be have ECMP routes for one gateway
2086936 - vsphere ipi should use cores by default instead of sockets
2086958 - flaky e2e in kube-controller-manager-operator TestPodDisruptionBudgetAtLimitAlert
2086959 - flaky e2e in kube-controller-manager-operator TestLogLevel
2086962 - oc-mirror publishes metadata with --dry-run when publishing to mirror
2086964 - oc-mirror fails on differential run when mirroring a package with multiple channels specified
2086972 - oc-mirror does not error invalid metadata is passed to the describe command
2086974 - oc-mirror does not work with headsonly for operator 4.8
2087024 - The oc-mirror result mapping.txt is not correct, can't be used by `oc image mirror` command
2087026 - DTK's imagestream is missing from OCP 4.11 payload
2087037 - Cluster Autoscaler should use K8s 1.24 dependencies
2087039 - Machine API components should use K8s 1.24 dependencies
2087042 - Cloud providers components should use K8s 1.24 dependencies
2087084 - remove unintentional nic support
2087103 - "Updating to release image" from 'oc' should point out that the cluster-version operator hasn't accepted the update
2087114 - Add simple-procfs-kmod in modprobe example in README.md
2087213 - Spoke BMH stuck "inspecting" when deployed via ZTP in 4.11 OCP hub
2087271 - oc-mirror does not check for existing workspace when performing mirror2mirror synchronization
2087556 - Failed to render DPU ovnk manifests
2087579 - `--keep-manifest-list=true` does not work for `oc adm release new`, only pick up the linux/amd64 manifest from the manifest list
2087680 - [Descheduler] Sync with sigs.k8s.io/descheduler
2087684 - KCMO should not be able to apply LowUpdateSlowReaction from Default WorkerLatencyProfile
2087685 - KASO should not be able to apply LowUpdateSlowReaction from Default WorkerLatencyProfile
2087687 - MCO does not generate event when user applies Default -> LowUpdateSlowReaction WorkerLatencyProfile
2087764 - Rewrite the registry backend will hit error
2087771 - [tracker] NetworkManager 1.36.0 loses DHCP lease and doesn't try again
2087772 - Bindable badge causes some layout issues with the side panel of bindable operator backed services
2087942 - CNO references images that are divergent from ART
2087944 - KafkaSink Node visualized incorrectly
2087983 - remove etcd_perf before restore
2087993 - PreflightValidation many "msg":"TODO: preflight checks" in the operator log
2088130 - oc-mirror init does not allow for automated testing
2088161 - Match dockerfile image name with the name used in the release repo
2088248 - Create HANA VM does not use values from customized HANA templates
2088304 - ose-console: enable source containers for open source requirements
2088428 - clusteroperator/baremetal stays in progressing: Applying metal3 resources state on a fresh install
2088431 - AvoidBuggyIPs field of addresspool should be removed
2088483 - oc adm catalog mirror returns 0 even if there are errors
2088489 - Topology list does not allow selecting an application group anymore (again)
2088533 - CRDs for openshift.io should have subresource.status failes on sharedconfigmaps.sharedresource and sharedsecrets.sharedresource
2088535 - MetalLB: Enable debug log level for downstream CI
2088541 - Default CatalogSources in openshift-marketplace namespace keeps throwing pod security admission warnings `would violate PodSecurity "restricted:v1.24"`
2088561 - BMH unable to start inspection: File name too long
2088634 - oc-mirror does not fail when catalog is invalid
2088660 - Nutanix IPI installation inside container failed
2088663 - Better to change the default value of --max-per-registry to 6
2089163 - NMState CRD out of sync with code
2089191 - should remove grafana from cluster-monitoring-config configmap in hypershift cluster
2089224 - openshift-monitoring/cluster-monitoring-config configmap always revert to default setting
2089254 - CAPI operator: Rotate token secret if its older than 30 minutes
2089276 - origin tests for egressIP and azure fail
2089295 - [Nutanix]machine stuck in Deleting phase when delete a machineset whose replicas>=2 and machine is Provisioning phase on Nutanix
2089309 - [OCP 4.11] Ironic inspector image fails to clean disks that are part of a multipath setup if they are passive paths
2089334 - All cloud providers should use service account credentials
2089344 - Failed to deploy simple-kmod
2089350 - Rebase sdn to 1.24
2089387 - LSO not taking mpath. ignoring device
2089392 - 120 node baremetal upgrade from 4.9.29 --> 4.10.13 crashloops on machine-approver
2089396 - oc-mirror does not show pruned image plan
2089405 - New topology package shows gray build icons instead of green/red icons for builds and pipelines
2089419 - do not block 4.10 to 4.11 upgrades if an existing CSI driver is found. Instead, warn about presence of third party CSI driver
2089488 - Special resources are missing the managementState field
2089563 - Update Power VS MAPI to use api's from openshift/api repo
2089574 - UWM prometheus-operator pod can't start up due to no master node in hypershift cluster
2089675 - Could not move Serverless Service without Revision (or while starting?)
2089681 - [Hypershift] EgressIP doesn't work in hypershift guest cluster
2089682 - Installer expects all nutanix subnets to have a cluster reference which is not the case for e.g. overlay networks
2089687 - alert message of MCDDrainError needs to be updated for new drain controller
2089696 - CR reconciliation is stuck in daemonset lifecycle
2089716 - [4.11][reliability]one worker node became NotReady on which ovnkube-node pod's memory increased sharply
2089719 - acm-simple-kmod fails to build
2089720 - [Hypershift] ICSP doesn't work for the guest cluster
2089743 - acm-ice fails to deploy: helm chart does not appear to be a gzipped archive
2089773 - Pipeline status filter and status colors doesn't work correctly with non-english languages
2089775 - keepalived can keep ingress VIP on wrong node under certain circumstances
2089805 - Config duration metrics aren't exposed
2089827 - MetalLB CI - backward compatible tests are failing due to the order of delete
2089909 - PTP e2e testing not working on SNO cluster
2089918 - oc-mirror skip-missing still returns 404 errors when images do not exist
2089930 - Bump OVN to 22.06
2089933 - Pods do not post readiness status on termination
2089968 - Multus CNI daemonset should use hostPath mounts with type: directory
2089973 - bump libs to k8s 1.24 for OCP 4.11
2089996 - Unnecessary yarn install runs in e2e tests
2090017 - Enable source containers to meet open source requirements
2090049 - destroying GCP cluster which has a compute node without infra id in name would fail to delete 2 k8s firewall-rules and VPC network
2090092 - Will hit error if specify the channel not the latest
2090151 - [RHEL scale up] increase the wait time so that the node has enough time to get ready
2090178 - VM SSH command generated by UI points at api VIP
2090182 - [Nutanix]Create a machineset with invalid image, machine stuck in "Provisioning" phase
2090236 - Only reconcile annotations and status for clusters
2090266 - oc adm release extract is failing on mutli arch image
2090268 - [AWS EFS] Operator not getting installed successfully on Hypershift Guest cluster
2090336 - Multus logging should be disabled prior to release
2090343 - Multus debug logging should be enabled temporarily for debugging podsandbox creation failures
2090358 - Initiating drain log message is displayed before the drain actually starts
2090359 - Nutanix mapi-controller: misleading error message when the failure is caused by wrong credentials
2090405 - [tracker] weird port mapping with asymmetric traffic [rhel-8.6.0.z]
2090430 - gofmt code
2090436 - It takes 30min-60min to update the machine count in custom MachineConfigPools (MCPs) when a node is removed from the pool
2090437 - Bump CNO to k8s 1.24
2090465 - golang version mismatch
2090487 - Change default SNO Networking Type and disallow OpenShiftSDN a supported networking Type
2090537 - failure in ovndb migration when db is not ready in HA mode
2090549 - dpu-network-operator shall be able to run on amd64 arch platform
2090621 - Metal3 plugin does not work properly with updated NodeMaintenance CRD
2090627 - Git commit and branch are empty in MetalLB log
2090692 - Bump to latest 1.24 k8s release
2090730 - must-gather should include multus logs
2090731 - nmstate deploys two instances of webhook on a single-node cluster
2090751 - oc image mirror skip-missing flag does not skip images
2090755 - MetalLB: BGPAdvertisement validation allows duplicate entries for ip pool selector, ip address pools, node selector and bgp peers
2090774 - Add Readme to plugin directory
2090794 - MachineConfigPool cannot apply a configuration after fixing the pods that caused a drain alert
2090809 - gm.ClockClass invalid syntax parse error in linux ptp daemon logs
2090816 - OCP 4.8 Baremetal IPI installation failure: "Bootstrap failed to complete: timed out waiting for the condition"
2090819 - oc-mirror does not catch invalid registry input when a namespace is specified
2090827 - Rebase CoreDNS to 1.9.2 and k8s 1.24
2090829 - Bump OpenShift router to k8s 1.24
2090838 - Flaky test: ignore flapping host interface 'tunbr'
2090843 - addLogicalPort() performance/scale optimizations
2090895 - Dynamic plugin nav extension "startsWith" property does not work
2090929 - [etcd] cluster-backup.sh script has a conflict to use the '/etc/kubernetes/static-pod-certs' folder if a custom API certificate is defined
2090993 - [AI Day2] Worker node overview page crashes in Openshift console with TypeError
2091029 - Cancel rollout action only appears when rollout is completed
2091030 - Some BM may fail booting with default bootMode strategy
2091033 - [Descheduler]: provide ability to override included/excluded namespaces
2091087 - ODC Helm backend Owners file needs updates
2091106 - Dependabot alert: Unhandled exception in gopkg.in/yaml.v3
2091142 - Dependabot alert: Unhandled exception in gopkg.in/yaml.v3
2091167 - IPsec runtime enabling not work in hypershift
2091218 - Update Dev Console Helm backend to use helm 3.9.0
2091433 - Update AWS instance types
2091542 - Error Loading/404 not found page shown after clicking "Current namespace only"
2091547 - Internet connection test with proxy permanently fails
2091567 - oVirt CSI driver should use latest go-ovirt-client
2091595 - Alertmanager configuration can't use OpsGenie's entity field when AlertmanagerConfig is enabled
2091599 - PTP Dual Nic | Extend Events 4.11 - Up/Down master interface affects all the other interface in the same NIC accoording the events and metric
2091603 - WebSocket connection restarts when switching tabs in WebTerminal
2091613 - simple-kmod fails to build due to missing KVC
2091634 - OVS 2.15 stops handling traffic once ovs-dpctl(2.17.2) is used against it
2091730 - MCO e2e tests are failing with "No token found in openshift-monitoring secrets"
2091746 - "Oh no! Something went wrong" shown after user creates MCP without 'spec'
2091770 - CVO gets stuck downloading an upgrade, with the version pod complaining about invalid options
2091854 - clusteroperator status filter doesn't match all values in Status column
2091901 - Log stream paused right after updating log lines in Web Console in OCP4.10
2091902 - unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server has received too many requests and has asked us to try again later
2091990 - wrong external-ids for ovn-controller lflow-cache-limit-kb
2092003 - PR 3162 | BZ 2084450 - invalid URL schema for AWS causes tests to perma fail and break the cloud-network-config-controller
2092041 - Bump cluster-dns-operator to k8s 1.24
2092042 - Bump cluster-ingress-operator to k8s 1.24
2092047 - Kube 1.24 rebase for cloud-network-config-controller
2092137 - Search doesn't show all entries when name filter is cleared
2092296 - Change Default MachineCIDR of Power VS Platform from 10.x to 192.168.0.0/16
2092390 - [RDR] [UI] Multiple instances of Object Bucket, Object Bucket Claims and 'Overview' tab is present under Storage section on the Hub cluster when navigated back from the Managed cluster using the Hybrid console dropdown
2092395 - etcdHighNumberOfFailedGRPCRequests alerts with wrong results
2092408 - Wrong icon is used in the virtualization overview permissions card
2092414 - In virtualization overview "running vm per templates" template list can be improved
2092442 - Minimum time between drain retries is not the expected one
2092464 - marketplace catalog defaults to v4.10
2092473 - libovsdb performance backports
2092495 - ovn: use up to 4 northd threads in non-SNO clusters
2092502 - [azure-file-csi-driver] Stop shipping a NFS StorageClass
2092509 - Invalid memory address error if non existing caBundle is configured in DNS-over-TLS using ForwardPlugins
2092572 - acm-simple-kmod chart should create the namespace on the spoke cluster
2092579 - Don't retry pod deletion if objects are not existing
2092650 - [BM IPI with Provisioning Network] Worker nodes are not provisioned: ironic-agent is stuck before writing into disks
2092703 - Incorrect mount propagation information in container status
2092815 - can't delete the unwanted image from registry by oc-mirror
2092851 - [Descheduler]: allow to customize the LowNodeUtilization strategy thresholds
2092867 - make repository name unique in acm-ice/acm-simple-kmod examples
2092880 - etcdHighNumberOfLeaderChanges returns incorrect number of leadership changes
2092887 - oc-mirror list releases command uses filter-options flag instead of filter-by-os
2092889 - Incorrect updating of EgressACLs using direction "from-lport"
2092918 - CVE-2022-30321 go-getter: unsafe download (issue 1 of 3)
2092923 - CVE-2022-30322 go-getter: unsafe download (issue 2 of 3)
2092925 - CVE-2022-30323 go-getter: unsafe download (issue 3 of 3)
2092928 - CVE-2022-26945 go-getter: command injection vulnerability
2092937 - WebScale: OVN-k8s forwarding to external-gw over the secondary interfaces failing
2092966 - [OCP 4.11] [azure] /etc/udev/rules.d/66-azure-storage.rules missing from initramfs
2093044 - Azure machine-api-provider-azure Availability Set Name Length Limit
2093047 - Dynamic Plugins: Generated API markdown duplicates `checkAccess` and `useAccessReview` doc
2093126 - [4.11] Bootimage bump tracker
2093236 - DNS operator stopped reconciling after 4.10 to 4.11 upgrade | 4.11 nightly to 4.11 nightly upgrade
2093288 - Default catalogs fails liveness/readiness probes
2093357 - Upgrading sno spoke with acm-ice, causes the sno to get unreachable
2093368 - Installer orphans FIPs created for LoadBalancer Services on `cluster destroy`
2093396 - Remove node-tainting for too-small MTU
2093445 - ManagementState reconciliation breaks SR
2093454 - Router proxy protocol doesn't work with dual-stack (IPv4 and IPv6) clusters
2093462 - Ingress Operator isn't reconciling the ingress cluster operator object
2093586 - Topology: Ctrl+space opens the quick search modal, but doesn't close it again
2093593 - Import from Devfile shows configuration options that shoudn't be there
2093597 - Import: Advanced option sentence is splited into two parts and headlines has no padding
2093600 - Project access tab should apply new permissions before it delete old ones
2093601 - Project access page doesn't allow the user to update the settings twice (without manually reload the content)
2093783 - Should bump cluster-kube-descheduler-operator to kubernetes version V1.24
2093797 - 'oc registry login' with serviceaccount function need update
2093819 - An etcd member for a new machine was never added to the cluster
2093930 - Gather console helm install totals metric
2093957 - Oc-mirror write dup metadata to registry backend
2093986 - Podsecurity violation error getting logged for pod-identity-webhook
2093992 - Cluster version operator acknowledges upgrade failing on periodic-ci-openshift-release-master-nightly-4.11-e2e-metal-ipi-upgrade-ovn-ipv6
2094023 - Add Git Flow - Template Labels for Deployment show as DeploymentConfig
2094024 - bump oauth-apiserver deps to
include 1.23.1 k8s that fixes etcd blips\n2094039 - egressIP panics with nil pointer dereference\n2094055 - Bump coreos-installer for s390x Secure Execution\n2094071 - No runbook created for SouthboundStale alert\n2094088 - Columns in NBDB may never be updated by OVNK\n2094104 - Demo dynamic plugin image tests should be skipped when testing console-operator\n2094152 - Alerts in the virtualization overview status card aren\u0027t filtered\n2094196 - Add default and validating webhooks for Power VS MAPI\n2094227 - Topology: Create Service Binding should not be the last option (even under delete)\n2094239 - custom pool Nodes with 0 nodes are always populated in progress bar\n2094303 - If og is configured with sa, operator installation will be failed. \n2094335 - [Nutanix] - debug logs are enabled by default in machine-controller\n2094342 - apirequests limits of Cluster CAPI Operator are too low for Azure platform\n2094438 - Make AWS URL parsing more lenient for GetNodeEgressIPConfiguration\n2094525 - Allow automatic upgrades for efs operator\n2094532 - ovn-windows CI jobs are broken\n2094675 - PTP Dual Nic | Extend Events 4.11 - when kill the phc2sys We have notification for the ptp4l physical master moved to free run\n2094694 - [Nutanix] No cluster name sanity validation - cluster name with a dot (\".\") character\n2094704 - Verbose log activated on kube-rbac-proxy in deployment prometheus-k8s\n2094801 - Kuryr controller keep restarting when handling IPs with leading zeros\n2094806 - Machine API oVrit component should use K8s 1.24 dependencies\n2094816 - Kuryr controller restarts when over quota\n2094833 - Repository overview page does not show default PipelineRun template for developer user\n2094857 - CloudShellTerminal loops indefinitely if DevWorkspace CR goes into failed state\n2094864 - Rebase CAPG to latest changes\n2094866 - oc-mirror does not always delete all manifests associated with an image during pruning\n2094896 - Run \u0027openshift-install agent 
create image\u0027 has segfault exception if cluster-manifests directory missing\n2094902 - Fix installer cross-compiling\n2094932 - MGMT-10403 Ingress should enable single-node cluster expansion on upgraded clusters\n2095049 - managed-csi StorageClass does not create PVs\n2095071 - Backend tests fails after devfile registry update\n2095083 - Observe \u003e Dashboards: Graphs may change a lot on automatic refresh\n2095110 - [ovn] northd container termination script must use bash\n2095113 - [ovnkube] bump to openvswitch2.17-2.17.0-22.el8fdp\n2095226 - Added changes to verify cloud connection and dhcpservices quota of a powervs instance\n2095229 - ingress-operator pod in CrashLoopBackOff in 4.11 after upgrade starting in 4.6 due to go panic\n2095231 - Kafka Sink sidebar in topology is empty\n2095247 - Event sink form doesn\u0027t show channel as sink until app is refreshed\n2095248 - [vSphere-CSI-Driver] does not report volume count limits correctly caused pod with multi volumes maybe schedule to not satisfied volume count node\n2095256 - Samples Owner needs to be Updated\n2095264 - ovs-configuration.service fails with Error: Failed to modify connection \u0027ovs-if-br-ex\u0027: failed to update connection: error writing to file \u0027/etc/NetworkManager/systemConnectionsMerged/ovs-if-br-ex.nmconnection\u0027\n2095362 - oVirt CSI driver operator should use latest go-ovirt-client\n2095574 - e2e-agnostic CI job fails\n2095687 - Debug Container shown for build logs and on click ui breaks\n2095703 - machinedeletionhooks doesn\u0027t work in vsphere cluster and BM cluster\n2095716 - New PSA component for Pod Security Standards enforcement is refusing openshift-operators ns\n2095756 - CNO panics with concurrent map read/write\n2095772 - Memory requests for ovnkube-master containers are over-sized\n2095917 - Nutanix set osDisk with diskSizeGB rather than diskSizeMiB\n2095941 - DNS Traffic not kept local to zone or node when Calico SDN utilized\n2096053 - Builder Image icons 
in Git Import flow are hard to see in Dark mode\n2096226 - crio fails to bind to tentative IP, causing service failure since RHOCS was rebased on RHEL 8.6\n2096315 - NodeClockNotSynchronising alert\u0027s severity should be critical\n2096350 - Web console doesn\u0027t display webhook errors for upgrades\n2096352 - Collect whole journal in gather\n2096380 - acm-simple-kmod references deprecated KVC example\n2096392 - Topology node icons are not properly visible in Dark mode\n2096394 - Add page Card items background color does not match with column background color in Dark mode\n2096413 - br-ex not created due to default bond interface having a different mac address than expected\n2096496 - FIPS issue on OCP SNO with RT Kernel via performance profile\n2096605 - [vsphere] no validation checking for diskType\n2096691 - [Alibaba 4.11] Specifying ResourceGroup id in install-config.yaml, New pv are still getting created to default ResourceGroups\n2096855 - `oc adm release new` failed with error when use an existing multi-arch release image as input\n2096905 - Openshift installer should not use the prism client embedded in nutanix terraform provider\n2096908 - Dark theme issue in pipeline builder, Helm rollback form, and Git import\n2097000 - KafkaConnections disappear from Topology after creating KafkaSink in Topology\n2097043 - No clean way to specify operand issues to KEDA OLM operator\n2097047 - MetalLB: matchExpressions used in CR like L2Advertisement, BGPAdvertisement, BGPPeers allow duplicate entries\n2097067 - ClusterVersion history pruner does not always retain initial completed update entry\n2097153 - poor performance on API call to vCenter ListTags with thousands of tags\n2097186 - PSa autolabeling in 4.11 env upgraded from 4.10 does not work due to missing RBAC objects\n2097239 - Change Lower CPU limits for Power VS cloud\n2097246 - Kuryr: verify and unit jobs failing due to upstream OpenStack dropping py36 support\n2097260 - openshift-install create manifests 
failed for Power VS platform\n2097276 - MetalLB CI deploys the operator via manifests and not using the csv\n2097282 - chore: update external-provisioner to the latest upstream release\n2097283 - chore: update external-snapshotter to the latest upstream release\n2097284 - chore: update external-attacher to the latest upstream release\n2097286 - chore: update node-driver-registrar to the latest upstream release\n2097334 - oc plugin help shows \u0027kubectl\u0027\n2097346 - Monitoring must-gather doesn\u0027t seem to be working anymore in 4.11\n2097400 - Shared Resource CSI Driver needs additional permissions for validation webhook\n2097454 - Placeholder bug for OCP 4.11.0 metadata release\n2097503 - chore: rebase against latest external-resizer\n2097555 - IngressControllersNotUpgradeable: load balancer service has been modified; changes must be reverted before upgrading\n2097607 - Add Power VS support to Webhooks tests in actuator e2e test\n2097685 - Ironic-agent can\u0027t restart because of existing container\n2097716 - settings under httpConfig is dropped with AlertmanagerConfig v1beta1\n2097810 - Required Network tools missing for Testing e2e PTP\n2097832 - clean up unused IPv6DualStackNoUpgrade feature gate\n2097940 - openshift-install destroy cluster traps if vpcRegion not specified\n2097954 - 4.11 installation failed at monitoring and network clusteroperators with error \"conmon: option parsing failed: Unknown option --log-global-size-max\" making all jobs failing\n2098172 - oc-mirror does not validate the registry in the storage config\n2098175 - invalid license in python-dataclasses-0.8-2.el8 spec\n2098177 - python-pint-0.10.1-2.el8 has unused Patch0 in spec file\n2098242 - typo in SRO specialresourcemodule\n2098243 - Add error check to Platform create for Power VS\n2098392 - [OCP 4.11] Ironic cannot match \"wwn\" rootDeviceHint for a multipath device\n2098508 - Control-plane-machine-set-operator report panic\n2098610 - No need to check the push permission 
with ?manifests-only option\n2099293 - oVirt cluster API provider should use latest go-ovirt-client\n2099330 - Edit application grouping is shown to user with view only access in a cluster\n2099340 - CAPI e2e tests for AWS are missing\n2099357 - ovn-kubernetes needs explicit RBAC coordination leases for 1.24 bump\n2099358 - Dark mode+Topology update: Unexpected selected+hover border and background colors for app groups\n2099528 - Layout issue: No spacing in delete modals\n2099561 - Prometheus returns HTTP 500 error on /favicon.ico\n2099582 - Format and update Repository overview content\n2099611 - Failures on etcd-operator watch channels\n2099637 - Should print error when use --keep-manifest-list\\xfalse for manifestlist image\n2099654 - Topology performance: Endless rerender loop when showing a Http EventSink (KameletBinding)\n2099668 - KubeControllerManager should degrade when GC stops working\n2099695 - Update CAPG after rebase\n2099751 - specialresourcemodule stacktrace while looping over build status\n2099755 - EgressIP node\u0027s mgmtIP reachability configuration option\n2099763 - Update icons for event sources and sinks in topology, Add page, and context menu\n2099811 - UDP Packet loss in OpenShift using IPv6 [upcall]\n2099821 - exporting a pointer for the loop variable\n2099875 - The speaker won\u0027t start if there\u0027s another component on the host listening on 8080\n2099899 - oc-mirror looks for layers in the wrong repository when searching for release images during publishing\n2099928 - [FJ OCP4.11 Bug]: Add unit tests to image_customization_test file\n2099968 - [Azure-File-CSI] failed to provisioning volume in ARO cluster\n2100001 - Sync upstream v1.22.0 downstream\n2100007 - Run bundle-upgrade failed from the traditional File-Based Catalog installed operator\n2100033 - OCP 4.11 IPI - Some csr remain \"Pending\" post deployment\n2100038 - failure to update special-resource-lifecycle table during update Event\n2100079 - SDN needs explicit RBAC 
coordination leases for 1.24 bump\n2100138 - release info --bugs has no differentiator between Jira and Bugzilla\n2100155 - kube-apiserver-operator should raise an alert when there is a Pod Security admission violation\n2100159 - Dark theme: Build icon for pending status is not inverted in topology sidebar\n2100323 - Sqlit-based catsrc cannot be ready due to \"Error: open ./db-xxxx: permission denied\"\n2100347 - KASO retains old config values when switching from Medium/Default to empty worker latency profile\n2100356 - Remove Condition tab and create option from console as it is deprecated in OSP-1.8\n2100439 - [gce-pd] GCE PD in-tree storage plugin tests not running\n2100496 - [OCPonRHV]-oVirt API returns affinity groups without a description field\n2100507 - Remove redundant log lines from obj_retry.go\n2100536 - Update API to allow EgressIP node reachability check\n2100601 - Update CNO to allow EgressIP node reachability check\n2100643 - [Migration] [GCP]OVN can not rollback to SDN\n2100644 - openshift-ansible FTBFS on RHEL8\n2100669 - Telemetry should not log the full path if it contains a username\n2100749 - [OCP 4.11] multipath support needs multipath modules\n2100825 - Update machine-api-powervs go modules to latest version\n2100841 - tiny openshift-install usability fix for setting KUBECONFIG\n2101460 - An etcd member for a new machine was never added to the cluster\n2101498 - Revert Bug 2082599: add upper bound to number of failed attempts\n2102086 - The base image is still 4.10 for operator-sdk 1.22\n2102302 - Dummy bug for 4.10 backports\n2102362 - Valid regions should be allowed in GCP install config\n2102500 - Kubernetes NMState pods can not evict due to PDB on an SNO cluster\n2102639 - Drain happens before other image-registry pod is ready to service requests, causing disruption\n2102782 - topolvm-controller get into CrashLoopBackOff few minutes after install\n2102834 - [cloud-credential-operator]container has runAsNonRoot and image will run as 
root\n2102947 - [VPA] recommender is logging errors for pods with init containers\n2103053 - [4.11] Backport Prow CI improvements from master\n2103075 - Listing secrets in all namespaces with a specific labelSelector does not work properly\n2103080 - br-ex not created due to default bond interface having a different mac address than expected\n2103177 - disabling ipv6 router advertisements using \"all\" does not disable it on secondary interfaces\n2103728 - Carry HAProxy patch \u0027BUG/MEDIUM: h2: match absolute-path not path-absolute for :path\u0027\n2103749 - MachineConfigPool is not getting updated\n2104282 - heterogeneous arch: oc adm extract encodes arch specific release payload pullspec rather than the manifestlisted pullspec\n2104432 - [dpu-network-operator] Updating images to be consistent with ART\n2104552 - kube-controller-manager operator 4.11.0-rc.0 degraded on disabled monitoring stack\n2104561 - 4.10 to 4.11 update: Degraded node: unexpected on-disk state: mode mismatch for file: \"/etc/crio/crio.conf.d/01-ctrcfg-pidsLimit\"; expected: -rw-r--r--/420/0644; received: ----------/0/0\n2104589 - must-gather namespace should have \"privileged\" 
warn and audit pod security labels besides enforce\n2104701 - In CI 4.10 HAProxy must-gather takes longer than 10 minutes\n2104717 - NetworkPolicies: ovnkube-master pods crashing due to panic: \"invalid memory address or nil pointer dereference\"\n2104727 - Bootstrap node should honor http proxy\n2104906 - Uninstall fails with Observed a panic: runtime.boundsError\n2104951 - Web console doesn\u0027t display webhook errors for upgrades\n2104991 - Completed pods may not be correctly cleaned up\n2105101 - NodeIP is used instead of EgressIP if egressPod is recreated within 60 seconds\n2105106 - co/node-tuning: Waiting for 15/72 Profiles to be applied\n2105146 - Degraded=True noise with: UpgradeBackupControllerDegraded: unable to retrieve cluster version, no completed update was found in cluster version status history\n2105167 - BuildConfig throws error when using a label with a / in it\n2105334 - vmware-vsphere-csi-driver-controller can\u0027t use host port error on e2e-vsphere-serial\n2105382 - Add a validation webhook for Nutanix machine provider spec in Machine API Operator\n2105468 - The ccoctl does not seem to know how to leverage the VMs service account to talk to GCP APIs. \n2105937 - telemeter golangci-lint outdated blocking ART PRs that update to Go1.18\n2106051 - Unable to deploy acm-ice using latest SRO 4.11 build\n2106058 - vSphere defaults to SecureBoot on; breaks installation of out-of-tree drivers [4.11.0]\n2106062 - [4.11] Bootimage bump tracker\n2106116 - IngressController spec.tuningOptions.healthCheckInterval validation allows invalid values such as \"0abc\"\n2106163 - Samples ImageStreams vs. registry.redhat.io: unsupported: V2 schema 1 manifest digests are no longer supported for image pulls\n2106313 - bond-cni: backport bond-cni GA items to 4.11\n2106543 - Typo in must-gather release-4.10\n2106594 - crud/other-routes.spec.ts Cypress test failing at a high rate in CI\n2106723 - [4.11] Upgrade from 4.11.0-rc0 -\u003e 4.11.0-rc.1 failed. 
rpm-ostree status shows No space left on device\n2106855 - [4.11.z] externalTrafficPolicy=Local is not working in local gateway mode if ovnkube-node is restarted\n2107493 - ReplicaSet prometheus-operator-admission-webhook has timed out progressing\n2107501 - metallb greenwave tests failure\n2107690 - Driver Container builds fail with \"error determining starting point for build: no FROM statement found\"\n2108175 - etcd backup seems to not be triggered in 4.10.18--\u003e4.10.20 upgrade\n2108617 - [oc adm release] extraction of the installer against a manifestlisted payload referenced by tag leads to a bad release image reference\n2108686 - rpm-ostreed: start limit hit easily\n2110505 - [Upgrade]deployment openshift-machine-api/machine-api-operator has a replica failure FailedCreate\n2110715 - openshift-controller-manager(-operator) namespace should clear run-level annotations\n2111055 - dummy bug for 4.10.z bz2110938\n\n5. References:\n\nhttps://access.redhat.com/security/cve/CVE-2018-25009\nhttps://access.redhat.com/security/cve/CVE-2018-25010\nhttps://access.redhat.com/security/cve/CVE-2018-25012\nhttps://access.redhat.com/security/cve/CVE-2018-25013\nhttps://access.redhat.com/security/cve/CVE-2018-25014\nhttps://access.redhat.com/security/cve/CVE-2018-25032\nhttps://access.redhat.com/security/cve/CVE-2019-5827\nhttps://access.redhat.com/security/cve/CVE-2019-13750\nhttps://access.redhat.com/security/cve/CVE-2019-13751\nhttps://access.redhat.com/security/cve/CVE-2019-17594\nhttps://access.redhat.com/security/cve/CVE-2019-17595\nhttps://access.redhat.com/security/cve/CVE-2019-18218\nhttps://access.redhat.com/security/cve/CVE-2019-19603\nhttps://access.redhat.com/security/cve/CVE-2019-20838\nhttps://access.redhat.com/security/cve/CVE-2020-13435\nhttps://access.redhat.com/security/cve/CVE-2020-14155\nhttps://access.redhat.com/security/cve/CVE-2020-17541\nhttps://access.redhat.com/security/cve/CVE-2020-19131\nhttps://access.redhat.com/security/cve/CVE-2020-24370\nhttp
s://access.redhat.com/security/cve/CVE-2020-28493\nhttps://access.redhat.com/security/cve/CVE-2020-35492\nhttps://access.redhat.com/security/cve/CVE-2020-36330\nhttps://access.redhat.com/security/cve/CVE-2020-36331\nhttps://access.redhat.com/security/cve/CVE-2020-36332\nhttps://access.redhat.com/security/cve/CVE-2021-3481\nhttps://access.redhat.com/security/cve/CVE-2021-3580\nhttps://access.redhat.com/security/cve/CVE-2021-3634\nhttps://access.redhat.com/security/cve/CVE-2021-3672\nhttps://access.redhat.com/security/cve/CVE-2021-3695\nhttps://access.redhat.com/security/cve/CVE-2021-3696\nhttps://access.redhat.com/security/cve/CVE-2021-3697\nhttps://access.redhat.com/security/cve/CVE-2021-3737\nhttps://access.redhat.com/security/cve/CVE-2021-4115\nhttps://access.redhat.com/security/cve/CVE-2021-4156\nhttps://access.redhat.com/security/cve/CVE-2021-4189\nhttps://access.redhat.com/security/cve/CVE-2021-20095\nhttps://access.redhat.com/security/cve/CVE-2021-20231\nhttps://access.redhat.com/security/cve/CVE-2021-20232\nhttps://access.redhat.com/security/cve/CVE-2021-23177\nhttps://access.redhat.com/security/cve/CVE-2021-23566\nhttps://access.redhat.com/security/cve/CVE-2021-23648\nhttps://access.redhat.com/security/cve/CVE-2021-25219\nhttps://access.redhat.com/security/cve/CVE-2021-31535\nhttps://access.redhat.com/security/cve/CVE-2021-31566\nhttps://access.redhat.com/security/cve/CVE-2021-36084\nhttps://access.redhat.com/security/cve/CVE-2021-36085\nhttps://access.redhat.com/security/cve/CVE-2021-36086\nhttps://access.redhat.com/security/cve/CVE-2021-36087\nhttps://access.redhat.com/security/cve/CVE-2021-38185\nhttps://access.redhat.com/security/cve/CVE-2021-38593\nhttps://access.redhat.com/security/cve/CVE-2021-40528\nhttps://access.redhat.com/security/cve/CVE-2021-41190\nhttps://access.redhat.com/security/cve/CVE-2021-41617\nhttps://access.redhat.com/security/cve/CVE-2021-42771\nhttps://access.redhat.com/security/cve/CVE-2021-43527\nhttps://access.redhat.com/security/
cve/CVE-2021-43818\nhttps://access.redhat.com/security/cve/CVE-2021-44225\nhttps://access.redhat.com/security/cve/CVE-2021-44906\nhttps://access.redhat.com/security/cve/CVE-2022-0235\nhttps://access.redhat.com/security/cve/CVE-2022-0778\nhttps://access.redhat.com/security/cve/CVE-2022-1012\nhttps://access.redhat.com/security/cve/CVE-2022-1215\nhttps://access.redhat.com/security/cve/CVE-2022-1271\nhttps://access.redhat.com/security/cve/CVE-2022-1292\nhttps://access.redhat.com/security/cve/CVE-2022-1586\nhttps://access.redhat.com/security/cve/CVE-2022-1621\nhttps://access.redhat.com/security/cve/CVE-2022-1629\nhttps://access.redhat.com/security/cve/CVE-2022-1706\nhttps://access.redhat.com/security/cve/CVE-2022-1729\nhttps://access.redhat.com/security/cve/CVE-2022-2068\nhttps://access.redhat.com/security/cve/CVE-2022-2097\nhttps://access.redhat.com/security/cve/CVE-2022-21698\nhttps://access.redhat.com/security/cve/CVE-2022-22576\nhttps://access.redhat.com/security/cve/CVE-2022-23772\nhttps://access.redhat.com/security/cve/CVE-2022-23773\nhttps://access.redhat.com/security/cve/CVE-2022-23806\nhttps://access.redhat.com/security/cve/CVE-2022-24407\nhttps://access.redhat.com/security/cve/CVE-2022-24675\nhttps://access.redhat.com/security/cve/CVE-2022-24903\nhttps://access.redhat.com/security/cve/CVE-2022-24921\nhttps://access.redhat.com/security/cve/CVE-2022-25313\nhttps://access.redhat.com/security/cve/CVE-2022-25314\nhttps://access.redhat.com/security/cve/CVE-2022-26691\nhttps://access.redhat.com/security/cve/CVE-2022-26945\nhttps://access.redhat.com/security/cve/CVE-2022-27191\nhttps://access.redhat.com/security/cve/CVE-2022-27774\nhttps://access.redhat.com/security/cve/CVE-2022-27776\nhttps://access.redhat.com/security/cve/CVE-2022-27782\nhttps://access.redhat.com/security/cve/CVE-2022-28327\nhttps://access.redhat.com/security/cve/CVE-2022-28733\nhttps://access.redhat.com/security/cve/CVE-2022-28734\nhttps://access.redhat.com/security/cve/CVE-2022-28735\nhttps://acces
s.redhat.com/security/cve/CVE-2022-28736\nhttps://access.redhat.com/security/cve/CVE-2022-28737\nhttps://access.redhat.com/security/cve/CVE-2022-29162\nhttps://access.redhat.com/security/cve/CVE-2022-29810\nhttps://access.redhat.com/security/cve/CVE-2022-29824\nhttps://access.redhat.com/security/cve/CVE-2022-30321\nhttps://access.redhat.com/security/cve/CVE-2022-30322\nhttps://access.redhat.com/security/cve/CVE-2022-30323\nhttps://access.redhat.com/security/cve/CVE-2022-32250\nhttps://access.redhat.com/security/updates/classification/#important\n\n6. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. Bugs fixed (https://bugzilla.redhat.com/):\n\n1928937 - CVE-2021-23337 nodejs-lodash: command injection via template\n1928954 - CVE-2020-28500 nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions\n2054663 - CVE-2022-0512 nodejs-url-parse: authorization bypass through user-controlled key\n2057442 - CVE-2022-0639 npm-url-parse: Authorization Bypass Through User-Controlled Key\n2060018 - CVE-2022-0686 npm-url-parse: Authorization bypass through user-controlled key\n2060020 - CVE-2022-0691 npm-url-parse: authorization bypass through user-controlled key\n2085307 - CVE-2022-1650 eventsource: Exposure of Sensitive Information\n2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read\n\n5. 
\n\nThis advisory contains the following OpenShift Virtualization 4.11.0\nimages:\n\nRHEL-8-CNV-4.11\n==============hostpath-provisioner-container-v4.11.0-21\nkubevirt-tekton-tasks-operator-container-v4.11.0-29\nkubevirt-template-validator-container-v4.11.0-17\nbridge-marker-container-v4.11.0-26\nhostpath-csi-driver-container-v4.11.0-21\ncluster-network-addons-operator-container-v4.11.0-26\novs-cni-marker-container-v4.11.0-26\nvirtio-win-container-v4.11.0-16\novs-cni-plugin-container-v4.11.0-26\nkubemacpool-container-v4.11.0-26\nhostpath-provisioner-operator-container-v4.11.0-24\ncnv-containernetworking-plugins-container-v4.11.0-26\nkubevirt-ssp-operator-container-v4.11.0-54\nvirt-cdi-uploadserver-container-v4.11.0-59\nvirt-cdi-cloner-container-v4.11.0-59\nvirt-cdi-operator-container-v4.11.0-59\nvirt-cdi-importer-container-v4.11.0-59\nvirt-cdi-uploadproxy-container-v4.11.0-59\nvirt-cdi-controller-container-v4.11.0-59\nvirt-cdi-apiserver-container-v4.11.0-59\nkubevirt-tekton-tasks-modify-vm-template-container-v4.11.0-7\nkubevirt-tekton-tasks-create-vm-from-template-container-v4.11.0-7\nkubevirt-tekton-tasks-copy-template-container-v4.11.0-7\ncheckup-framework-container-v4.11.0-67\nkubevirt-tekton-tasks-cleanup-vm-container-v4.11.0-7\nkubevirt-tekton-tasks-disk-virt-sysprep-container-v4.11.0-7\nkubevirt-tekton-tasks-wait-for-vmi-status-container-v4.11.0-7\nkubevirt-tekton-tasks-disk-virt-customize-container-v4.11.0-7\nvm-network-latency-checkup-container-v4.11.0-67\nkubevirt-tekton-tasks-create-datavolume-container-v4.11.0-7\nhyperconverged-cluster-webhook-container-v4.11.0-95\ncnv-must-gather-container-v4.11.0-62\nhyperconverged-cluster-operator-container-v4.11.0-95\nkubevirt-console-plugin-container-v4.11.0-83\nvirt-controller-container-v4.11.0-105\nvirt-handler-container-v4.11.0-105\nvirt-operator-container-v4.11.0-105\nvirt-launcher-container-v4.11.0-105\nvirt-artifacts-server-container-v4.11.0-105\nvirt-api-container-v4.11.0-105\nlibguestfs-tools-container-v4.11.
0-105\nhco-bundle-registry-container-v4.11.0-587\n\nSecurity Fix(es):\n\n* golang: net/http: limit growth of header canonicalization cache\n(CVE-2021-44716)\n\n* kubeVirt: Arbitrary file read on the host from KubeVirt VMs\n(CVE-2022-1798)\n\n* golang: out-of-bounds read in golang.org/x/text/language leads to DoS\n(CVE-2021-38561)\n\n* golang: syscall: don\u0027t close fd 0 on ForkExec error (CVE-2021-44717)\n\n* prometheus/client_golang: Denial of service using\nInstrumentHandlerCounter (CVE-2022-21698)\n\n* golang: math/big: uncontrolled memory consumption due to an unhandled\noverflow via Rat.SetString (CVE-2022-23772)\n\n* golang: cmd/go: misinterpretation of branch names can lead to incorrect\naccess control (CVE-2022-23773)\n\n* golang: crypto/elliptic: IsOnCurve returns true for invalid field\nelements (CVE-2022-23806)\n\n* golang: encoding/pem: fix stack overflow in Decode (CVE-2022-24675)\n\n* golang: regexp: stack exhaustion via a deeply nested expression\n(CVE-2022-24921)\n\n* golang: crash in a golang.org/x/crypto/ssh server (CVE-2022-27191)\n\n* golang: crypto/elliptic: panic caused by oversized scalar\n(CVE-2022-28327)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n4. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1937609 - VM cannot be restarted\n1945593 - Live migration should be blocked for VMs with host devices\n1968514 - [RFE] Add cancel migration action to virtctl\n1993109 - CNV MacOS Client not signed\n1994604 - [RFE] - Add a feature to virtctl to print out a message if virtctl is a different version than the server side\n2001385 - no \"name\" label in virt-operator pod\n2009793 - KBase to clarify nested support status is missing\n2010318 - with sysprep config data as cfgmap volume and as cdrom disk a windows10 VMI fails to LiveMigrate\n2025276 - No permissions when trying to clone to a different namespace (as Kubeadmin)\n2025401 - [TEST ONLY] [CNV+OCS/ODF] Virtualization poison pill implemenation\n2026357 - Migration in sequence can be reported as failed even when it succeeded\n2029349 - cluster-network-addons-operator does not serve metrics through HTTPS\n2030801 - CVE-2021-44716 golang: net/http: limit growth of header canonicalization cache\n2030806 - CVE-2021-44717 golang: syscall: don\u0027t close fd 0 on ForkExec error\n2031857 - Add annotation for URL to download the image\n2033077 - KubeVirtComponentExceedsRequestedMemory Prometheus Rule is Failing to Evaluate\n2035344 - kubemacpool-mac-controller-manager not ready\n2036676 - NoReadyVirtController and NoReadyVirtOperator are never triggered\n2039976 - Pod stuck in \"Terminating\" state when removing VM with kernel boot and container disks\n2040766 - A crashed Windows VM cannot be restarted with virtctl or the UI\n2041467 - [SSP] Support custom DataImportCron creating in custom namespaces\n2042402 - LiveMigration with postcopy misbehave when failure occurs\n2042809 - sysprep disk requires autounattend.xml if an unattend.xml exists\n2045086 - KubeVirtComponentExceedsRequestedMemory Prometheus Rule is Failing to Evaluate\n2045880 - CVE-2022-21698 prometheus/client_golang: Denial of service using InstrumentHandlerCounter\n2047186 - When entering to a RH supported 
template, it changes the project (namespace) to \u201cOpenShift\u201d\n2051899 - 4.11.0 containers\n2052094 - [rhel9-cnv] VM fails to start, virt-handler error msg: Couldn\u0027t configure ip nat rules\n2052466 - Event does not include reason for inability to live migrate\n2052689 - Overhead Memory consumption calculations are incorrect\n2053429 - CVE-2022-23806 golang: crypto/elliptic: IsOnCurve returns true for invalid field elements\n2053532 - CVE-2022-23772 golang: math/big: uncontrolled memory consumption due to an unhandled overflow via Rat.SetString\n2053541 - CVE-2022-23773 golang: cmd/go: misinterpretation of branch names can lead to incorrect access control\n2056467 - virt-template-validator pods getting scheduled on the same node\n2057157 - [4.10.0] HPP-CSI-PVC fails to bind PVC when node fqdn is long\n2057310 - qemu-guest-agent does not report information due to selinux denials\n2058149 - cluster-network-addons-operator deployment\u0027s MULTUS_IMAGE is pointing to brew image\n2058925 - Must-gather: for vms with longer name, gather_vms_details fails to collect qemu, dump xml logs\n2059121 - [CNV-4.11-rhel9] virt-handler pod CrashLoopBackOff state\n2060485 - virtualMachine with duplicate interfaces name causes MACs to be rejected by Kubemacpool\n2060585 - [SNO] Failed to find the virt-controller leader pod\n2061208 - Cannot delete network Interface if VM has multiqueue for networking enabled. 
\n2061723 - Prevent new DataImportCron to manage DataSource if multiple DataImportCron pointing to same DataSource\n2063540 - [CNV-4.11] Authorization Failed When Cloning Source Namespace\n2063792 - No DataImportCron for CentOS 7\n2064034 - On an upgraded cluster NetworkAddonsConfig seems to be reconciling in a loop\n2064702 - CVE-2022-27191 golang: crash in a golang.org/x/crypto/ssh server\n2064857 - CVE-2022-24921 golang: regexp: stack exhaustion via a deeply nested expression\n2064936 - Migration of vm from VMware reports pvc not large enough\n2065014 - Feature Highlights in CNV 4.10 contains links to 4.7\n2065019 - \"Running VMs per template\" in the new overview tab counts VMs that are not running\n2066768 - [CNV-4.11-HCO] User Cannot List Resource \"namespaces\" in API group\n2067246 - [CNV]: Unable to ssh to Virtual Machine post changing Flavor tiny to custom\n2069287 - Two annotations for VM Template provider name\n2069388 - [CNV-4.11] kubemacpool-mac-controller - TLS handshake error\n2070366 - VM Snapshot Restore hangs indefinitely when backed by a snapshotclass\n2070864 - non-privileged user cannot see catalog tiles\n2071488 - \"Migrate Node to Node\" is confusing. \n2071549 - [rhel-9] unable to create a non-root virt-launcher based VM\n2071611 - Metrics documentation generators are missing metrics/recording rules\n2071921 - Kubevirt RPM is not being built\n2073669 - [rhel-9] VM fails to start\n2073679 - [rhel-8] VM fails to start: missing virt-launcher-monitor downstream\n2073982 - [CNV-4.11-RHEL9] \u0027virtctl\u0027 binary fails with \u0027rc1\u0027 with \u0027virtctl version\u0027 command\n2074337 - VM created from registry cannot be started\n2075200 - VLAN filtering cannot be configured with Intel X710\n2075409 - [CNV-4.11-rhel9] hco-operator and hco-webhook pods CrashLoopBackOff\n2076292 - Upgrade from 4.10.1-\u003e4.11 using nightly channel, is not completing with error \"could not complete the upgrade process. 
KubeVirt is not with the expected version. Check KubeVirt observed version in the status field of its CR\"\n2076379 - must-gather: ruletables and qemu logs collected as a part of gather_vm_details scripts are zero bytes file\n2076790 - Alert SSPDown is constantly in Firing state\n2076908 - clicking on a template in the Running VMs per Template card leads to 404\n2077688 - CVE-2022-24675 golang: encoding/pem: fix stack overflow in Decode\n2077689 - CVE-2022-28327 golang: crypto/elliptic: panic caused by oversized scalar\n2078700 - Windows template boot source should be blank\n2078703 - [RFE] Please hide the user defined password when customizing cloud-init\n2078709 - VM conditions column have wrong key/values\n2078728 - Common template rootDisk is not named correctly\n2079366 - rootdisk is not able to edit\n2079674 - Configuring preferred node affinity in the console results in wrong yaml and unschedulable VM\n2079783 - Actions are broken in topology view\n2080132 - virt-launcher logs live migration in nanoseconds if the migration is stuck\n2080155 - [RFE] Provide the progress of VM migration in the source virt launcher pod\n2080547 - Metrics kubevirt_hco_out_of_band_modifications_count, does not reflect correct modification count when label is added to priorityclass/kubevirt-cluster-critical in a loop\n2080833 - Missing cloud init script editor in the scripts tab\n2080835 - SSH key is set using cloud init script instead of new api\n2081182 - VM SSH command generated by UI points at api VIP\n2081202 - cloud-init for Windows VM generated with corrupted \"undefined\" section\n2081409 - when viewing a common template details page, user need to see the message \"can\u0027t edit common template\" on all tabs\n2081671 - SSH service created outside the UI is not discoverable\n2081831 - [RFE] Improve disk hotplug UX\n2082008 - LiveMigration fails due to loss of connection to destination host\n2082164 - Migration progress timeout expects absolute progress\n2082912 - 
[CNV-4.11] HCO Being Unable to Reconcile State\n2083093 - VM overview tab is crashed\n2083097 - \u201cMount Windows drivers disk\u201d should not show when the template is not \u201cwindows\u201d\n2083100 - Something keeps loading in the \u201cnode selector\u201d modal\n2083101 - \u201cRestore default settings\u201d never become available while editing CPU/Memory\n2083135 - VM fails to schedule with vTPM in spec\n2083256 - SSP Reconcile logging improvement when CR resources are changed\n2083595 - [RFE] Disable VM descheduler if the VM is not live migratable\n2084102 - [e2e] Many elements are lacking proper selector like \u0027data-test-id\u0027 or \u0027data-test\u0027\n2084122 - [4.11]Clone from filesystem to block on storage api with the same size fails\n2084418 - \u201cInvalid SSH public key format\u201d appears when drag ssh key file to \u201cAuthorized SSH Key\u201d field\n2084431 - User credentials for ssh is not in correct format\n2084476 - The Virtual Machine Authorized SSH Key is not shown in the scripts tab. \n2091406 - wrong template namespace label when creating a vm with wizard\n2091754 - Scheduling and scripts tab should be editable while the VM is running\n2091755 - Change bottom \"Save\" to \"Apply\" on cloud-init script form\n2091756 - The root disk of cloned template should be editable\n2091758 - \"OS\" should be \"Operating system\" in template filter\n2091760 - The provider should be empty if it\u0027s not set during cloning\n2091761 - Miss \"Edit labels\" and \"Edit annotations\" in template kebab button\n2091762 - Move notification above the tabs in template details page\n2091764 - Clone a template should lead to the template details\n2091765 - \"Edit bootsource\" is keeping in load in template actions dropdown\n2091766 - \"Are you sure you want to leave this page?\" pops up when click the \"Templates\" link\n2091853 - On Snapshot tab of single VM \"Restore\" button should move to the kebab actions together with the Delete\n2091863 - BootSource edit modal should list affected templates\n2091868 - 
Catalog list view has two columns named \"BootSource\"\n2091889 - Devices should be editable for customize template\n2091897 - username is missing in the generated ssh command\n2091904 - VM is not started if adding \"Authorized SSH Key\" during vm creation\n2091911 - virt-launcher pod remains as NonRoot after LiveMigrating VM from NonRoot to Root\n2091940 - SSH is not enabled in vm details after restart the VM\n2091945 - delete a template should lead to templates list\n2091946 - Add disk modal shows wrong units\n2091982 - Got a lot of \"Reconciler error\" in cdi-deployment log after adding custom DataImportCron to hco\n2092048 - When Boot from CD is checked in customized VM creation - Disk source should be Blank\n2092052 - Virtualization should be omitted in Calatog breadcrumbs\n2092071 - Getting started card in Virtualization overview can not be hidden. \n2092079 - Error message stays even when problematic field is dismissed\n2092158 - PrometheusRule kubevirt-hyperconverged-prometheus-rule is not getting reconciled by HCO\n2092228 - Ensure Machine Type for new VMs is 8.6\n2092230 - [RFE] Add indication/mark to deprecated template\n2092306 - VM is stucking with WaitingForVolumeBinding if creating via \"Boot from CD\"\n2092337 - os is empty in VM details page\n2092359 - [e2e] data-test-id includes all pvc name\n2092654 - [RFE] No obvious way to delete the ssh key from the VM\n2092662 - No url example for rhel and windows template\n2092663 - no hyperlink for URL example in disk source \"url\"\n2092664 - no hyperlink to the cdi uploadproxy URL\n2092781 - Details card should be removed for non admins. \n2092783 - Top consumers\u0027 card should be removed for non admins. \n2092787 - Operators links should be removed from Getting started card\n2092789 - \"Learn more about Operators\" link should lead to the Red Hat documentation\n2092951 - \u201cEdit BootSource\u201d 
action should have more explicit information when disabled\n2093282 - Remove links to \u0027all-namespaces/\u0027 for non-privileged user\n2093691 - Creation flow drawer left padding is broken\n2093713 - Required fields in creation flow should be highlighted if empty\n2093715 - Optional parameters section in creation flow is missing bottom padding\n2093716 - CPU|Memory modal button should say \"Restore template settings\"\n2093772 - Add a service in environment it reminds a pending change in boot order\n2093773 - Console crashed if adding a service without serial number\n2093866 - Cannot create vm from the template `vm-template-example`\n2093867 - OS for template \u0027vm-template-example\u0027 should matching the version of the image\n2094202 - Cloud-init username field should have hint\n2094207 - Cloud-init password field should have auto-generate option\n2094208 - SSH key input is missing validation\n2094217 - YAML view should reflect shanges in SSH form\n2094222 - \"?\" icon should be placed after red asterisk in required fields\n2094323 - Workload profile should be editable in template details page\n2094405 - adding resource on enviornment isnt showing on disks list when vm is running\n2094440 - Utilization pie charts figures are not based on current data\n2094451 - PVC selection in VM creation flow does not work for non-priv user\n2094453 - CD Source selection in VM creation flow is missing Upload option\n2094465 - Typo in Source tooltip\n2094471 - Node selector modal for non-privileged user\n2094481 - Tolerations modal for non-privileged user\n2094486 - Add affinity rule modal\n2094491 - Affinity rules modal button\n2094495 - Descheduler modal has same text in two lines\n2094646 - [e2e] Elements on scheduling tab are missing proper data-test-id\n2094665 - Dedicated Resources modal for non-privileged user\n2094678 - Secrets and ConfigMaps can\u0027t be added to Windows VM\n2094727 - Creation flow should have VM info in header row\n2094807 - hardware devices 
dropdown has group title even with no devices in cluster\n2094813 - Cloudinit password is seen in wizard\n2094848 - Details card on Overview page - \u0027View details\u0027 link is missing\n2095125 - OS is empty in the clone modal\n2095129 - \"undefined\" appears in rootdisk line in clone modal\n2095224 - affinity modal for non-privileged users\n2095529 - VM migration cancelation in kebab action should have shorter name\n2095530 - Column sizes in VM list view\n2095532 - Node column in VM list view is visible to non-privileged user\n2095537 - Utilization card information should display pie charts as current data and sparkline charts as overtime\n2095570 - Details tab of VM should not have Node info for non-privileged user\n2095573 - Disks created as environment or scripts should have proper label\n2095953 - VNC console controls layout\n2095955 - VNC console tabs\n2096166 - Template \"vm-template-example\" is binding with namespace \"default\"\n2096206 - Inconsistent capitalization in Template Actions\n2096208 - Templates in the catalog list is not sorted\n2096263 - Incorrectly displaying units for Disks size or Memory field in various places\n2096333 - virtualization overview, related operators title is not aligned\n2096492 - Cannot create vm from a cloned template if its boot source is edited\n2096502 - \"Restore template settings\" should be removed from template CPU editor\n2096510 - VM can be created without any disk\n2096511 - Template shows \"no Boot Source\" and label \"Source available\" at the same time\n2096620 - in templates list, edit boot reference kebab action opens a modal with different title\n2096781 - Remove boot source provider while edit boot source reference\n2096801 - vnc thumbnail in virtual machine overview should be active on page load\n2096845 - Windows template\u0027s scripts tab is crashed\n2097328 - virtctl guestfs shouldn\u0027t required uid = 0\n2097370 - missing titles for optional parameters in wizard customization page\n2097465 - 
Count is not updating for \u0027prometheusrule\u0027 component when metrics kubevirt_hco_out_of_band_modifications_count executed\n2097586 - AccessMode should stay on ReadWriteOnce while editing a disk with storage class HPP\n2098134 - \"Workload profile\" column is not showing completely in template list\n2098135 - Workload is not showing correct in catalog after change the template\u0027s workload\n2098282 - Javascript error when changing boot source of custom template to be an uploaded file\n2099443 - No \"Quick create virtualmachine\" button for template \u0027vm-template-example\u0027\n2099533 - ConsoleQuickStart for HCO CR\u0027s VM is missing\n2099535 - The cdi-uploadproxy certificate url should be opened in a new tab\n2099539 - No storage option for upload while editing a disk\n2099566 - Cloudinit should be replaced by cloud-init in all places\n2099608 - \"DynamicB\" shows in vm-example disk size\n2099633 - Doc links needs to be updated\n2099639 - Remove user line from the ssh command section\n2099802 - Details card link shouldn\u0027t be hard-coded\n2100054 - Windows VM with WSL2 guest fails to migrate\n2100284 - Virtualization overview is crashed\n2100415 - HCO is taking too much time for reconciling kubevirt-plugin deployment\n2100495 - CVE-2021-38561 golang: out-of-bounds read in golang.org/x/text/language leads to DoS\n2101164 - [dark mode] Number of alerts in Alerts card not visible enough in dark mode\n2101192 - AccessMode should stay on ReadWriteOnce while editing a disk with storage class HPP\n2101430 - Using CLOUD_USER_PASSWORD in Templates parameters breaks VM review page\n2101454 - Cannot add PVC boot source to template in \u0027Edit Boot Source Reference\u0027 view as a non-priv user\n2101485 - Cloudinit should be replaced by cloud-init in all places\n2101628 - non-priv user cannot load dataSource while edit template\u0027s rootdisk\n2101954 - [4.11]Smart clone and csi clone leaves tmp unbound PVC and ObjectTransfer\n2102076 - Using 
CLOUD_USER_PASSWORD in Templates parameters breaks VM review page\n2102116 - [e2e] elements on Template Scheduling tab are missing proper data-test-id\n2102117 - [e2e] elements on VM Scripts tab are missing proper data-test-id\n2102122 - non-priv user cannot load dataSource while edit template\u0027s rootdisk\n2102124 - Cannot add PVC boot source to template in \u0027Edit Boot Source Reference\u0027 view as a non-priv user\n2102125 - vm clone modal is displaying DV size instead of PVC size\n2102127 - Cannot add NIC to VM template as non-priv user\n2102129 - All templates are labeling \"source available\" in template list page\n2102131 - The number of hardware devices is not correct in vm overview tab\n2102135 - [dark mode] Number of alerts in Alerts card not visible enough in dark mode\n2102143 - vm clone modal is displaying DV size instead of PVC size\n2102256 - Add button moved to right\n2102448 - VM disk is deleted by uncheck \"Delete disks (1x)\" on delete modal\n2102543 - Add button moved to right\n2102544 - VM disk is deleted by uncheck \"Delete disks (1x)\" on delete modal\n2102545 - VM filter has two \"Other\" checkboxes which are triggered together\n2104617 - Storage status report \"OpenShift Data Foundation is not available\" even the operator is installed\n2106175 - All pages are crashed after visit Virtualization -\u003e Overview\n2106258 - All pages are crashed after visit Virtualization -\u003e Overview\n2110178 - [Docs] Text repetition in Virtual Disk Hot plug instructions\n2111359 - kubevirt plugin console is crashed after creating a vm with 2 nics\n2111562 - kubevirt plugin console crashed after visit vmi page\n2117872 - CVE-2022-1798 kubeVirt: Arbitrary file read on the host from KubeVirt VMs\n\n5", "sources": [ { "db": "NVD", "id": "CVE-2020-14155" }, { "db": "VULHUB", "id": "VHN-167005" }, { "db": "PACKETSTORM", "id": "161245" }, { "db": "PACKETSTORM", "id": "165099" }, { "db": "PACKETSTORM", "id": "164927" }, { "db": "PACKETSTORM", "id": 
"165862" }, { "db": "PACKETSTORM", "id": "165758" }, { "db": "PACKETSTORM", "id": "167206" }, { "db": "PACKETSTORM", "id": "168042" }, { "db": "PACKETSTORM", "id": "168352" }, { "db": "PACKETSTORM", "id": "168392" } ], "trust": 1.8 }, "exploit_availability": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/exploit_availability#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "reference": "https://www.scap.org.cn/vuln/vhn-167005", "trust": 0.1, "type": "unknown" } ], "sources": [ { "db": "VULHUB", "id": "VHN-167005" } ] }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "NVD", "id": "CVE-2020-14155", "trust": 2.0 }, { "db": "PACKETSTORM", "id": "161245", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "168352", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "165862", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "165099", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "168392", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "164927", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "165758", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "167206", "trust": 0.2 }, { "db": "CNVD", "id": "CNVD-2020-53121", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "165135", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "165096", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "165296", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "166051", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "167956", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "166308", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "165286", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "160545", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "164928", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "166489", "trust": 0.1 }, { "db": 
"PACKETSTORM", "id": "165287", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "165631", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "164967", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "165002", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "165288", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "165129", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "164825", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "168036", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "165209", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "166309", "trust": 0.1 }, { "db": "CNNVD", "id": "CNNVD-202006-1036", "trust": 0.1 }, { "db": "VULHUB", "id": "VHN-167005", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "168042", "trust": 0.1 } ], "sources": [ { "db": "VULHUB", "id": "VHN-167005" }, { "db": "PACKETSTORM", "id": "161245" }, { "db": "PACKETSTORM", "id": "165099" }, { "db": "PACKETSTORM", "id": "164927" }, { "db": "PACKETSTORM", "id": "165862" }, { "db": "PACKETSTORM", "id": "165758" }, { "db": "PACKETSTORM", "id": "167206" }, { "db": "PACKETSTORM", "id": "168042" }, { "db": "PACKETSTORM", "id": "168352" }, { "db": "PACKETSTORM", "id": "168392" }, { "db": "NVD", "id": "CVE-2020-14155" } ] }, "id": "VAR-202006-0222", "iot": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": true, "sources": [ { "db": "VULHUB", "id": "VHN-167005" } ], "trust": 0.01 }, "last_update_date": "2024-07-23T20:28:59.964000Z", "problemtype_data": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "problemtype": "CWE-190", "trust": 1.1 } ], "sources": [ { "db": "VULHUB", "id": "VHN-167005" }, { "db": "NVD", "id": "CVE-2020-14155" } ] }, "references": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/references#", "data": { "@container": "@list" }, 
"sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 1.1, "url": "https://security.netapp.com/advisory/ntap-20221028-0010/" }, { "trust": 1.1, "url": "https://about.gitlab.com/releases/2020/07/01/security-release-13-1-2-release/" }, { "trust": 1.1, "url": "https://support.apple.com/kb/ht211931" }, { "trust": 1.1, "url": "https://support.apple.com/kb/ht212147" }, { "trust": 1.1, "url": "http://seclists.org/fulldisclosure/2020/dec/32" }, { "trust": 1.1, "url": "http://seclists.org/fulldisclosure/2021/feb/14" }, { "trust": 1.1, "url": "https://bugs.gentoo.org/717920" }, { "trust": 1.1, "url": "https://www.oracle.com/security-alerts/cpuapr2022.html" }, { "trust": 1.1, "url": "https://www.pcre.org/original/changelog.txt" }, { "trust": 1.0, "url": "https://lists.apache.org/thread.html/rf9fa47ab66495c78bb4120b0754dd9531ca2ff0430f6685ac9b07772%40%3cdev.mina.apache.org%3e" }, { "trust": 0.9, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20838" }, { "trust": 0.8, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14155" }, { "trust": 0.7, "url": "https://access.redhat.com/security/cve/cve-2020-14155" }, { "trust": 0.7, "url": "https://access.redhat.com/security/cve/cve-2019-20838" }, { "trust": 0.7, "url": "https://bugzilla.redhat.com/):" }, { "trust": 0.7, "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce" }, { "trust": 0.7, "url": "https://access.redhat.com/security/team/contact/" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2020-24370" }, { "trust": 0.6, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17594" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2019-17594" }, { "trust": 0.6, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-5827" }, { "trust": 0.6, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19603" }, { "trust": 0.6, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13750" }, { "trust": 0.6, "url": 
"https://access.redhat.com/security/cve/cve-2019-17595" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2021-36085" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2019-19603" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2019-13750" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2021-20231" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2021-3580" }, { "trust": 0.6, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13751" }, { "trust": 0.6, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17595" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2021-36087" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2019-13751" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2020-13435" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2021-36086" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2021-20232" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2019-18218" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2019-5827" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2021-36084" }, { "trust": 0.6, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-18218" }, { "trust": 0.5, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13435" }, { "trust": 0.5, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-24370" }, { "trust": 0.4, "url": "https://access.redhat.com/security/updates/classification/#moderate" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-12762" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22876" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3800" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-33574" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20231" }, { 
"trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3445" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22925" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3200" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-22876" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-16135" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22898" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20266" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2020-16135" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-20266" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-27645" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-22925" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-22898" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-35942" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2020-12762" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-28153" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-33560" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20232" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3712" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-4189" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-24407" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-1271" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-2097" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3634" }, { "trust": 0.3, "url": "https://access.redhat.com/security/updates/classification/#important" }, { "trust": 0.3, "url": 
"https://access.redhat.com/security/cve/cve-2022-2068" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-25313" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25032" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-29824" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-23177" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3737" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-1292" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-40528" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-25219" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-31566" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-25314" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2018-25032" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-23841" }, { "trust": 0.2, "url": "https://docs.openshift.com/container-platform/latest/migration_toolkit_for_containers/installing-mtc.html" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23841" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-23840" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3778" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3796" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23840" }, { "trust": 0.2, "url": "https://access.redhat.com/articles/11258" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3445" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-33574" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-29923" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3200" }, { "trust": 0.2, "url": 
"https://nvd.nist.gov/vuln/detail/cve-2021-33560" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-42574" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-27645" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-28153" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-29923" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-28327" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-27776" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-1586" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-27774" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-20095" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-1629" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-24921" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-38185" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-27191" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-35492" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-23772" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-1621" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-27782" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-42771" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-21698" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-22576" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-17541" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-23806" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-43527" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-4115" }, 
{ "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-31535" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-28493" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-23773" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-24675" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-0778" }, { "trust": 0.1, "url": "https://lists.apache.org/thread.html/rf9fa47ab66495c78bb4120b0754dd9531ca2ff0430f6685ac9b07772@%3cdev.mina.apache.org%3e" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-1742" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-1757" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-1753" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-1751" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-27945" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-1744" }, { "trust": 0.1, "url": "https://support.apple.com/ht212147." 
}, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-29633" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-1737" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-1736" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-1738" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-1754" }, { "trust": 0.1, "url": "https://www.apple.com/support/security/pgp/" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-27904" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-25709" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15358" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-29608" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-1745" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-27938" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-1743" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-1758" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-27937" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-29614" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-1750" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-1746" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-1747" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-1741" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-27218" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33938" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3757" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33930" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14145" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33928" }, { "trust": 
0.1, "url": "https://access.redhat.com/errata/rhsa-2021:4848" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-37750" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-27218" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-22947" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22946" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-20673" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3948" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-20673" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3733" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22947" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-14145" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33929" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-36222" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3620" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-22946" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-26691" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13950" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-26690" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-17567" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35452" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-26691" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-26690" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:4614" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-30641" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30641" }, { "trust": 0.1, "url": 
"https://nvd.nist.gov/vuln/detail/cve-2019-17567" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-13950" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-35452" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3712" }, { "trust": 0.1, "url": "https://access.redhat.com/security/team/key/" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:0434" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/4.8/html/serverless/index" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/4.6/html/serverless/index" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/4.9/html/serverless/index" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3580" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-39293" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-38297" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/openshift_container_platform/4.7/html/serverless/index" }, { "trust": 0.1, "url": "https://issues.jboss.org/):" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/latest/distr_tracing/distr_tracing_install/distr-tracing-updating.html" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3426" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/latest/distr_tracing/distributed-tracing-release-notes.html" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:0318" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-36221" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3572" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3426" }, { "trust": 0.1, "url": 
"https://launchpad.net/ubuntu/+source/pcre3/2:8.39-12ubuntu0.1" }, { "trust": 0.1, "url": "https://ubuntu.com/security/notices/usn-5425-1" }, { "trust": 0.1, "url": "https://launchpad.net/ubuntu/+source/pcre3/2:8.39-13ubuntu0.22.04.1" }, { "trust": 0.1, "url": "https://launchpad.net/ubuntu/+source/pcre3/2:8.39-13ubuntu0.21.10.1" }, { "trust": 0.1, "url": "https://launchpad.net/ubuntu/+source/pcre3/2:8.39-9ubuntu0.1" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-44225" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0235" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-32250" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-41617" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-43818" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:5068" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-36331" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-26945" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-38593" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-25014" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-25009" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.11/release_notes/ocp-4-11-release-notes.html" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3481" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-19131" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3696" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23648" }, { "trust": 0.1, "url": "https://github.com/util-linux/util-linux/commit/eab90ef8d4f66394285e0cff1dfc0a27242c05aa" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-4156" }, { "trust": 0.1, "url": 
"https://access.redhat.com/errata/rhsa-2022:5069" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-28733" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25013" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-29162" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-36330" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25012" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-25010" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25009" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3672" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-28736" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-30321" }, { "trust": 0.1, "url": "https://10.0.0.7:2379" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-25012" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3697" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1706" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-28734" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.11/updating/updating-cluster-cli.html" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-28737" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-30322" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-44906" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3695" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25010" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-28735" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1215" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1729" }, { 
"trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-36332" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-41190" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25014" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-29810" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-26691" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-24903" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1012" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23566" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-25013" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-30323" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-15586" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-8559" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-30629" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1785" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1897" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1927" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-2526" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-29154" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0691" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-28500" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0686" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-32206" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-32208" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-16845" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2021-23337" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0639" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:6429" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-30631" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-16845" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0512" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15586" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1650" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:6526" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-38561" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35492" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1798" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-44717" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-44716" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-17541" } ], "sources": [ { "db": "VULHUB", "id": "VHN-167005" }, { "db": "PACKETSTORM", "id": "161245" }, { "db": "PACKETSTORM", "id": "165099" }, { "db": "PACKETSTORM", "id": "164927" }, { "db": "PACKETSTORM", "id": "165862" }, { "db": "PACKETSTORM", "id": "165758" }, { "db": "PACKETSTORM", "id": "167206" }, { "db": "PACKETSTORM", "id": "168042" }, { "db": "PACKETSTORM", "id": "168352" }, { "db": "PACKETSTORM", "id": "168392" }, { "db": "NVD", "id": "CVE-2020-14155" } ] }, "sources": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "VULHUB", "id": "VHN-167005" }, { "db": "PACKETSTORM", "id": "161245" }, { "db": "PACKETSTORM", "id": "165099" }, { "db": "PACKETSTORM", "id": "164927" }, { "db": "PACKETSTORM", "id": "165862" }, { "db": "PACKETSTORM", "id": 
"165758" }, { "db": "PACKETSTORM", "id": "167206" }, { "db": "PACKETSTORM", "id": "168042" }, { "db": "PACKETSTORM", "id": "168352" }, { "db": "PACKETSTORM", "id": "168392" }, { "db": "NVD", "id": "CVE-2020-14155" } ] }, "sources_release_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2020-06-15T00:00:00", "db": "VULHUB", "id": "VHN-167005" }, { "date": "2021-02-02T16:06:51", "db": "PACKETSTORM", "id": "161245" }, { "date": "2021-11-30T14:44:48", "db": "PACKETSTORM", "id": "165099" }, { "date": "2021-11-11T14:53:11", "db": "PACKETSTORM", "id": "164927" }, { "date": "2022-02-04T17:26:39", "db": "PACKETSTORM", "id": "165862" }, { "date": "2022-01-28T14:33:13", "db": "PACKETSTORM", "id": "165758" }, { "date": "2022-05-17T17:25:20", "db": "PACKETSTORM", "id": "167206" }, { "date": "2022-08-10T15:56:22", "db": "PACKETSTORM", "id": "168042" }, { "date": "2022-09-13T15:42:14", "db": "PACKETSTORM", "id": "168352" }, { "date": "2022-09-15T14:20:18", "db": "PACKETSTORM", "id": "168392" }, { "date": "2020-06-15T17:15:10.777000", "db": "NVD", "id": "CVE-2020-14155" } ] }, "sources_update_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2022-12-03T00:00:00", "db": "VULHUB", "id": "VHN-167005" }, { "date": "2024-03-27T16:04:48.863000", "db": "NVD", "id": "CVE-2020-14155" } ] }, "title": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/title#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Apple Security Advisory 2021-02-01-1", "sources": [ { "db": "PACKETSTORM", "id": "161245" } ], "trust": 0.1 }, "type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "code execution", "sources": [ { 
"db": "PACKETSTORM", "id": "165099" }, { "db": "PACKETSTORM", "id": "168352" } ], "trust": 0.2 } }
cve-2024-21990
Vulnerability from cvelistv5
Vendor | Product | Version
---|---|---
NetApp | ONTAP Select Deploy administration utility | 9.12.1
{ "containers": { "adp": [ { "affected": [ { "cpes": [ "cpe:2.3:o:netapp:clustered_data_ontap:9.12.1:-:*:*:*:*:*:*" ], "defaultStatus": "unknown", "product": "clustered_data_ontap", "vendor": "netapp", "versions": [ { "status": "affected", "version": "9.12.1" } ] }, { "cpes": [ "cpe:2.3:o:netapp:clustered_data_ontap:9.13.1:-:*:*:*:*:*:*" ], "defaultStatus": "unknown", "product": "clustered_data_ontap", "vendor": "netapp", "versions": [ { "status": "affected", "version": "9.13.1" } ] }, { "cpes": [ "cpe:2.3:o:netapp:clustered_data_ontap:9.14.0:-:*:*:*:*:*:*" ], "defaultStatus": "unknown", "product": "clustered_data_ontap", "vendor": "netapp", "versions": [ { "status": "affected", "version": "9.14.0" } ] } ], "metrics": [ { "other": { "content": { "id": "CVE-2024-21990", "options": [ { "Exploitation": "none" }, { "Automatable": "no" }, { "Technical Impact": "partial" } ], "role": "CISA Coordinator", "timestamp": "2024-04-23T15:34:29.517252Z", "version": "2.0.3" }, "type": "ssvc" } } ], "providerMetadata": { "dateUpdated": "2024-06-04T17:37:30.646Z", "orgId": "134c704f-9b21-4f2e-91b3-4a467353bcc0", "shortName": "CISA-ADP" }, "title": "CISA ADP Vulnrichment" }, { "providerMetadata": { "dateUpdated": "2024-08-01T22:35:34.847Z", "orgId": "af854a3a-2127-422b-91ae-364da2661108", "shortName": "CVE" }, "references": [ { "tags": [ "x_transferred" ], "url": "https://security.netapp.com/advisory/ntap-20240411-0002/" } ], "title": "CVE Program Container" } ], "cna": { "affected": [ { "defaultStatus": "unaffected", "product": "ONTAP Select Deploy administration utility", "vendor": "NetApp", "versions": [ { "lessThanOrEqual": "9.14.1P2", "status": "affected", "version": "9.12.1", "versionType": "patch" } ] } ], "descriptions": [ { "lang": "en", "supportingMedia": [ { "base64": false, "type": "text/html", "value": "ONTAP Select Deploy administration utility versions 9.12.1.x, \n9.13.1.x and 9.14.1.x contain hard-coded credentials that could allow an\n attacker to view Deploy 
configuration information and modify the \naccount credentials.\n\n\n\n\u003cbr\u003e" } ], "value": "ONTAP Select Deploy administration utility versions 9.12.1.x, \n9.13.1.x and 9.14.1.x contain hard-coded credentials that could allow an\n attacker to view Deploy configuration information and modify the \naccount credentials.\n\n\n\n\n" } ], "metrics": [ { "cvssV3_1": { "attackComplexity": "LOW", "attackVector": "NETWORK", "availabilityImpact": "NONE", "baseScore": 5.4, "baseSeverity": "MEDIUM", "confidentialityImpact": "LOW", "integrityImpact": "LOW", "privilegesRequired": "LOW", "scope": "UNCHANGED", "userInteraction": "NONE", "vectorString": "CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:L/I:L/A:N", "version": "3.1" }, "format": "CVSS", "scenarios": [ { "lang": "en", "value": "GENERAL" } ] } ], "problemTypes": [ { "descriptions": [ { "cweId": "CWE-259", "description": "CWE-259", "lang": "en", "type": "CWE" } ] } ], "providerMetadata": { "dateUpdated": "2024-04-17T19:35:23.599Z", "orgId": "11fdca00-0482-4c88-a206-37f9c182c87d", "shortName": "netapp" }, "references": [ { "url": "https://security.netapp.com/advisory/ntap-20240411-0002/" } ], "source": { "advisory": "NTAP-20240411-0002", "discovery": "UNKNOWN" }, "title": "Default Privileged Account Credentials Vulnerability in ONTAP Select Deploy administration utility", "x_generator": { "engine": "Vulnogram 0.1.0-dev" } } }, "cveMetadata": { "assignerOrgId": "11fdca00-0482-4c88-a206-37f9c182c87d", "assignerShortName": "netapp", "cveId": "CVE-2024-21990", "datePublished": "2024-04-17T19:35:23.599Z", "dateReserved": "2024-01-03T19:45:25.346Z", "dateUpdated": "2024-08-01T22:35:34.847Z", "state": "PUBLISHED" }, "dataType": "CVE_RECORD", "dataVersion": "5.1" }
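Each record above carries a CVSS v3.1 vector string alongside its base score (e.g. `CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:L/I:L/A:N`, score 5.4, for CVE-2024-21990). As a minimal sketch of how such a vector maps to the published score, the standard CVSS v3.1 base formula can be applied; this sketch handles Scope:Unchanged vectors only (Scope:Changed uses different PR weights and a different impact formula):

```python
import math

# CVSS v3.1 metric weights for Scope:Unchanged vectors.
WEIGHTS = {
    "AV": {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20},
    "AC": {"L": 0.77, "H": 0.44},
    "PR": {"N": 0.85, "L": 0.62, "H": 0.27},
    "UI": {"N": 0.85, "R": 0.62},
    "C": {"H": 0.56, "L": 0.22, "N": 0.0},
    "I": {"H": 0.56, "L": 0.22, "N": 0.0},
    "A": {"H": 0.56, "L": 0.22, "N": 0.0},
}

def base_score(vector: str) -> float:
    """Compute the CVSS v3.1 base score for a Scope:Unchanged vector."""
    parts = dict(p.split(":") for p in vector.split("/")[1:])
    if parts["S"] != "U":
        raise ValueError("this sketch handles Scope:Unchanged only")
    iss = 1 - ((1 - WEIGHTS["C"][parts["C"]])
               * (1 - WEIGHTS["I"][parts["I"]])
               * (1 - WEIGHTS["A"][parts["A"]]))
    impact = 6.42 * iss
    exploitability = (8.22 * WEIGHTS["AV"][parts["AV"]]
                           * WEIGHTS["AC"][parts["AC"]]
                           * WEIGHTS["PR"][parts["PR"]]
                           * WEIGHTS["UI"][parts["UI"]])
    if impact <= 0:
        return 0.0
    # CVSS "roundup": smallest one-decimal value >= the raw score.
    return math.ceil(min(impact + exploitability, 10) * 10) / 10

# Vector published for CVE-2024-21990:
print(base_score("CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:L/I:L/A:N"))  # 5.4
```

The same function reproduces the 8.1 score attached to CVE-2024-21989's vector (`CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:H/A:H`).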
cve-2019-17272
Vulnerability from cvelistv5
URL | Tags
---|---
https://security.netapp.com/advisory/ntap-20191121-0002/ | x_refsource_CONFIRM

Vendor | Product | Version
---|---|---
NetApp | ONTAP Select Deploy administration utility | All versions
{ "containers": { "adp": [ { "providerMetadata": { "dateUpdated": "2024-08-05T01:33:17.328Z", "orgId": "af854a3a-2127-422b-91ae-364da2661108", "shortName": "CVE" }, "references": [ { "tags": [ "x_refsource_CONFIRM", "x_transferred" ], "url": "https://security.netapp.com/advisory/ntap-20191121-0002/" } ], "title": "CVE Program Container" } ], "cna": { "affected": [ { "product": "ONTAP Select Deploy administration utility", "vendor": "NetApp", "versions": [ { "status": "affected", "version": "All versions" } ] } ], "descriptions": [ { "lang": "en", "value": "All versions of ONTAP Select Deploy administration utility are susceptible to a vulnerability which when successfully exploited could allow an administrative user to escalate their privileges." } ], "problemTypes": [ { "descriptions": [ { "description": "Privilege Escalation", "lang": "en", "type": "text" } ] } ], "providerMetadata": { "dateUpdated": "2019-11-21T15:40:03", "orgId": "11fdca00-0482-4c88-a206-37f9c182c87d", "shortName": "netapp" }, "references": [ { "tags": [ "x_refsource_CONFIRM" ], "url": "https://security.netapp.com/advisory/ntap-20191121-0002/" } ], "x_legacyV4Record": { "CVE_data_meta": { "ASSIGNER": "security-alert@netapp.com", "ID": "CVE-2019-17272", "STATE": "PUBLIC" }, "affects": { "vendor": { "vendor_data": [ { "product": { "product_data": [ { "product_name": "ONTAP Select Deploy administration utility", "version": { "version_data": [ { "version_value": "All versions" } ] } } ] }, "vendor_name": "NetApp" } ] } }, "data_format": "MITRE", "data_type": "CVE", "data_version": "4.0", "description": { "description_data": [ { "lang": "eng", "value": "All versions of ONTAP Select Deploy administration utility are susceptible to a vulnerability which when successfully exploited could allow an administrative user to escalate their privileges." 
} ] }, "problemtype": { "problemtype_data": [ { "description": [ { "lang": "eng", "value": "Privilege Escalation" } ] } ] }, "references": { "reference_data": [ { "name": "https://security.netapp.com/advisory/ntap-20191121-0002/", "refsource": "CONFIRM", "url": "https://security.netapp.com/advisory/ntap-20191121-0002/" } ] } } } }, "cveMetadata": { "assignerOrgId": "11fdca00-0482-4c88-a206-37f9c182c87d", "assignerShortName": "netapp", "cveId": "CVE-2019-17272", "datePublished": "2019-11-21T15:40:03", "dateReserved": "2019-10-07T00:00:00", "dateUpdated": "2024-08-05T01:33:17.328Z", "state": "PUBLISHED" }, "dataType": "CVE_RECORD", "dataVersion": "5.1" }
cve-2019-5509
Vulnerability from cvelistv5
URL | Tags
---|---
https://security.netapp.com/advisory/ntap-20191121-0001/ | x_refsource_CONFIRM

Vendor | Product | Version
---|---|---
NetApp | ONTAP Select Deploy administration utility | 2.11.2 through 2.12.2
{ "containers": { "adp": [ { "providerMetadata": { "dateUpdated": "2024-08-04T20:01:50.830Z", "orgId": "af854a3a-2127-422b-91ae-364da2661108", "shortName": "CVE" }, "references": [ { "tags": [ "x_refsource_CONFIRM", "x_transferred" ], "url": "https://security.netapp.com/advisory/ntap-20191121-0001/" } ], "title": "CVE Program Container" } ], "cna": { "affected": [ { "product": "ONTAP Select Deploy administration utility", "vendor": "NetApp", "versions": [ { "status": "affected", "version": "2.11.2 through 2.12.2" } ] } ], "descriptions": [ { "lang": "en", "value": "ONTAP Select Deploy administration utility versions 2.11.2 through 2.12.2 are susceptible to a code injection vulnerability which when successfully exploited could allow an unauthenticated remote attacker to enable and use a privileged user account." } ], "problemTypes": [ { "descriptions": [ { "description": "Remote Code Injection", "lang": "en", "type": "text" } ] } ], "providerMetadata": { "dateUpdated": "2019-11-21T15:33:19", "orgId": "11fdca00-0482-4c88-a206-37f9c182c87d", "shortName": "netapp" }, "references": [ { "tags": [ "x_refsource_CONFIRM" ], "url": "https://security.netapp.com/advisory/ntap-20191121-0001/" } ], "x_legacyV4Record": { "CVE_data_meta": { "ASSIGNER": "security-alert@netapp.com", "ID": "CVE-2019-5509", "STATE": "PUBLIC" }, "affects": { "vendor": { "vendor_data": [ { "product": { "product_data": [ { "product_name": "ONTAP Select Deploy administration utility", "version": { "version_data": [ { "version_value": "2.11.2 through 2.12.2" } ] } } ] }, "vendor_name": "NetApp" } ] } }, "data_format": "MITRE", "data_type": "CVE", "data_version": "4.0", "description": { "description_data": [ { "lang": "eng", "value": "ONTAP Select Deploy administration utility versions 2.11.2 through 2.12.2 are susceptible to a code injection vulnerability which when successfully exploited could allow an unauthenticated remote attacker to enable and use a privileged user account." 
} ] }, "problemtype": { "problemtype_data": [ { "description": [ { "lang": "eng", "value": "Remote Code Injection" } ] } ] }, "references": { "reference_data": [ { "name": "https://security.netapp.com/advisory/ntap-20191121-0001/", "refsource": "CONFIRM", "url": "https://security.netapp.com/advisory/ntap-20191121-0001/" } ] } } } }, "cveMetadata": { "assignerOrgId": "11fdca00-0482-4c88-a206-37f9c182c87d", "assignerShortName": "netapp", "cveId": "CVE-2019-5509", "datePublished": "2019-11-21T15:33:19", "dateReserved": "2019-01-07T00:00:00", "dateUpdated": "2024-08-04T20:01:50.830Z", "state": "PUBLISHED" }, "dataType": "CVE_RECORD", "dataVersion": "5.1" }
cve-2024-21989
Vulnerability from cvelistv5
Vendor | Product | Version
---|---|---
NetApp | ONTAP Select Deploy administration utility | 9.12.1
{ "containers": { "adp": [ { "affected": [ { "cpes": [ "cpe:2.3:a:netapp:ontap_select_deploy_administration_utility:9.12.1:*:*:*:*:*:*:*" ], "defaultStatus": "unaffected", "product": "ontap_select_deploy_administration_utility", "vendor": "netapp", "versions": [ { "lessThanOrEqual": "9.14.1p2", "status": "affected", "version": "9.12.1", "versionType": "custom" } ] } ], "metrics": [ { "other": { "content": { "id": "CVE-2024-21989", "options": [ { "Exploitation": "none" }, { "Automatable": "no" }, { "Technical Impact": "total" } ], "role": "CISA Coordinator", "timestamp": "2024-04-18T20:34:47.966458Z", "version": "2.0.3" }, "type": "ssvc" } } ], "providerMetadata": { "dateUpdated": "2024-07-23T18:50:11.927Z", "orgId": "134c704f-9b21-4f2e-91b3-4a467353bcc0", "shortName": "CISA-ADP" }, "title": "CISA ADP Vulnrichment" }, { "providerMetadata": { "dateUpdated": "2024-08-01T22:35:34.729Z", "orgId": "af854a3a-2127-422b-91ae-364da2661108", "shortName": "CVE" }, "references": [ { "tags": [ "x_transferred" ], "url": "https://security.netapp.com/advisory/ntap-20240411-0001/" } ], "title": "CVE Program Container" } ], "cna": { "affected": [ { "defaultStatus": "unaffected", "product": "ONTAP Select Deploy administration utility", "vendor": "NetApp", "versions": [ { "lessThanOrEqual": "9.14.1P2", "status": "affected", "version": "9.12.1", "versionType": "patch" } ] } ], "descriptions": [ { "lang": "en", "supportingMedia": [ { "base64": false, "type": "text/html", "value": "ONTAP Select Deploy administration utility versions 9.12.1.x, \n9.13.1.x and 9.14.1.x are susceptible to a vulnerability which when \nsuccessfully exploited could allow a read-only user to escalate their \nprivileges.\n\n" } ], "value": "ONTAP Select Deploy administration utility versions 9.12.1.x, \n9.13.1.x and 9.14.1.x are susceptible to a vulnerability which when \nsuccessfully exploited could allow a read-only user to escalate their \nprivileges.\n\n" } ], "metrics": [ { "cvssV3_1": { "attackComplexity": 
"LOW", "attackVector": "NETWORK", "availabilityImpact": "HIGH", "baseScore": 8.1, "baseSeverity": "HIGH", "confidentialityImpact": "NONE", "integrityImpact": "HIGH", "privilegesRequired": "LOW", "scope": "UNCHANGED", "userInteraction": "NONE", "vectorString": "CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:H/A:H", "version": "3.1" }, "format": "CVSS", "scenarios": [ { "lang": "en", "value": "GENERAL" } ] } ], "problemTypes": [ { "descriptions": [ { "cweId": "CWE-269", "description": "CWE-269 Improper Privilege Management", "lang": "en", "type": "CWE" } ] } ], "providerMetadata": { "dateUpdated": "2024-04-17T19:32:34.598Z", "orgId": "11fdca00-0482-4c88-a206-37f9c182c87d", "shortName": "netapp" }, "references": [ { "url": "https://security.netapp.com/advisory/ntap-20240411-0001/" } ], "source": { "advisory": "NTAP-20240411-0001", "discovery": "UNKNOWN" }, "title": "Privilege Escalation Vulnerability in ONTAP Select Deploy administration utility", "x_generator": { "engine": "Vulnogram 0.1.0-dev" } } }, "cveMetadata": { "assignerOrgId": "11fdca00-0482-4c88-a206-37f9c182c87d", "assignerShortName": "netapp", "cveId": "CVE-2024-21989", "datePublished": "2024-04-17T19:32:34.598Z", "dateReserved": "2024-01-03T19:45:25.346Z", "dateUpdated": "2024-08-01T22:35:34.729Z", "state": "PUBLISHED" }, "dataType": "CVE_RECORD", "dataVersion": "5.1" }
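The cvelistv5 records above nest their affected-version data under `containers.cna.affected`, with ranges expressed via `version` plus an optional `lessThanOrEqual` bound. A small sketch of pulling those ranges out of a record; the inline JSON is a hypothetical record trimmed to just the fields the sketch reads, mirroring the shape of the CVE-2024-21989 entry:

```python
import json

# Hypothetical CVE v5 record trimmed to the fields this sketch reads.
record = json.loads("""
{
  "cveMetadata": {"cveId": "CVE-2024-21989"},
  "containers": {"cna": {"affected": [
    {"vendor": "NetApp",
     "product": "ONTAP Select Deploy administration utility",
     "versions": [{"version": "9.12.1",
                   "lessThanOrEqual": "9.14.1P2",
                   "status": "affected"}]}
  ]}}
}
""")

def affected_ranges(rec):
    """Yield (vendor, product, range) tuples for affected versions."""
    for entry in rec["containers"]["cna"]["affected"]:
        for v in entry.get("versions", []):
            if v.get("status") != "affected":
                continue
            rng = v["version"]
            if "lessThanOrEqual" in v:
                rng += " through " + v["lessThanOrEqual"]
            yield entry["vendor"], entry["product"], rng

for vendor, product, rng in affected_ranges(record):
    print(f"{vendor} {product}: {rng}")
```

Real records can also carry `lessThan`, `versionType`, and `defaultStatus` fields (visible in the JSON above); a production parser would need to honor those as well.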